FranciscoFrancisco Top Host, Host Rep, Veteran
edited November 2018 in Offers

Heyo!

It's been a year since our public feedback request for our block storage offerings, and we're happy to announce that Soon has finally come! We've only rolled out Vegas for the time being, but now that the initial launch is done, NY & LU will go much more smoothly.


BLOCK STORAGE SLABS!

  • $1.25/month per 256GB.
  • Increases in increments of 1TB after the first 1TB.
  • Maximum of 10TB per Volume.
  • Maximum of 8 Volumes attached to a single Virtual Server.
  • Storage is NVMe cached for both reads and writes (over 10TB of active caching!)
  • Storage is connected to all nodes over InfiniBand RDMA for local-storage-like performance!

ORDER BLOCK STORAGE SLABS HERE!


ALL SLABS REQUIRE A SLICE TO OPERATE!

SLICE 1024
- ¼ Core @ 3.50+ GHz
- Fair Share CPU Usage
- 1024 MB Memory
- 20 GB SSD Storage
- 1000Mbit Unmetered Bandwidth!
- 1 IPv4 Address

$3.50/month - Las Vegas / New York / Luxembourg

SLICE 2048
- ½ Core @ 3.50+ GHz
- Fair Share CPU Usage
- 2048 MB Memory
- 40 GB SSD Storage
- 1000Mbit Unmetered Bandwidth!
- 1 IPv4 Address

$7.00/month - Las Vegas / New York / Luxembourg


CN2 GIA IP addresses are also available for $3.00/month on all SLICE 2048s and higher.


We'd like to personally thank all of the beta testers who took part in this product rollout.

Francisco


Comments

  • Thanks @Francisco. Well worth the wait.

    How much storage did you build out? (I understand if you don't want to say for competitive reasons.)

  • FranciscoFrancisco Top Host, Host Rep, Veteran
    edited November 2018

    Weblogics said: How much storage did you build out? (I understand if you don't want to say for competitive reasons.)

    We have 500TB sellable.

    There's more than that provisioned but we had a lot of old storage customers that we converted over to this setup.

    The performance you'll experience comes from an already very busy cluster, so everything's well burned in.

    Francisco

    Thanked by 1willie
  • Great for setting up object storage!

    Thanked by 1Francisco
  • FranciscoFrancisco Top Host, Host Rep, Veteran

    @varunchopra said:
    Great for setting up object storage!

    Yep :) No problem shoving MinIO on it and enjoying.
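    For instance, a single-node MinIO instance pointed at a mounted slab is all it takes — a sketch, assuming the slab is formatted and mounted at /mnt/slab (the path and credentials here are placeholders, not from the thread):

    ```shell
    # Hypothetical mount point and credentials -- adjust to your setup.
    export MINIO_ROOT_USER=admin
    export MINIO_ROOT_PASSWORD='change-me-to-something-long'

    # Serve S3-compatible object storage from the slab's filesystem.
    minio server /mnt/slab --address :9000 --console-address :9001
    ```

    Any S3 client (rclone, s3cmd, the AWS CLI) can then talk to port 9000.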

    Francisco

    Thanked by 1ferri
  • What happens with the beta slices? Can we convert them to paid slices, or do we buy new slices and migrate the data?

  • FranciscoFrancisco Top Host, Host Rep, Veteran

    @willie said:
    What happens with the beta slices? Can we convert them to paid slices, or do we buy new slices and migrate the data?

    Your choice.

    Francisco

    Thanked by 1willie
  • Good to see it was finally launched. Waiting for them to appear in NY so I can migrate my old storage server. :smiley:

    Thanked by 1Francisco
  • gestiondbigestiondbi Member, Patron Provider

    Valid with any slice plan?

  • FranciscoFrancisco Top Host, Host Rep, Veteran

    @gestiondbi said:
    Valid with any slice plan?

    Yes sir, assuming it's in Vegas.

    Francisco

  • @Francisco

    Question on upgrading the slab size.

    For example, a 500 GB to 1 TB size? Once the 1 TB slab is ordered, does the 500 GB slab get auto resized or does it need to be deleted then the new 1 TB attached?

  • FranciscoFrancisco Top Host, Host Rep, Veteran

    @Weblogics said:
    @Francisco

    Question on upgrading the slab size.

    For example, a 500 GB to 1 TB size? Once the 1 TB slab is ordered, does the 500 GB slab get auto resized or does it need to be deleted then the new 1 TB attached?

    You would upgrade the volume in billing, you can pick a new size.

    The volume would be grown on our end, and then you use parted or similar to resize the partition and filesystem.
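    As a sketch of those in-VM steps, assuming the slab shows up as /dev/vdb with a single ext4 partition (the device name and filesystem are assumptions):

    ```shell
    # After the volume has been grown on the provider's side:
    sudo growpart /dev/vdb 1     # grow partition 1 to fill the device (cloud-utils)
    sudo resize2fs /dev/vdb1     # grow the ext4 filesystem online
    df -h /dev/vdb1              # confirm the new size
    ```

    Both steps work online for ext4; other filesystems have their own grow tools (e.g. xfs_growfs for XFS).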

    Francisco

    Thanked by 1Weblogics
  • Time for LU launch?

  • Great!
    Does it have redundancy to prevent data loss?
    Could it be expanded?
    Could it be mounted to Windows VM?

  • FranciscoFrancisco Top Host, Host Rep, Veteran

    @hiphiphip0 said:
    Great!
    Does it have redundancy to prevent data loss?
    Could it be expanded?
    Could it be mounted to Windows VM?

    Yes.

    Francisco

  • This is good, what redundancy method do you use?

    Use this with B2 to backup will be amazing :smiley: .

  • FranciscoFrancisco Top Host, Host Rep, Veteran

    @newzth said:
    This is good, what redundancy method do you use?

    Use this with B2 to backup will be amazing :smiley: .

    RAID10.

    Francisco

  • newzth said: Use this with B2 to backup will be amazing :smiley: .

    This is actually as cheap as (cheaper than?) B2 and far more functional.
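    Back-of-the-envelope, assuming B2's list price of $0.005 per GB-month at the time (that rate is an assumption, not from the thread):

    ```shell
    # Slab: $1.25 per 256 GB-month; B2 (assumed): $0.005 per GB-month.
    awk 'BEGIN {
      slab = 1.25 * (1024 / 256)   # $/TB-month for slabs (4 x 256GB units)
      b2   = 0.005 * 1024          # $/TB-month for B2 storage alone
      printf "slab $%.2f/TB vs b2 $%.2f/TB\n", slab, b2
    }'
    ```

    Raw storage prices come out roughly even; the required slice adds to the slab total, while B2 charges separately for downloads.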

  • Can't wait for this at LUX, how many iops are allocated?

  • FranciscoFrancisco Top Host, Host Rep, Veteran

    @lurch said:
    Can't wait for this at LUX, how many iops are allocated?

    We aren't doing any hard caps at the moment.

    I'm hoping to keep it that way where we just punish the people that decided they needed a 256GB swap file and are running a massive SQL server on it.

    @willie said:

    newzth said: Use this with B2 to backup will be amazing :smiley: .

    This is actually as cheap as (cheaper than?) B2 and far more functional.

    The VPS makes it more expensive, but you can do whatever you want with it, not just backups.

    Francisco

    Thanked by 1lurch
  • v3ngv3ng Member, Patron Provider

    Looks awesome! But I'd appreciate smaller slices e.g. 512MB RAM.

    Thanked by 1vimalware
  • FranciscoFrancisco Top Host, Host Rep, Veteran

    @v3ng said:
    Looks awesome! But I'd appreciate smaller slices e.g. 512MB RAM.

    We're not interested in splitting the plan any further, sorry!

    Francisco

    Thanked by 1JTR
  • @Francisco

    Is this iSCSI backend connecting to a zpool or HW RAID? I ask because running ZFS on ZFS is not recommended and should be mentioned to your clients.

    10TB of caching is quite excessive, since it will sit mostly empty given your users' access patterns. If you don't have the RAM to back that much disk-cache metadata, performance will also suffer.

    I hope you didn't enable deduplication.

  • FranciscoFrancisco Top Host, Host Rep, Veteran

    @techhelper1 said:
    @Francisco

    Is this iSCSI backend connecting to a zpool or HW RAID? I ask because running ZFS on ZFS is not recommended and should be mentioned to your clients.

    10TB caching is quite excessive since it will be mostly empty by your users patterns. If you don't have the RAM to back up the amount of disk caching data pointers, performance will also suffer.

    I hope you didn't enable deduplication.

    The only part you got right in this whole post is the 10TB of cache. The cache is slow-populating, so thrashing isn't a major concern, but it helps greatly for someone whacking at a database server or things like that. The person deciding to download from CacheFly won't really hit the cache and will go straight to rust.

    There's 1.25TB of RAM in the storage cluster for read buffering as well, which will take even more strain off the NVMe caching layer.

    Francisco

  • Francisco said: The only part you got right in this whole post is the 10TB of cache.

    I'd hope there is some form of data protection. :wink:

    For database use, it'd make more sense to use your shared SQL offering, which would allow more of your clients to spin up more VMs to interact with the same database.

    Francisco said: The person deciding to download cachefly won't really hit the cache and will go straight to rust.

    How would you know that for sure? Everything that is initially read from or written to this storage solution is going to be cached; depending on your deduplication setup (if it's in use), writes will just be confirmed, while reads get blasted straight over the wire.

    Francisco said: There's 1.25TB of RAM in the storage cluster for read buffering as well, which will take even more strain off the NVMe caching layer.

    With that kind of RAM in place, I highly doubt your disk caching will come into play at all, and if it does, it's only for a short period of time.

    In the end, I just see this as a cheap backup storage solution.

  • I think Fran would have thought about this and tested a lot before releasing it, and I wouldn't have any doubts about using this service.

  • @lurch I'm sure it has been tested and the overall product works, I have 0 problems with that.

  • techhelper1 said: In the end, I just see this as a cheap backup storage solution.

    I think it's overkill for that and Fran is going for higher performance. The previous storage product showed that is needed since people used it as media servers etc.

    I think there is still an unfilled niche for cheap backup storage in the US, along the lines of Hetzner's Storage Box, which is about due for a price drop. That's basically no-frills RAID 6 storage with scp access (no VPS). Slabs are a higher-end product with RAID 10, lots of caching, a block-device interface, etc. But even with all that, they're still priced competitively with more minimal solutions.

  • techhelper1techhelper1 Member
    edited November 2018

    willie said: Slabs are a higher-end product with RAID 10, lots of caching, a block-device interface, etc.

    RAID 10 is required for redundancy and speed; what else would you expect?

    For media streaming and backups, do you really think NVMe and 10TB+ worth of caching on the front end is required? I certainly don't. I can maybe see 4TB used, but for purposes outside what you have mentioned.

    I have personally done tests with ZFS where I can run 20 spinning disks with only ARC and L2ARC PCIe caching. I don't see a need for a big cache on writes when the data is regularly flushed out to the disks every 5 seconds or so (which is about 50GB of data with a redundant 40Gbit setup).

    What does it change whether it's a block device or accessible over the network? If you're running a media server over the Internet, you're still capped at the 1G speed or whatever is available on the server's bridge.

  • HarambeHarambe Member, Host Rep

    Torrents were really the issue with the old storage plans, dozens of VMs just slamming I/O. I get the feeling that Fran likes to overbuild these sorts of things on purpose, and/or has use cases beyond what's currently available in mind for these storage arrays.

  • Torrents or not, the I/O-slamming problem will always exist whether the storage is local or remote. The big issue is that one user can consume all the I/O of a node (local or remote) and make others suffer. The remote side will suffer longer, because hardware upgrades are needed to get more I/O, and if downtime is required, you'll lose access to your data and may not be able to recover automatically.
