Comments
It's not. There are different methods as well.
Yes. If 50 people stream the same file, the data would theoretically be fetched into the cache, and read from the slow disk storage only once.
Great for media servers, download sites, torrents etc…
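To illustrate the caching point above, here's a tiny Python sketch of the read-through idea (purely illustrative — real caches like ZFS ARC or bcache are far more sophisticated than this):

```python
# Read-through cache sketch: the first reader pays the slow backing-store
# read; the other 49 streams are served from memory.

cache = {}

def slow_disk_read(block_id):
    """Stand-in for a read from the spinning-disk backing store."""
    print(f"disk read for {block_id}")
    return b"x" * 4096

def cached_read(block_id):
    if block_id not in cache:
        cache[block_id] = slow_disk_read(block_id)  # happens only once per block
    return cache[block_id]

# 50 clients streaming the same block: the slow disk is hit exactly once.
for _ in range(50):
    cached_read("movie.mkv:block0")
```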
This is most likely not a ZFS setup - so caching algorithms will work totally differently.
It is a huge difference, mainly in latency. Infiniband is superior to Ethernet, even if on paper they may have the same maximum speed.
For basic cheap backup, raid-6 (like Hetzner Storagebox) or even no raid at all (like the now-scarce SYS ARM storage servers). What I'm inferring from slabs' use of raid-10 etc. is that they are built as a more versatile product than that. BuyVM's users have all kinds of applications in mind besides backup, so this is great for them.
You're wrong on that part. We get around 21.98 GBytes by iperf test on Black Storage, so since Fran is using 40Gbit Infiniband, he'd get around 11 to 20 GBytes by iperf, depending on what kind of card it is.
Is there any way to get additional cores without the memory and other things? GlusterFS really likes threads. Do the servers support private networking? I'd like to separate that traffic from my external network.
So we can use it as a regular filesystem, right?
Don't think it's possible. Would love a 4GB plan with 50% of 2 cores, but I think the plans are pretty set in stone.
Correct. I have mine formatted as ext4 then set to mount on boot.
Just ordered three VMs with nine slabs; going to do RAID-Z across three drives on three servers running GlusterFS. Excited to see how it turns out. Might even try an LVM stripe instead of RAID for kicks!
You'll be limited to 1gbit or so since that's the network port.
As mentioned I'm aiming to get the private lan over the infiniband fabric but will need some extra work before that's possible.
Francisco
Ah, gotcha! No worries. Is it a separate gigabit allocation on another interface? If so, that'll be perfectly fine (basically what I run everywhere else).
Cool to hear that's being looked into! We've done something similar where I work (not RDMA, something proprietary) but I understand how involved it can prove to be.
@Francisco
Are you having trouble with orders and/or activating the slabs? I ordered another slab last night and it has been stuck in pending for ~12 hours now.
This equipment is very pricey. I don't think that small-medium companies can afford this.
I also don't see the economics of it compared to classic storage. Maybe renting space there costs a lot, who knows... I don't understand the whole business model around this storage either; I don't think it's possible to sell storage this cheap while using such pricey technologies right now. It looks like either cheaper equipment was used, or someone will end up buried in debt long-term and start selling body parts to pay it off.
Billing is 9-5. Karen is on handling things now, sorry about that!
Francisco
On the contrary, stuff for 40gig (QDR) Infiniband is very cheap, since it has been obsolete for quite some time - just look on eBay.
100gig (EDR) Infiniband is the modern standard - and for that, the equipment is indeed pretty pricey.
In the eyes of the VPS it's just like any other harddrive.
You can format it however you want with whatever you want. Or if you're a baller you can go write an application that interacts at the block level.
Really, it's entirely up to you.
If anyone needs help formatting/partitioning, just ticket and I'll sort you out.
Francisco
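For anyone wondering what "interacting at the block level" would actually look like, here's a minimal Python sketch (the device path is just an example — check lsblk for the real one, and most people will be happier with a normal filesystem):

```python
import os

# Raw block-level read from a slab, bypassing any filesystem.
# "/dev/sdb" is an assumption for illustration; this also needs root.
DEV = "/dev/sdb"

fd = os.open(DEV, os.O_RDONLY)
try:
    first_4k = os.read(fd, 4096)   # read the first 4 KiB straight off the device
    print(f"read {len(first_4k)} bytes from {DEV}")
finally:
    os.close(fd)
```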
You're right, I got confused, thank you for clarification ^_~
Awesome! Will order asap when NY slice is available :-)
Of course, it's the latency, but what I'm saying is that if your application requires a resource from the Internet (like Plex) then the latency argument is tossed.
Yep. Just depends on if you want parity and where you want to place that calculation in the pipeline.
The conversion numbers are below, but since Fran said each block-storage user is locked to 1G, you're limited to roughly the sequential throughput of a mechanical hard drive reading at the outer edge of the platter, just with NVMe seek times.
(Edited the table below, forgot to factor in 8b/10b encoding.)
1G = 125MB/s
10Gbits (8Gbits) = 1.25GB/s (1GB/s)
40Gbits (32Gbits) = 5GB/s (4GB/s)
56Gbits (40/54Gbits depending on link type) = 7GB/s (5GB or 6.75GB/s)
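If you want to play with the conversions yourself, here's a quick Python sketch of the arithmetic behind that table (approximate; the encoding assumptions are mine, and these are line-rate ceilings rather than real-world throughput):

```python
# 10G/40G figures assume 8b/10b encoding (80% efficient);
# the FDR-56G figure assumes a 64b/66b link.

def raw_GBps(gbit):
    return gbit / 8            # bits -> bytes, ignoring encoding

def after_8b10b(gbit):
    return gbit * 0.8 / 8      # 8b/10b keeps 8 of every 10 bits on the wire

def after_64b66b(gbit):
    return gbit * 64 / 66 / 8

print(raw_GBps(1))             # 0.125 GB/s -> the 1G per-user cap
print(after_8b10b(10))         # 1.0 GB/s
print(after_8b10b(40))         # 4.0 GB/s  (QDR Infiniband)
print(after_64b66b(56))        # ~6.8 GB/s (FDR on a 64b/66b link)
```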
NY/LU will be next year at this point.
Waiting to get a good feel for how the platform works so we can make whatever improvements we need before I ship gear to remote locations.
Francisco
Think that was in reference to the guy setting up a storage cluster that would use the public network between VMs. Block storage isn't limited to 1Gbps; it easily pushes 500-700+MB/s.
Next year as in January?
That sounded pretty crazy though, because of the cross site latency. Was it just a "prove it can be done" thing? Or does it make more sense than it sounds?
Not really, since it's using QDR, in comparison to FDR and EDR.
Yes, it's something like:
QDR: 40Gig
FDR: 56Gig
EDR: 100Gig
Sounds like he's doing GlusterFS all within Vegas. Whether or not it makes sense: no clue. Probably more of a proof of concept thing than a production setup if I had to guess.
Got my GlusterFS+ZFS cluster deployed! Three 2GB VMs, each with three 1TB slabs in RAID-Z. 4TB of usable space out of 9TB (rough capacity math below). I can lose up to one drive in each array, and one whole node, before worrying about data loss! I can add more space and redundancy with more nodes/slabs too.
Now all I need is more network bandwidth
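For anyone following along, here's the capacity arithmetic, assuming the Gluster volume is dispersed 2+1 on top of the per-node RAID-Z pools (my guess at the layout, not confirmed):

```python
# Capacity math for the cluster above, assuming a dispersed (2+1)
# Gluster volume over three per-node RAID-Z pools.

slab_tb, slabs_per_node, nodes = 1, 3, 3

raw = slab_tb * slabs_per_node * nodes          # 9 TB raw
per_node = slab_tb * (slabs_per_node - 1)       # RAID-Z: one slab of parity -> 2 TB/node
usable = per_node * (nodes - 1)                 # dispersed 2+1: one node of redundancy -> 4 TB

print(raw, per_node, usable)                    # 9 2 4
```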
For those curious, setting this up in a cross-region configuration does make sense. GlusterFS has geo-replication, which allows automatic transfers between zones, either in realtime (with degraded performance) or on a schedule, I believe. I haven't deployed it yet, but I probably will when the slabs are available elsewhere.
It doesn't work this way. All the drives are virtual - so all 9 of your slabs are served from the same server and same set of drives anyway…
@jilay - so whats your opinion/feedback on the product so far? Liking it?
Francisco
No.
Our storage is clustered, so while volumes don't replicate between nodes, there are 5 large nodes, each with multiple arrays, that make up the cluster as a whole. Users are spread out over everything.
Francisco
Fran mentioned a storage cluster, so I'd assume there are multiple nodes serving slabs. I don't have a way to confirm this, but I'd assume so. Also, they're using RDMA so it's not a far jump to believe there's multi-pathing involved somewhere.
Edit:
He confirmed, see above. In this scenario, what I said holds. Assuming all of my slabs are reasonably spread across their storage nodes and arrays, this setup is pretty resilient. I'd like a little more usable space, so I may switch to simply striping the drives. However, that would only really survive losing one VM out of three, or two out of five.
It's working well! Will need to give it some time to see how ZFS likes the drives and how the internal network holds up over time. So far it's going well; eager for the day when there's more internal bandwidth!
I'll keep everyone posted on how that goes
Francisco