Would be nice to get a 42U rack full...
How many VPS could it host, you think? And do you think it could be profitable at $7/month?
Just a funny thought; not being entirely serious here.
Comments
You're missing a RAID controller. And the chassis you've chosen can hold 8 drives.
Despite the allure of the 8-core E5-2680 processor, they're NOT worth the 400x markup over an E5-1620.
TBH, you could build three E3-1670 v2-powered nodes for roughly the same price, complete with RAID controllers and SSD cache for mechanical hard drives, and then not have all of your eggs in one basket, too.
Using PayPal on Nightly?
OH NO
@Damian: can't use softRAID? And doesn't that Supermicro have built-in RAID?
I was aiming for crazy density... that's why I would go for RAID10 SSD. You think I could fit 1,000 VPS on there?
Why don't you just get an E5-2650 (which I believe holds up to 512GB of RAM)?
I wouldn't even bother with that.
For 10K I could pack so much CPU and disk it would make that server look like a wrist watch.
You think that config would fit 1000 OpenVZ VPS... that is the goal :P
That's 42,000 VPS per rack. Well, let's say we need 2U for switches, so 40 servers and 2 switches, which is 40,000 VPS customers per rack... get 10 racks of space and that's 400,000 VPS in a single cage.
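For anyone who wants that back-of-envelope math spelled out, here's a throwaway sketch (assuming 1U nodes and the hoped-for 1,000-VPS-per-node density from above):

```python
# Back-of-envelope density math from the thread
# (assumes 1U nodes and the optimistic 1,000 VPS per node target)
RACK_UNITS = 42
SWITCH_UNITS = 2            # 2U reserved for switches
VPS_PER_NODE = 1000         # the target density being joked about
RACKS = 10

nodes_per_rack = RACK_UNITS - SWITCH_UNITS      # 40 servers
vps_per_rack = nodes_per_rack * VPS_PER_NODE    # 40,000 VPS
vps_per_cage = vps_per_rack * RACKS             # 400,000 VPS

print(f"{nodes_per_rack} nodes/rack, {vps_per_rack:,} VPS/rack, {vps_per_cage:,} VPS in the cage")
```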
If you're already spending $9.3k on a server, spend another $800 for an LSI 9265-8i controller :P
http://www.newegg.com/Product/Product.aspx?Item=N82E16816118160 highly recommended
Uggh! 1000 customers on a box.
Is this some sort of provider orgy to see who can pack the most customers in the least resources and have the biggest kaboom when it comes crashing down?
Ever wonder how you are going to reboot this node and what is going to happen when you do so? Not to mention power consumption.
@shovenose not worth it. I'd rather pick more servers instead of one big one.
Correction: the E5-2650 can hold up to 768GB of RAM.
With that much RAM, your biggest limiting factor will be IOPS.
@rsk: but why? It would save money in physical space. Imagine how many people you could host in one room?
@NHRoel that's why RAID10 in SSD
3TB of raw non-RAID storage... way less with RAID.
1000 customers? If you give them 1GB of space each.
But then again, I'd suspect you'd use some of the SSDs for pretending to be RAM. The ColoCrossing model.
Hm... it would give 1.5TB of storage. OK, well, we'll need bigger SSDs.
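To put rough numbers on that exchange, here's a quick sketch (assuming the 8-bay chassis mentioned earlier is filled with SSDs totalling about 3TB raw, run in RAID10):

```python
# Rough usable-space math from the thread
# (assumes ~3TB of raw SSD in the 8-bay chassis, configured as RAID10)
raw_tb = 3.0
raid10_usable_tb = raw_tb / 2      # RAID10 mirrors everything: half the raw space
customers = 1000

gb_each = raid10_usable_tb * 1000 / customers
print(f"{raid10_usable_tb} TB usable -> ~{gb_each:.1f} GB per customer")
```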
@shovenose saving physical space isn't the point here. Have you ever had a node crash? Have you seen how many clients submit tickets when that happens? With that many VPSes booting up as well, meh. Not worth it.
You can easily find good colo deals.
I'm just being theoretical here, not like "oh this is my plan in a week", so yeah... just trying to find something to host 1,000 VPS or 100,000 shared hosting customers LOL
You know, if you really have to ask how many VPSes your node can host, I'm not sure you should be in business.
Not every SSD offers the same level of performance.
http://en.wikipedia.org/wiki/IOPS
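Since the IOPS point keeps coming up, here's a purely illustrative per-VPS budget; the per-drive figure is an assumption (no SSD model was specified in the thread), it's only meant to show how thin things get at 1,000 VPS per node:

```python
# Hypothetical per-VPS IOPS budget (all figures illustrative, not from the thread)
drives = 8                      # the chassis holds 8 drives
iops_per_drive = 40_000         # assumed random-IOPS rating for a mid-range SATA SSD
raid10_write_penalty = 2        # each write lands on two mirrored drives
vps_count = 1000

read_iops = drives * iops_per_drive
write_iops = read_iops // raid10_write_penalty

print(f"~{read_iops // vps_count} read / ~{write_iops // vps_count} write IOPS per VPS")
```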
First goal: 400,000 customers.
WHY???????? Split that $10k into 5 decent $2K servers... Don't try to pack 1000 VPS on one server.
I think it is a terrible idea. Nothing against @shovenose.
1U units have no real space for drives. You need SSD + big spinning disks + RAID + large RAM.
So a 2U unit is the minimum for such a beast. You also need a real controller, if not a SAN-type solution in additional rack space.
Me, I'd avoid such a fatal learning experience. Horizontal scaling with smaller nodes is way better: more manageable, and more cost effective too.
@shovenose exactly!
I would get a nice camcorder, buy a big bowl of ASCII popcorn, sit back on my recliner and hit the reboot button once that node is loaded. Would be fun to watch.
Hahaha @NHRoel.
Nothing about that build makes any sense. It looks like you randomly picked a bunch of components that looked impressive to you.
That. And good luck getting 1,000 IPs on one node.
lol.
IPv6 only, I assume, then...
Haha @AnthonySmith, no chance; SolusVM doesn't support that out of the box.