New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
BuyVM - Allegation of Trouble, Lies, Slabs, Hosts Servers in Basement
This discussion has been closed.
Comments
Just saying.
Interesting. You are correct that I'd be quicker to jump on people who have repeatedly lied to my face for the last 2 years than on one of the most generous people I've ever met. I'll give you that. It's kind of like how you'd attack some idiot at the market a lot faster than you'd attack your own mother, assuming you had a good relationship with her. Of course it takes more for people to jump on someone they know has been good to them than on someone they've repeatedly watched treat people like trash for 2 years.
But if you think just being in colocrossing means I'll jump on someone for anything, anytime they're mentioned, go tell me how many times I've given BlueVM shit here. Be fair about it if you're going to go there.
The architecture Xen uses (the hypervisor dispatching ISRs to dom0/domU through event channels) adds round-trip latencies which expose race conditions in a lot of Linux's drivers.
Part of the big work in paravirt_ops was creating faster locks to tighten those race windows -- so-called hypervisor-assisted locks, or pvlocks/pvticketlocks. That has mostly solved the problem, but again, a lot of bitchy controllers like Adaptec's are not very tolerant of the higher round-trip latencies.
I'm just giving you what I have as I go along; at some point I'll get the log everyone here is dying for. Either way, you can take it how you please. My interest here is not to turn anyone against BuyVM/Francisco, it's to kindly nudge him in the truthful direction. It may take some prodding, but we'll get there.
Let me correct you: Fran gave us an 8GB VMware slab when we had to move out of Codero. 1 slab, not 2, nor 3, not more. The fact that he gave us a full 8GB slab on VMware proves by itself that he allocated more than 8GB to a physical node. Do you think he put just us on that physical node and was losing money? Let's be realistic here.
@123Systems
Again, incorrect timeline. The only time we had 16GB boxes was when we bought the gear after the first 5 nodes. We tried vmware on them only to find that they fell over too easily.
"The great build out" was the end of that all. For about 3 months leading up to it the L5420's had maybe 50 users each due to contention issues.
Notice I was still talking about RO's. We told most of our RO's to take a hike when they started bringing floods.
I/O oversell is always the noose in any setup. RAM isn't a big deal if you know your users like the back of your hand.
@kaniini - Learn something new every day. As I said, I never liked Xen all that much. VMware had a lot of problems where vmware-tools would either not compile correctly against new kernels or would just... stop working, and things would grind hard. SharedSQL (the RO version, not the BuyVM+ edition) was on VMware but had major iowait issues, even at pretty small IOPS (~100 IOPS).
This was all back in ESX 4, though.
Francisco
Cool, look forward to some good reading
@Francisco, how many physical nodes did you have in the CC DC in Buffalo, and how many were in Batavia?
As I recall it, Francisco helped 123Systems out a little back when Andrew was targeting the eAthena/rAthena community (Ragnarok Online server reimplementations) with VPSes for that sort of thing. eAthena does really well with memory deduplication because a lot of the data held in memory is large tables and maps. Everyone involved in that type of hosting was doing it by then.
I can't remember which plan it was but with the 3 lines we bonded we had like ~125mbit.
It wasn't in any facility; it was just at one of Aldryic's family members' places. It wasn't a pretty setup, just a rack we mounted in.
Francisco
Looking for a new business model?
@kaniini, no, I am curious if he can tell the truth.
ALL of the chat logs I have given thus far are from 2010; I even have 2010 logs from when you gave us the VMware slab.
http://web.archive.org/web/20120104183534/http://v2.lowendtalk.com/questions/11038/the-great-buyvm-build-out
You didn't do the build out until 2011.
Edit: Nevermind.
Do you have a reference for the truth? Like a value to compare?
Only 125mbps? I think I misjudged the amount of bandwidth actually required.
Incorrect. Andrew was working with a fellow named 'Ben' on a brand back then. That brand went deadpool.
We used small boxes (Q6600 w/ 8GB RAM usually) for the eAthena boxes and just used Virtuozzo. Virtuozzo did their version of vswap, which helped. I don't think VMware 3.5 did dedup at the time, since the newer CPUs that helped with transparent pages weren't out yet.
Let's see...
Router, 2 KVM nodes, and 5 OVZ nodes were in 350, as well as an NFS box. There was also a Force10 switch.
There were 14 - 15 over in Batavia.
Francisco
@NickM I'm sure he didn't disclose that to his customers, hahaha. "By the way, you guys are hosted out of a basement." Very transparent there.
My thoughts exactly.
I know, right? The lies.
What's up Kevin?
It could work, if you did your filtering on the ColoCrossing side of the link.
It was about that; I'm sure it could spike higher. Inbound was a lot quicker since it was 3 lines, and it worked fine.
You're right, and I'll take my lumps for that.
Yes, and a lot of the 2nd wave boxes were very hit or miss, which is why we didn't push it much. We left ourselves sold out for longer periods during it because we had nothing but headaches.
When we moved the racks to SJC we more or less had a rack of empty nodes that we repurposed later for KVM.
Fair enough. If it works, it works
When customers wondered why their throughput sucked, because you had them on a 50mbit home business connection... did you blame ColoCrossing? Considering 75% of your NY presence was in a home basement, and 25% was in a real datacenter (ColoCrossing)?
Not really. The setup we had wasn't, like, in some damp, sketchy basement. It was in an office space that had a nice AC unit and was well vented, and I had a few UPS units to help. It wasn't amazing, but it did the trick while we figured things out.
We had talked with Jon before doing this to get more power but after how some meetings had gone, no one in the company was going to approve it.
No offense, but that datacenter is in a mall. I don't think you can really make that argument.
No because they never had issues. Again, users were balanced based on their work load at the time.
The only time we ever had speed issues at CC was surprisingly when things moved to L3. That was comcast related, though, so we just put it off as "well that's comcast".
Honestly, all of our Choopa setup only peaks like 300mbit/sec, total. That includes the slew of users that use us for proxying of sorts (filtering, netflix, etc).
Now I feel like I'm missing something here. Like I walked in on a conversation that started before I got here. So the BuyVM buffalo nodes weren't all in CC or even Buffalo?
Have you been inside? No. It's as much of a real data center as any other data center I've been in all over the country. In fact, it's actually nicer than many I have been in. It's not even an actual mall. You are talking one floor out of 26 floors. Some of Buffalo's most prestigious businesses have office space in that building..... you guys make me laugh.
It's almost time you asked for your chocolates..
I knew it :-D
Right. Batavia is like...10 - 15 minutes away from Buffalo? I honestly didn't count the minutes when we drove for pickup.
What ended up happening was that when we started first talking to Jon we were all really happy with things. Jon mentioned that he had a job opening if I knew anyone skilled that would like to be in Buffalo. Aldryic's wife is from the area so it'd work.
I put Aldryic's name forward and Ald applied. Things didn't go how anyone wanted on either side, and in the end it caused a lot of pissed-off feelings.
By then we had a stack of gear built and I wasn't going to go try to find another DC to work with.
Francisco