
BuyVM - Allegation of Trouble, Lies, Slabs, Hosts Servers in Basement


Comments

  • jar Patron Provider, Top Host, Veteran
    edited January 2014

    @Spirit said:
    @jarland would as example vigorously attack BuyVM right this moment ;-)

    Just saying.

    Interesting. You are correct that I would be quicker to jump on people who have repeatedly lied to my face for the last 2 years than on one of the most generous people I've ever met. I'll give you that. It's kind of like how you'd attack some idiot at the market a lot faster than you'd attack your mother, assuming you had a good relationship with her. Of course it takes more to jump on someone you know has been good to you than to jump on someone you've repeatedly watched treat people like trash for 2 years.

    But if you think just being in ColoCrossing means I'll jump on someone for anything, anytime they're mentioned, go tell me how many times I've given BlueVM shit here. Be fair about it if you're going to go there.

    Thanked by: Mitsuhashi
  • kaniini

    @Francisco said:
    We had our shared SQL on there and it was falling apart even with a full 8 disk array to itself. I don't know if the Xen drivers were crappy in .18 or what, but we soon after moved it onto an OVZ on a L5420.

    The architecture Xen uses (the hypervisor dispatching ISRs to dom0/domU through event channels) adds additional RTT latency, which exposes race conditions in a lot of Linux's drivers.

    Part of the big work with paravirt_ops was to create faster locks that tighten those race windows -- so-called hypervisor-assisted locks, or pvlocks/pvticketlocks. This has mostly solved the problem, but again, a lot of bitchy controllers like Adaptec's are not very tolerant of the higher RTT latencies.
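
    To make that concrete, here's a minimal sketch of a ticket spinlock with a paravirt slow path, in the spirit of pvticketlocks. The pv_wait/pv_kick hooks and SPIN_THRESHOLD are illustrative stand-ins, not Linux's exact API; the stubs here just yield so the sketch compiles and runs anywhere:

        /* pvticket.c - ticket spinlock with a paravirt slow path (sketch). */
        #include <stdatomic.h>
        #include <sched.h>

        struct ticketlock {
            atomic_uint next;   /* ticket dispenser */
            atomic_uint owner;  /* ticket currently being served */
        };

        /* Stand-ins for the hypervisor hooks: a real pvticketlock blocks the
         * vCPU via the hypervisor (e.g. a Xen event-channel wait) and kicks
         * the next ticket holder awake on unlock. */
        static void pv_wait(struct ticketlock *l, unsigned me)  { (void)l; (void)me; sched_yield(); }
        static void pv_kick(struct ticketlock *l, unsigned nxt) { (void)l; (void)nxt; }

        enum { SPIN_THRESHOLD = 1024 };  /* spins before taking the slow path */

        static void ticket_lock(struct ticketlock *l)
        {
            unsigned me = atomic_fetch_add(&l->next, 1);   /* take a ticket */
            for (;;) {
                for (int i = 0; i < SPIN_THRESHOLD; i++)
                    if (atomic_load(&l->owner) == me)
                        return;                            /* our turn */
                pv_wait(l, me);  /* holder likely preempted: stop burning CPU */
            }
        }

        static void ticket_unlock(struct ticketlock *l)
        {
            unsigned nxt = atomic_fetch_add(&l->owner, 1) + 1;
            pv_kick(l, nxt);   /* wake whoever holds the next ticket */
        }

    The slow path is the whole trick: when the lock holder's vCPU gets scheduled out, waiters stop spinning across those longer event-channel round trips and sleep in the hypervisor instead.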

  • 123Systems

    @jarland said:
    1 & 3 sound like theoretical conversation, 2 is interesting but lacks context. He said he helped you with your nodes, you didn't deny that, so is he talking about buyvm nodes or yours?

    I'm just giving you what I have as I go along; at some point I'll get the log everyone here is dying for. Either way, you can take it how you please. My interest here is not to turn anyone against BuyVM/Francisco, it's to kindly nudge him in the truthful direction. It may take some prodding, but we'll get there.

    Let me correct you: Fran gave us an 8GB VMware slab when we had to move out of Codero. 1 slab, not 2 or 3, not more. The fact that he gave us a full 8G slab on VMware proves by itself that he allocated more than 8G to a physical node. Do you think he put just us on that physical node and was losing money? Let's be realistic here.

  • Francisco Top Host, Host Rep, Veteran

    @123Systems

    Again, incorrect timeline. The only time we had 16GB boxes was when we bought the gear after the first 5 nodes. We tried VMware on them only to find that they fell over too easily.

    "The great build out" was the end of that all. For about 3 months leading up to it the L5420's had maybe 50 users each due to contention issues.

    Notice I was still talking about RO's. We told most of our RO's to take a hike when they started bringing floods.

    I/O oversell is always the noose in any setup. RAM isn't a big deal if you know your users like the back of your hand.

    @kaniini - Learn something new every day :) As I said, I never liked Xen all that much. VMware had a lot of problems where vmware-tools would either not compile correctly against new kernels or would just... stop working, and things would grind hard. SharedSQL (the RO version, not the BuyVM+ edition) was on VMware but had major iowait issues, even at pretty small IOPS (~100).

    This was all back in ESX 4, though.

    Francisco

  • jarjar Patron Provider, Top Host, Veteran

    123Systems said: I'm just giving you what I have as I go along; at some point I'll get the log everyone here is dying for

    Cool, look forward to some good reading :)

  • CVPS_Chris Member, Patron Provider
    edited January 2014

    @Francisco, how many physical nodes did you have in the CC DC in Buffalo and how many were in Batavia?

  • kaniini

    @jarland said:
    1 & 3 sound like theoretical conversation, 2 is interesting but lacks context. He said he helped you with your nodes, you didn't deny that, so is he talking about buyvm nodes or yours?

    As I recall it, Francisco helped 123Systems out a little back when Andrew was targeting the eAthena/rAthena community (Ragnarok Online server reimplementation) for VPS targeted at that sort of thing. eAthena does really well with memory deduplication because a lot of the data held in memory is large tables and maps. Everyone involved in that type of hosting was doing it by then.
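
    For reference, on Linux the application-side opt-in for this kind of dedup is KSM; here's a minimal sketch, assuming a kernel built with CONFIG_KSM and ksmd enabled via /sys/kernel/mm/ksm/run. The 64MB buffer is an arbitrary stand-in for eAthena's tables and maps (VMware's transparent page sharing, by contrast, needs no guest-side hint):

        /* ksm_hint.c - mark a large read-mostly buffer as mergeable so
         * ksmd can deduplicate identical pages across processes. */
        #define _DEFAULT_SOURCE
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void)
        {
            size_t len = 64UL * 1024 * 1024;   /* stand-in for map/table data */
            void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) { perror("mmap"); return 1; }

            memset(buf, 0x5a, len);            /* identical pages merge well */

            /* Opt the range into KSM scanning; returns EINVAL if the
             * kernel was built without CONFIG_KSM. */
            if (madvise(buf, len, MADV_MERGEABLE) != 0)
                perror("madvise(MADV_MERGEABLE)");

            return 0;
        }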

    Thanked by: jar, vRozenSch00n
  • Francisco Top Host, Host Rep, Veteran

    @CVPS_Adam said:
    Thanks Francisco. Where in Batavia, and which plan did you use? http://www.verizon.com/smallbusiness/products/business-FiOS-Internet/packages/

    I can't remember which plan it was, but with the 3 lines we bonded we had like ~125mbit.

    It wasn't in any facility; it was just at one of Aldryic's family members' places. It wasn't a pretty setup, just a rack we mounted in.

    Francisco

  • kaniini

    @CVPS_Chris said:
    Francisco, how many physical nodes did you have in the CC DC in Buffalo and how many were in Batavia? What carrier did you use in Batavia?

    Looking for a new business model?

  • CVPS_Chris Member, Patron Provider

    @kaniini, no, I am curious if he can tell the truth

  • 123Systems

    @Francisco said:
    123Systems

    ALL of the chat logs I have given thus far are from 2010; I even have 2010 logs from when you gave us the VMware slab.

    http://web.archive.org/web/20120104183534/http://v2.lowendtalk.com/questions/11038/the-great-buyvm-build-out

    You didn't do the build out until 2011.

  • NickM Member
    edited January 2014

    Edit: Nevermind.

    Thanked by: laaev, Spencer
  • jar Patron Provider, Top Host, Veteran
    edited January 2014

    @CVPS_Chris said:
    kaniini, no, I am curious if he can tell the truth

    Do you have a reference for the truth? Like a value to compare?

    Thanked by: MannDude, Darwin
  • trewq Administrator, Patron Provider
    edited January 2014

    @Francisco said:

    Only 125mbps? I think I misjudged the amount of bandwidth actually required.

  • Francisco Top Host, Host Rep, Veteran

    @kaniini said:
    As I recall it, Francisco helped 123Systems out a little back when Andrew was targeting the eAthena/rAthena community (Ragnarok Online server reimplementation) for VPS targeted at that sort of thing. eAthena does really well with memory deduplication because a lot of the data held in memory is large tables and maps. Everyone involved in that type of hosting was doing it by then.

    Incorrect. Andrew was working with a fellow named 'Ben' on a brand back then. That brand went deadpool.

    We used small boxes (Q6600 w/ 8GB RAM usually) for the eAthena boxes and just used Virtuozzo. Virtuozzo did their own version of vswap, which helped. I don't think VMware 3.5 did dedup at the time, since the new CPUs that helped with transparent pages weren't out yet.

    @CVPS_Chris said:
    Francisco, how many physical nodes did you have in the CC DC in Buffalo and how many were in Batavia?

    Let's see...

    The router, 2 KVMs, and 5 OVZ nodes were in 350, as well as an NFS box. There was also a Force10 switch.

    There were 14 or 15 over in Batavia.

    Francisco

  • CVPS_Chris Member, Patron Provider

    @NickM I'm sure he didn't disclose that to his customers hahaha "By the way you guys are hosted out of a basement". Very transparent there.

  • CVPS_Adam

    NickM said: hosting customer servers in a home basement

    My thoughts exactly.

  • jar Patron Provider, Top Host, Veteran

    @CVPS_Adam said:
    My thoughts exactly.

    I know, right? The lies.

    What's up, Kevin?

  • @trewq said:
    Only 125mbps? I think I misjudged the amount of bandwidth actually required.

    It could work, if you did your filtering on the ColoCrossing side of the link.

  • Francisco Top Host, Host Rep, Veteran

    @trewq said:

    It was about that, I'm sure it could spike higher. Inbound was a lot quicker since it was 3 lines and worked fine.

    @CVPS_Chris said:
    NickM I'm sure he didn't disclose that to his customers hahaha "By the way you guys are hosted out of a basement". Very transparent there.

    You're right, and I'll take my lumps for that.

    @123Systems said:

    Yes, and a lot of the 2nd wave boxes were very hit/miss, which is why we didn't push it much. We left ourselves sold out for longer periods during it because we had nothing but headaches.

    When we moved the racks to SJC we more or less had a rack of empty nodes that we repurposed later for KVM.

  • trewq Administrator, Patron Provider

    @Francisco said:
    It was about that, I'm sure it could spike higher. Inbound was a lot quicker since it was 3 lines and worked fine.

    Fair enough. If it works, it works :)

  • CVPS_Adam

    Francisco said: You're right, and I'll take my lumps for that.

    When customers wondered why their throughput sucked because you had them on a 50mbit home business connection... did you blame ColoCrossing? Considering 75% of your NY presence was in a home basement and 25% was in a real datacenter (ColoCrossing).

  • Francisco Top Host, Host Rep, Veteran

    @trewq said:

    Not really. The setup we had wasn't like, in some damp sketchy basement. It was in an office space that had a nice AC unit, was well vented, and I had a few UPS units to help. It wasn't amazing, but it did the trick while we figured things out.

    We had talked with Jon before doing this to get more power, but after how some meetings had gone, no one in the company was going to approve it.

  • kaniini

    @CVPS_Adam said:
    and 25% was in a real datacenter (ColoCrossing)

    No offense, but that datacenter is in a mall. I don't think you can really make that argument.

  • Francisco Top Host, Host Rep, Veteran

    CVPS_Adam said: When customers wondered why their throughput sucked because you had them on a 50mbit home business connection... did you blame ColoCrossing? Considering 75% of your NY presence was in a home basement and 25% was in a real datacenter (ColoCrossing).

    No, because they never had issues. Again, users were balanced based on their workload at the time.

    The only time we ever had speed issues at CC was, surprisingly, when things moved to L3. That was Comcast-related, though, so we just wrote it off as "well, that's Comcast".

    Honestly, all of our Choopa setup only peaks at like 300mbit/sec, total. That includes the slew of users that use us for proxying of sorts (filtering, Netflix, etc.).

  • jar Patron Provider, Top Host, Veteran
    edited January 2014

    Now I feel like I'm missing something here. Like I walked in on a conversation that started before I got here. So the BuyVM Buffalo nodes weren't all in CC or even Buffalo?

    Thanked by: laaev
  • CVPS_Chris Member, Patron Provider

    @kaniini said:
    No offense, but that datacenter is in a mall. I don't think you can really make that argument.

    Have you been inside? No. It's as much of a real data center as any other data center I've been in all over the country. In fact, it's actually nicer than many I have been in. It's not even an actual mall. You are talking one floor out of 26 floors. Some of Buffalo's most prestigious businesses have office space in that building... you guys make me laugh.

  • upfreak Member
    edited January 2014

    @jarland said:
    Now I feel like I'm missing something here. Like I walked in on a conversation that started before I got here. So the BuyVM Buffalo nodes weren't all in CC or even Buffalo?

    It's almost time you asked for your chocolates...

    Thanked by: jar, netomx
  • mpkossen Member
    edited January 2014

    I knew it :-D

  • Francisco Top Host, Host Rep, Veteran

    @jarland said:
    Now I feel like I'm missing something here. Like I walked in on a conversation that started before I got here. So the BuyVM Buffalo nodes weren't all in CC or even Buffalo?

    Right. Batavia is like... 10-15 minutes away from Buffalo? I honestly didn't count the minutes when we drove for pickup.

    What ended up happening was that when we first started talking to Jon, we were all really happy with things. Jon mentioned that he had a job opening if I knew anyone skilled who would like to be in Buffalo. Aldryic's wife is from the area, so it'd work.

    I put Aldryic's name forward and Ald applied. Things didn't go how anyone wanted on either side, and in the end it caused a lot of pissed-off feelings.

    By then we had a stack of gear built and I wasn't going to go try to find another DC to work with.

    Francisco

This discussion has been closed.