
BuyVM - Allegation of Trouble, Lies, Slabs, Hosts Servers in Basement

ja351 Member
edited July 2014 in General

After @Francisco at BuyVM repeatedly denied slabbing their VPSes, it looks like it's time for some truth, thanks to @kaniini's script, which detects whether an OpenVZ container is running inside another virtual machine.

Time for some explaining @Francisco, this is extremely dishonest.

root@ovz:~/slabbed-or-not-0.1.1# ./slabbed-or-not

Hypervisor: KVM
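
For reference, the check boils down to asking the CPU whether a hypervisor is present; the tool itself reportedly reads the CPUID hypervisor leaf directly. A rough approximation with standard tools, assuming a Linux guest with /proc mounted:

systemd-detect-virt --vm                # machine virtualization only: prints 'kvm', 'xen', etc., or 'none'
grep -m1 -o hypervisor /proc/cpuinfo && echo "hypervisor flag is set"

Inside an OpenVZ container, /proc/cpuinfo reflects the host node's kernel, so if that node is itself a KVM guest, the flag shows up from inside the container as well.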


EDIT!! Here is a summary of what has gone on in this thread and the new things we have learned about BuyVM!!

  1. BuyVM has had problems with numerous past datacenters regarding throughput, and has always blamed upstreams

  2. BuyVM sells service in Buffalo (bait and switch), but it's actually hosted 45 miles away in a small town called Batavia on a FIOS connection

  3. BuyVM stole the code for Solus to make Stallion

  4. BuyVM always insisted nothing was slabbed, turns out everything, or nearly everything, is slabbed

  5. BuyVM has hosted servers in another company's rack, exposing all of their clients to security risk. The guy later stole all of the hard drives from BuyVM's servers.

  6. BuyVM "potentially" had Fran's best friend Matt, who worked for EGI, enter a ColoCrossing cabinet in San Jose, and then posted about it online.

  7. BuyVM stole the code for cPanel for BuyVM Plus (BuyVM+)

Source (Page 5) http://lowendtalk.com/discussion/comment/458606/#Comment_458606

Thanked by 2: mpkossen, CNSjack

Comments

  • Francisco Top Host, Host Rep, Veteran

    Well, not quite, Adam.

    Some nodes we had to slab with the latest batch of SSD upgrades, because the Adaptec 7805 drivers wouldn't work properly on 2.6.32.

    I lost multiple days during my trip to Vegas in November trying to get them to run stable and perform properly, but the only way to achieve that was to use a newer kernel.

    I don't like it, but moving to LSI wasn't an option as I don't personally know those cards worth a damn. I've been able to save multi-drive RAID failures on Adaptecs simply because I know how they handle things, as well as whatever little tricks they allow in their BIOS. Going to LSI would have put us in a bad spot if we ever ran into issues.

    But thanks :)

    Francisco

  • concerto49 Member
    edited January 2014

    I totally don't get this noise. Why do we care?

    Look: does the VPS perform as expected? Are there issues?

    If no, does how it's implemented matter?

    You want a reliable platform, not to know whether it really is slabbed or has 1 CPU or 2 CPUs.

    If there are performance or reliability issues, raise them with the host; otherwise someone's just causing drama.

    Thanked by 1: joepie91
  • @concerto49 transparency does not matter?

    Thanked by 1: BlueVM
  • manacit Member
    edited January 2014

    This transparency word gets thrown around a bunch on here; what the fuck does it really mean? BuyVM is probably one of the most TRANSPARENT hosts on here anyway (look at that honest answer you just got).

    If you're happy with the performance, it really doesn't matter does it?

  • @Rockster said:
    concerto49 transparency does not matter?

    I don't see how this is a matter of transparency. Are they not explaining the hardware used? It's not BuyVM specific. If we had to go this far, should they list every piece of software and every driver in use? That's what you're asking here.

    Again, is it slower? Is it not reliable?

    Would you like to know exactly which version of OpenVZ is currently in use? Does transparency not matter?

    Sure, we need to be transparent, but how far would you like to go? Should we list the BIOS version too?

    Thanked by 1: Mark_R
  • With that being said, everything else is big hardware.

    http://lowendtalk.com/discussion/comment/267897/#Comment_267897

    http://lowendtalk.com/discussion/comment/267968/#Comment_267968

    You stated none of your nodes were slabbed !!

    Thanked by 2: mpkossen, marrco
  • Trying to understand this ..

    So the NODE is on kernel X and is hosting another machine that holds the OpenVZ containers for clients? And this was done so the new drivers would work underneath the containers, which is how clients are able to use the SSD drives?

    If what I said is right .. it's genius .. nothing wrong with that..

  • ja351 said: You stated none of your nodes were slabbed !!

    I'm going to go ahead and say that the dates of those posts, versus the timeframe given in his post, mean there's no evidence to prove he was lying then.

  • Francisco Top Host, Host Rep, Veteran

    @concerto49 said:
    Sure, we need to be transparent, but how far would you like to go? Should we list the BIOS version too?

    Honestly we've done it before when we had issues with the 6805's (again, too new of a driver for the .18 kernel and it was hacked in).

    We try to be as transparent and honest as we can, as well as do right by everyone as best we can.

    Francisco

  • I'm not sure why it's a big deal. If your VPS is doing well, what's the issue?

    Thanked by 1: Mark_R
  • Rockster Member
    edited January 2014

    manacit said: (look at that honest answer you just got)

    What makes you think it's honest? I believe Chris was honest when he said that BuyVM lists more nodes than they actually physically have in Buffalo. That isn't something anyone would lie about.

    concerto49 said: Sure, we need to be transparent, but how far would you like to go? Should we list the BIOS version too?

    No, just tell me the truth when I ask you if your hosting environment is slabbed. Do you see any problem with a client requesting an honest answer on that?

  • @AdamJ said: Also, are you still using the VMWare/XEN trick today, yes or no? Let's get straight to the point here.

    Nope.

    Francisco, it's blatantly obvious your nodes are slabbed, am I wrong? Why would you deny this?

    @tragic The issue is that he keeps saying "they're not slabbed" when they are indeed!

  • @ja351 said:
    tragic The issue is that he keeps saying "they're not slabbed" when they are indeed!

    Trying to make something out of nothing..

    Thanked by 2: Mark_R, TriDoxiuM
  • Rockster said: No, just tell me the truth when I ask you if your hosting environment is slabbed. Do you see any problem with a client requesting an honest answer on that?

    I don't see a problem with that, but Fran @ BuyVM already stated the answer when asked. Problem being?

  • Francisco Top Host, Host Rep, Veteran
    edited January 2014

    The SSD upgrades started in November of 2013 as well.

    So yes, timelines explain things.

    ATHK said: If what I said is right .. it's genius .. nothing wrong with that..

    Thanks :) I had already known it was a working solution, as our SSD testbed showed it, but the big project was getting 2.6.32 happy. I couldn't, and had to head to Vegas to start.

    There isn't any splitting, like, 4 nodes sharing a box. Each node is made up of a single fat VM just so we get a "best of both worlds".

    When OVZ gets a 3.x kernel out that's stable we'll likely remove this layer but until then it is done with the best intentions, not for us to load nodes heavier.

    EDIT - Grammar

    Francisco
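
    For anyone trying to picture that layout: one fat KVM guest per physical box, with the SSD array handed through as a virtio disk, would look very roughly like the following on the host side. The paths, sizes, and flags here are purely illustrative, not BuyVM's actual configuration.

    qemu-system-x86_64 -enable-kvm -cpu host -smp 24 -m 65536 \
        -drive file=/dev/md0,if=virtio,cache=none,format=raw
    # the newer host kernel owns the Adaptec 7805; the guest only ever sees a virtio
    # block device, so the 2.6.32 OpenVZ kernel never has to talk to the RAID card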

  • Rockster Member
    edited January 2014

    ATHK said: Trying to make something out of nothing..

    It's always nothing when some POPULAR company is in question, and yet this same crowd would burn a less popular company over smaller things. Standard double-faced LEB mentality; however, a liar is a liar no matter how popular he is.

  • jar Patron Provider, Top Host, Veteran

    I KNEW IT LETS TOAST THESE MOTH...

    Buying your unwanted buyvm vps for 10 cents message me

  • @ja351 said:
    tragic The issue is that he keeps saying "they're not slabbed" when they are indeed!

    I implore you to shut up. He said that then; he's not saying that now; and he's being more open about the situation than the majority of hosts would be.

    I don't know how to put it any clearer. Leave.

  • Francisco said: We try to be as transparent and honest as we can

    And those fools actually believe you. LoL :)

  • @Rockster said:
    It's always nothing when some POPULAR host is in question, and yet this same crowd would burn a less popular host because of smaller things. Standard double-faced LEB mentality.

    I think that some people just care more about the VPS performance than the "truth"; I know that I do.

    Even if I cared about the "truth", this is really making something big out of nothing critical.

  • Okay, honest question: what difference does it make? Why do so many people look down on hosts that sell OVZ out of another VM? This is an honest question; I've never understood what the whole major issue was/is.

  • @Mark_R said:
    I think that some people just care more about the VPS performance than the "truth"; I know that I do.

    I understand your view; however, those same people wouldn't show mercy to other hosts if a lie like this one came out.

  • skybucks100 said: Why do so many people look down on hosts that sell OVZ out of another VM?

    If it's done improperly, it can reduce the performance of a node. However, if done properly, especially with elevator=noop enabled in the guest, it can improve performance threefold.
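
    For reference, elevator=noop just tells the guest kernel to skip its own I/O re-ordering and leave that to the layer underneath (the hypervisor and the RAID card). A minimal sketch of enabling it, assuming a virtio disk named vda:

    cat /sys/block/vda/queue/scheduler          # available schedulers, current one in [brackets]
    echo noop > /sys/block/vda/queue/scheduler  # switch to noop for this boot
    # or persistently, add elevator=noop to the guest's kernel command line in GRUB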

  • manacit Member
    edited January 2014

    @Rockster said:

    I understand your view; however, those same people wouldn't show mercy to other hosts if a lie like this one came out.

    There are also some people who think that the moon landing is fake and the world is flat. Remind me why we're bothering to talk about what some people think?

    Ultimately this is a non-issue. No one is complaining about performance, so who cares?

  • @ja351 you're just trying to call someone out over some stupid sh!t. Fran answered your question and gave an explanation for why he had to do it! Sounds like you're just a hater, because either A) you're a provider and you just want to sh&t on them, or B) you're tired of watching porn and yanking the shrimp noodle. Then again, I think the answer might be both A + B.

  • Mun Member

    Can someone please tell me why slabbing is not OK? I am really curious what your reasoning is for it deserving so much hatred.

    Mun

    Thanked by 1: Mark_R
  • edited January 2014

    Putting it on a hypervisor doesn't magically allow huge scam oversell deception fraud explosion abuse (did I forget some cool words?). If the VPS is performing well, there is NO reason to complain.

    Now, before you silly coconuts try to murder @Francisco, here's another version in an attempt to be comprehended by your ignorant brains: OpenVZ uses very old kernels. Those kernels didn't like Fran's disk setup, so he set up KVM VMs which basically form a bridge, letting the older OpenVZ kernel talk to his new disk setup (via virtio).

    Seriously, stop screaming and crying out loud because things you have zero idea about are being mentioned about a provider "you trusted so much".
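
    A quick way to see that virtio "bridge" for yourself, run on the OpenVZ node itself (not from inside a container, where /dev and lspci are restricted; device names are examples):

    lspci | grep -i virtio        # virtio block/net devices exposed by the KVM layer
    ls -l /dev/vd?                # vda, vdb, ... mean virtio disks rather than sdX on a real controller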

    Thanked by 2: Mark_R, vRozenSch00n
  • Well, this is a waste of time.

  • This whole post is so stupid .. It's "slabbed" so your SSD disks will work .. would you rather have no disks? I mean come on..

  • Francisco Top Host, Host Rep, Veteran

    @Mun said:
    Can someone please tell me why slabbing is not OK? I am really curious what your reasoning is for it deserving so much hatred.

    Mun

    Because it's usually used as a way to super-oversell a node, using things like sparse disks or the memory dedup options some hypervisors allow.

    As I said, LSI's drivers would have worked, but I was not comfortable using them. I had a single card onsite that I tested against and I just wasn't happy with how it handled certain actions. I even tried doing a dd_rescue with the LSI and it was spotty, whereas I can usually do it on an Adaptec with 75%+ success.

    Francisco
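
    To put that overselling vector in concrete terms: a sparse (thin-provisioned) disk image claims far more space than it actually consumes until it is written to, so a slab layer makes it easy to promise more storage than the array holds. A rough illustration with made-up paths and sizes:

    qemu-img create -f qcow2 /vz/slab1.qcow2 500G
    du -h --apparent-size /vz/slab1.qcow2      # what the guest was promised
    du -h /vz/slab1.qcow2                      # what it actually occupies so far
    echo 1 > /sys/kernel/mm/ksm/run            # KSM memory dedup, the RAM-side equivalent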

This discussion has been closed.