BuyVM Miami Storage Slab - Total cluster data loss

Anyone else affected by this? One of the Miami storage clusters failed spectacularly, and it appears most of the data is lost.

Support has been very unforthcoming with information.

Not the best situation to face, but we would have appreciated notice instead of finding out later through alerting we've set up.

Did you guys back up your slab data? How have your recovery efforts been so far?

Thanked by: hobofl, r33k

Comments

  • plumberg Veteran, Megathread Squad
  • Francisco Top Host, Host Rep, Veteran

    I’m sorry if that happened.

    This node had a double partial drive failure (thanks, Seagate). We’ve been able to recover the vast majority of data to another node (or so feedback has shown).

    ZFS helped a lot, but we ran it on top of a RAID card (basically the card did RAID1 sets), which wasn’t perfect.

    We don’t use Seagate anymore, since this isn’t the first time we’ve had drives go pop in the night. The new nodes have all been WD and have been excellent.

    If you PM me your ticket ID I can try another pull, or maybe we can boot the old volume for you.

    Francisco

  • @Francisco said:
    I’m sorry if that happened. [...] If you PM me your ticket ID I can try another pull, or maybe we can boot the old volume for you.

    Hi @Francisco - It's okay. We've restored services thanks to external backups. I was told there was a high chance the data was not recoverable and that recovery was running at 200 MB/s. We chose not to wait and see, as the entire slab was encrypted anyway.

    If that wasn't the case, then please ensure other customers are informed properly so they can make educated decisions.

    An email alert would also have served us very well.

    Thanked by: hobofl
  • Francisco Top Host, Host Rep, Veteran

    @MrLime said:


    Hi @Francisco - It's okay. We've restored services thanks to external backups. I was told there was a high chance the data was not recoverable and that recovery was running at 200 MB/s. We chose not to wait and see, as the entire slab was encrypted anyway.

    If that wasn't the case, then please ensure other customers are informed properly so they can make educated decisions.

    An email alert would also have served us very well.

    Sorry on the comms side. It’s a bad time for us to be dealing with this given we're working on other huge projects. That’s on me.

    Akash has informed people of the transfers and people make the call. 200 MB/s is lame when moving 100 TB over the LAN, though, and sometimes it dips.

    Some users don’t care and are fine with a remake. Others have asked Akash to do a pull and get added to the queue.
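    As a rough sanity check on those numbers (assuming "200mb/s" means a sustained 200 MB/s, and decimal units):

```python
# Back-of-envelope: how long would pulling 100 TB take at 200 MB/s?
# Assumes decimal units (1 TB = 1,000,000 MB) and a perfectly
# sustained rate, which a degraded array rarely delivers.

def transfer_days(terabytes: float, mb_per_sec: float) -> float:
    """Days needed to move `terabytes` of data at `mb_per_sec` MB/s."""
    seconds = terabytes * 1_000_000 / mb_per_sec
    return seconds / 86_400  # 86,400 seconds in a day

print(round(transfer_days(100, 200), 1))  # → 5.8
```

    So waiting out a full pull means the better part of a week, which is why some customers chose a fresh slab instead.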

    Francisco

    Thanked by: MrLime, hobofl
  • raindog308 Administrator, Veteran

    @Francisco said: We don’t use Seagate anymore since this isn’t the first time we’ve had drives go pop in the night. The new nodes have all been WD and have been excellent.

    This is the way.

    Thanked by: dedicados
  • Francisco Top Host, Host Rep, Veteran
    edited January 22

    @raindog308 said:

    @Francisco said: We don’t use Seagate anymore since this isn’t the first time we’ve had drives go pop in the night. The new nodes have all been WD and have been excellent.

    This is the way.

    HGST has also been excellent. The first batch of slab nodes we built in 2016 had 200 drives, and we’ve had maybe 8 or so go bad over the years?

    Either way, we’ve been doing a full top-down look over everything and will likely kick off full RAID background checks.

    Francisco

    Thanked by: MrLime, hobofl
  • @Francisco said:
    This node had a double partial drive failure (thanks, Seagate). We’ve been able to recover the vast majority of data to another node (or so feedback has shown).

    ZFS helped a lot, but we ran it on top of a RAID card (basically the card did RAID1 sets), which wasn’t perfect.

    For ZFS, it could be worth trying proprietary/commercial data recovery tools for those who desperately need their files recovered.

    But that would be a separate manual process for each customer.

  • Francisco Top Host, Host Rep, Veteran

    @DataRecovery said: For ZFS, it could be worth trying proprietary/commercial data recovery tools for those who desperately need their files recovered.

    But that would be a separate manual process for each customer.

    Let me add: this isn't any fault of ZFS. ZFS likely saved it from being a much uglier crapshoot. I'm fairly sure that if we were pure ZFS (and not sitting on a hardware RAID layer) it would've been even better off, and likely could've corrected some of it.

    When this node was deployed there were some big ZFS performance regressions, and I didn't want to use something like XFS.

    I wish we could do proper background checks (where it runs a full check on the whole array), but at 130 TB per array, even at 1-2 GB/s, that's multiple days. That eats every bit of IO available and everyone's stuck with 90% iowait.

    We'll see where the rest land and work through that. I'll be kicking off the background checks for the Vegas Seagate nodes later this week.
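    For scale, the scrub arithmetic works out like this (a rough estimate; it assumes the array sustains those sequential rates end to end, which it won't while serving customer IO):

```python
# Rough scrub-duration estimate for one 130 TB array.
# Assumes decimal units (1 TB = 1,000 GB); under real customer IO
# the effective rate drops and the scrub stretches out much longer.

def scrub_hours(terabytes: float, gb_per_sec: float) -> float:
    """Hours to read `terabytes` once at `gb_per_sec` GB/s."""
    return terabytes * 1_000 / gb_per_sec / 3_600

print(round(scrub_hours(130, 2), 1))  # → 18.1 (best case)
print(round(scrub_hours(130, 1), 1))  # → 36.1 (over a day and a half)
```

    Throttle that to a fraction of full speed so customers keep their IO, and multiple days is easy to hit.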

    Francisco

    Thanked by: MrLime, hobofl
  • AlexBarakov Patron Provider, Veteran

    I was going to comment on HDD reliability between brands, but then I remembered Murphy.

    Good luck @Francisco!

    Thanked by: yoursunny, faleddo
  • Francisco Top Host, Host Rep, Veteran

    @AlexBarakov said: I was going to comment on HDD reliability between brands, but then I remembered Murphy.
    Good luck @Francisco!

    Hahaha, thanks.

    I do miss HGST though. I guess they became WD, and that's now Toshiba? All three use the same white drive label.

    Francisco

    Thanked by: admax, ascicode
  • raindog308 Administrator, Veteran

    @Francisco said: I wish we could do proper background checks (where it runs a full check on the whole array), but at 130 TB per array, even at 1-2 GB/s, that's multiple days.

    @cociu

    Thanked by: Saragoldfarb, Calin, XNQ
  • tomle Member, LIR

    Always hurts hearing someone lost an array; hope you are recovering OK!
    Yeah, ZFS wants access to the disks without any layers in between: any RAID card should be in HBA mode if possible, or failing that, one single-disk array per disk.

    Thanked by: MrLime, vicaya, hobofl
  • imok Member

    @Francisco said:

    I wish we could do proper background checks (where it runs a full check on the whole array), but at 130 TB per array, even at 1-2 GB/s, that's multiple days. [...]

    Can you share some photos of the hardware? It'd be interesting to see an array of that size.

  • DataIdeas-Josh Member, Patron Provider

    Things like this always remind me: if the data is that important, always have it backed up elsewhere as well.

    Thanked by: yoursunny
  • Francisco Top Host, Host Rep, Veteran

    @imok said: Can you share some photos of the hardware? Interesting to see an array of that size

    Supermicro 847: 24 bays on the front, 12 on the back.

    Francisco

    Thanked by: imok
  • @MrLime said:
    Anyone else affected by this? One of the Miami storage clusters failed spectacularly and it appears most of the data is lost.

    Support are very unforthcoming with information.

    I was.... and honestly... welp.. since it was not "critical" data.. oh well. snap happens...

    I got a response: it's effed up, can we give you a new slab to get it back up? And some credits?

    SURE!

    The response I got was that basically there was next to no chance they would get any data back.. I guess that changed.. good for those with critical data. I was not in that camp... so meh...

    It is/was more important for me to get this back up and online.. than the data.. The data is rsynced to local anyway....

    I get it that if you were using these as a backup for critical data.. I'd be upset. So I get it.

    The slab was the most economical way to get the space I needed for the slices they are connected to.. until I rsync it to local.

    This is probably one of maybe 2 major glitches I've had with BuyVM in 11 years... Yeah, annoying at the time... again, snap happens... and since this was not critical to me: new slab, credits... and move on. I can blame another glitch in a 70/30 split on me, though a little more action on the BuyVM side would have solved it. That situation (which, NO, I will not elaborate on) did hit a nerve, but I solved it myself.. and will resolve another part of the issue later.. again, a good portion of that one is ON ME. Period.

    THIS IS WHY any changes to BuyVM I consider possible loss to the smooth sailing I've had.

    To those with critical data on there, I hope you get a good resolution out of this.

    For me, I am happy with the response I got, and the action and resolution. If you are expecting an autopsy on what happened, I would say rrhottsa rhuck.. I guess you did get one this time in another post... but 99.9999999% of the time you'd better expect: it failed, data gone, we replaced your slab. Unless you've got an SLA and contract etc., that's all you will get. And that doesn't matter what service or business you are dealing with. My ISPs don't tell me squat. Matter of fact, my own vendors don't tell me squat on some major stuff, not even in the VM/VPS arena.

    I am happy! I hope others can get some good resolutions out of this! Thanks BuyVM for 11 great years!

    Thanked by: MrLime
  • Francisco Top Host, Host Rep, Veteran

    @wemanageit said:

    I was.... and honestly... welp.. since it was not "critical" data.. oh well. snap happens... [...]

    I am happy! I hope others can get some good resolutions out of this! Thanks BuyVM for 11 great years!

    The guys may have offered the new drive at the start just to get people going, since some people just use it as scratch space or the like and don’t care. If you have data you want us to look for, I can do a pull and we'll see.

    Monitoring has always been my weakest skill set and is one I delegated completely at Crane to Mike.

    For BuyVM, if how Cloudzy handles their end says anything, it’s that they’ll be much more proactive about such things. They’re almost obsessed with monitoring, and I’ve had multiple work orders with them ranging from replacing possible problem hardware to outright shipping in whole new nodes on RMA requests.

    I’m very proud of what I built with such a small team but it’s too much to keep doing alone :)

    See you around.

    Francisco

    Thanked by: hobofl
  • unsafetypin Member
    edited January 23

    @Francisco said:
    Monitoring has always been my weakest skill set and is one I delegated completely at crane to Mike. [...] I’m very proud of what I built with such a small team but it’s too much to keep doing alone :)

    So realistically, what stopped you from hiring management and a larger team and becoming a larger enterprise that you would lightly oversee, other than the high lump sum from the sale and washing your hands of the oversight?

    Just curious why it wouldn't make sense to grow rather than sell out entirely, collecting and reaping future profits as the business grew, as sort of an owner-with-employees-who-handle-it mentality. If you're not comfortable answering, I'm cool with a fuck off, but I like to understand the business side of this, because I saw huge independent growth potential if you'd wanted it and could afford hiring staff to take the workload and decision-making off your hands.

    TL;DR: what are they doing that you couldn't have hired people to do for BuyVM/Frantech, or was it simply to gather a lump sum and exit managing?

    Thanked by: hobofl
  • Francisco Top Host, Host Rep, Veteran
    edited January 23

    @unsafetypin said: So realistically, what stopped you from hiring management and a larger team and becoming a larger enterprise that you would lightly oversee, other than the high lump sum from the sale and washing your hands of the oversight?

    Pure stubbornness.

    It's also quite hard to find competent workers. We've been through many, and only over the last few years have we had some good guys on support. Many hosts mask this by having a lot of guys working for them (1000 monkeys typing on 1000 keyboards kind of thing). In the end I chose to just do the work myself rather than having to babysit.

    Server building was always on me, and putting it out to someone else was either heavily backlogged (remember, most of our Ryzen builds were during COVID) or they wouldn't ship to LUX, for instance.

    I could've tried to do that now, but I still feel I'd be getting pulled back into the BuyVM side too much, and Crane requires as much of my attention as possible given all the crap ICANN and the registries require. It's fun/interesting, but the definition of red tape.

    Ninja addition: and quite simply, the VPS side doesn't interest me as much. It started to feel like "work", and that's when I knew it was time. I talked to a few other hosts that sold over the years and they had a similar feeling.

    Francisco

  • unsafetypinunsafetypin Member
    edited January 23

    @Francisco said:

    Pure stubbornness. [...] And quite simply, the VPS side doesn't interest me as much. It started to feel like "work" and that's when I knew it was time.

    Thanks for the explanation; it's pretty awesome to have some clarity on this. The bummer for me is that you allegedly cannot sell VPS on Namecrane for some time (not that you would want to, I assume).

    It takes me a long time to trust providers on uptime and respect for data privacy, and I felt that you personally, and Jar from MXroute, as much as one can judge someone's actions on the internet, were very trustworthy and a single point of truth for those companies. I'm personally unsure about the change in ownership for now, with more hands being involved.

    Thanked by: hobofl
  • Francisco Top Host, Host Rep, Veteran

    @unsafetypin I wouldn't want to compete with BuyVM anyway and would rather people keep using it :) If Crane ever did some sort of cloud thing, it would be "highly managed", with all the bells and whistles, and the price, to go with it.

    I lose no sleep over how Hannan and his guys will run things, and I'll keep guiding to make sure it stays in check. We have a meeting tomorrow to discuss all the quotes in CH, and probably a new location for Cloudzy.

    Francisco

  • FairShare Member
    edited January 23

    @Francisco said:
    if crane ever did some sort of cloud thing it would be on a “highly managed” with all the bells and whistles, and price, to go with it.

    Would love to see high-availability shared hosting, complete with failover/geo-redundancy.

    Thanked by: MrLime
  • ralf Member

    "Something something cluster fsck something." I've never been good with jokes.

    Failed joke attempt out of the way, I hope everyone manages to recover/rebuild everything without significant loss.

    Thanked by: Saragoldfarb
  • SirFoxy Member
    edited January 23

    @Francisco said:
    @unsafetypin i wouldn’t want to compete with buyvm anyway and would rather people keep using that :) [...]

    😂 you literally just can’t stop fucking lying. You mean you had to sign a four year non-compete as usual when you sell your company.

    Btw, BuyVM was sold in May of 2023 not January 2025, IPs and such remain under FEDcisco’s name during the duration because the goal was and is to give the impression FEDcisco is still involved, when it’s only in an absolutely minimal and temporary role to keep people buying their shit services.

    P.S. Not the first BuyVM data loss, it’s for hobbyist skids, not real businesses.

  • @SirFoxy said:

    😂 you literally just can’t stop fucking lying. [...]

    welcome back, I'm glad to see they unbanned you

  • ralf Member

    @SirFoxy said:
    😂 you literally just can’t stop fucking lying. You mean you had to sign a four year non-compete as usual when you sell your company.

    Btw, BuyVM was sold in May of 2023 not January 2025, IPs and such remain under FEDcisco’s name during the duration because the goal was and is to give the impression FEDcisco is still involved, when it’s only in an absolutely minimal and temporary role to keep people buying their shit services.

    Even if you're right, it's not really any of our business what their terms were.

    FWIW, in the UK I've worked at places where previous directors signed 2- or 3-year agreements to stay on at the company. 4 years sounds unusually long, as most new owners want to exert their vision on the company sooner than that.

  • @SirFoxy said:

    😂 you literally just can’t stop fucking lying. [...]

    You sure have the hots for @Francisco. What's your end game with all this? I don't think anyone gives a fuck about what you're saying.

  • Francisco Top Host, Host Rep, Veteran

    @acidpuke said: You sure have the hots for @Francisco, whats your end game with all this. I don't think anyone gives a fuck about what your saying.

    Just ignore him.

    Yes, there's a non-compete, as with almost any web host sale (especially if the owner isn't outright retiring). Even with that, why would I sell BuyVM just to go back into the same market/price bracket I just left? I don't have enough spare IPs to go selling cheapo VMs anyway.

    @FairShare said: Would love to see High Availability Shared Hosting, complete with failover/geo-redundancy .

    This is off topic but I'll answer it here anyway.

    HA for shared hosting is messy. There's that new control panel, Enhance, which I think can do it since everything is containers, but it's still very young. Doing HA for cPanel or DA isn't what you think, either. At most, a host will put it on a SAN inside some VMs, and then they can fail the VM over to a 2nd node if the 1st has a hard-down fault. That doesn't help with software blowing up due to a bad configuration getting generated or, say, Imunify pushing out a bad update to everyone.

    Francisco

    Thanked by: hobofl, FairShare
  • SirFoxy Member
    edited January 23

    @ralf said:
    Even if you're right, it's not really any of our business what their terms were. [...]

    What I’m saying is factual, always has been. FEDcisco (an actual previous federal informant selling “freedom of speech” and “law of the land” hosting btw) is just another grifter out to make money.

    @acidpuke said:


    You sure have the hots for @Francisco, whats your end game with all this. I don't think anyone gives a fuck about what your saying.

    I dislike FEDcisco out of personal convictions, yes. He’s a snake. The fact of the matter is FEDcisco has been lying and spinning webs for more than a year. But yes, I agree. He will continue to make money here. It’s not like most members of this forum do a lot of due diligence anyway, and it’s quite an echo chamber.

  • acidpuke Member
    edited January 23

    What you're quoting doesn't look like anything other than a personal conversation you had with him. You said "he is a smart man"; obviously he is. He sold his business and made some money. What does it matter to you?
