Help Needed! Proxmox + KVM Guest; Randomly hangs without any reason, no console, watchdog useless

PulsedMediaPulsedMedia Member, Patron Provider

Since the emergency migrations in fall 2022, we have moved to Proxmox + bcache-backed RAID arrays passed through to the VM guests as block devices.

We have more than two decades of VM experience, and no one on our team has ever seen anything like this.

VMs simply halt, and CPU usage stays flat at some constant level.
The console will not load; typically it shows "no output", and even when it does load, it is unresponsive and shows no meaningful error messages. A reset will not fix this; it takes a complete stop and then a start.

At one point we thought it was a lack of RAM, at another a failing array. Adding RAM did not help, the drives check out fine, and the cache drives are perfectly functional.
It's typically just one VM on the host, sometimes a couple. There might be a very loose correlation with system load, but we're not certain.

We tried changing host CPU types, adding or removing RAM, toggling security mitigations on or off, Proxmox VE 7 and VE 8; no matter whether it's an old Xeon E5 v1 or a newer EPYC or Ryzen, IT STILL HAPPENS.

WORST OF ALL: they halt in a manner where the watchdog device keeps getting pinged, so the VM does not automatically reboot. Many times we thought we must have forgotten to set the watchdog up, or configured it incorrectly. But no: we tested by manually halting the CPU via magic SysRq, and the watchdog reboots the VM perfectly. So the VM's CPU is not completely halted; just enough is running for the watchdog device to get its pings (or, I now realize, the watchdog probably just tries to reset the vCPU, which doesn't work...), OR the whole QEMU instance has crashed.
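
(For anyone wanting to replicate that kind of watchdog test: one way, assuming magic SysRq is enabled inside the guest, is to hard-crash the guest kernel and watch whether the watchdog resets the VM, e.g.:

echo c > /proc/sysrq-trigger

That is only an illustration of this kind of test, not necessarily the exact command we used.)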

Every one of these shows flat CPU usage at some level, sometimes 100%, sometimes anything between ~25% and 100% - it's absolutely flat, without any deviation.

NOTHING IN ANY LOG. There are zero errors in any log, all hardware tests fine, and there are generally plenty of resources free.

I suspect the culprit is bcache; it's badly maintained and riddled with bugs (especially management-wise).

Most frequently this happens to the same VM guests, though not exclusively; there are definitely a few that hit this often, yet everything about them tests out fine. Perhaps it's tied to a particular usage pattern? Not sure.
It does not happen to all VMs; ~90% of VMs are unaffected by this and as stable as can be.

Before we start ripping out all the bcaches on a hunch that they might be the reason, migrating users en masse, etc.: does the community know of any issue that might cause this?
This is absolutely infuriating, and it has already wasted a lot of time and caused service cancellations.

This has to be solved, and if the solution is "throw hardware at it", then that shall be it - but before we do that and annoy all the users on all the VMs, we must try everything else first.

(We could just set up monitoring to send stop + start commands, but... that's a hack and won't solve the root cause.)
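
For what it's worth, that hack would be something along these lines on the host (a rough sketch only; the VM ID, and relying on qemu-guest-agent pings to detect a hang, are assumptions for illustration):

#!/bin/sh
# restart a VM whose guest agent has stopped answering
VMID=101   # hypothetical VM ID
if ! qm guest cmd "$VMID" ping >/dev/null 2>&1; then
    qm stop "$VMID" && qm start "$VMID"
fi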


Comments

  • PulsedMediaPulsedMedia Member, Patron Provider

    Oh and a reward for anyone who can solve this without throwing hardware at it // without causing massive amounts of work, in the form of service credit.

    Thanked by 1host_c
  • edited January 24

    It's very much an absolute stab in the dark but maybe try strace on qemu. Maybe it hangs inside a syscall or at least those give some kind of hint as to in what direction to look into it further.
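
    Something like this might be a starting point (assuming the usual Proxmox PID file location; <vmid> is the VM's numeric ID):

    strace -f -tt -p "$(cat /var/run/qemu-server/<vmid>.pid)"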

    Thanked by 2tmntwitw mailcheap
  • PulsedMediaPulsedMedia Member, Patron Provider

    @totally_not_banned said:
    It's very much an absolute stab in the dark but maybe try strace on qemu. Maybe it hangs inside a syscall or at least those give some kind of hint as to in what direction to look into it further.

    hmm... worth a try probably. The random nature of this doesn't help much in debugging tho :(

  • amarcamarc Veteran

    What hardware ? What kernel ?

    Did you try this:

    https://lowendtalk.com/discussion/comment/3650040#Comment_3650040

  • @PulsedMedia said:

    @totally_not_banned said:
    It's very much an absolute stab in the dark but maybe try strace on qemu. Maybe it hangs inside a syscall or at least those give some kind of hint as to in what direction to look into it further.

    hmm... worth a try probably. The random nature of this doesn't help much in debugging tho :(

    Yeah, not being able to reliably reproduce a bug is pretty much the worst that can happen with debugging.

  • Perhaps it's worth a shot to dump the entire memory of the qemu process that has locked up. Analyzing the dump with debug symbols may reveal some insight into where the qemu process threads are stuck.
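
    A rough sketch of that (assuming you know the qemu PID and have matching debug symbols installed; package names vary by distro):

    gcore -o /tmp/qemu-hung.core <qemu-pid>
    gdb /usr/bin/qemu-system-x86_64 /tmp/qemu-hung.core.<qemu-pid>
    (gdb) thread apply all bt

    Even without symbols, the thread backtraces can at least show whether everything is stuck waiting on I/O.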

  • ehabehab Member

    It's a sign to sell the business.

    Thanked by 1Erisa
  • PulsedMediaPulsedMedia Member, Patron Provider

    @amarc said:
    What hardware ? What kernel ?

    Did you try this:

    https://lowendtalk.com/discussion/comment/3650040#Comment_3650040

    Worth a try! Have not tried all of those particular kernel parameters.
    Implemented this on a bunch of guests now, the most frequent crashers.

    @totally_not_banned said:

    @PulsedMedia said:

    @totally_not_banned said:
    It's very much an absolute stab in the dark but maybe try strace on qemu. Maybe it hangs inside a syscall or at least those give some kind of hint as to in what direction to look into it further.

    hmm... worth a try probably. The random nature of this doesn't help much in debugging tho :(

    Yeah, not being able to reliably reproduce a bug is pretty much the worst that can happen with debugging.

    Indeed it is, and those are the ones that typically linger for a long time and take a s**tload of your time.

    With one server model we had unexplained frequent halts too, but most commonly BAD PERFORMANCE, solved by a simple reboot. Nothing seemed to solve it for good. 1½ years of hardship with them.

    Until one day a chipset heatsink stud snapped, or we just randomly wondered whether it might be chipset temps. We took the worst nodes: all the studs had snapped, the paste was rock hard and difficult to remove, so at first we waited for IPA to dissolve it (then realized a plastic scraper could get 95% of it off first!), hot-glued the heatsink back on, and added a 40mm fan to it.

    Then it was a waiting game: maybe a month of random nodes still crashing occasionally, but not the few worst ones anymore.

    Then came the crunch time, mid-summer: ~all of the nodes needed this maintenance. Every shop in Finland: no 40mm fans, only a few at a cost of 15-25€ a pop (WTF?!?).

    We needed hundreds of 40mm fans, like, by yesterday lol

    But yeah, it took 1½ years to figure it out. Turns out the reason was multifold:

    • BIOS BUG: the chipset thermal sensor reported 20-25°C lower temps than actual.
    • Dell Datacenter Services went cheapskate! The motherboard manufacturer specifies a full-copper heatsink, as tall as fits. DCS had used aluminium, and so low-profile that a full 20mm-tall 40mm fan could fit on top of it and still leave enough room for intake.

    Now they are still our most stable servers to date, at a ripe young age of 14 years! (manufactured 08-09/2009!)
    They are still snappy and fast, but because they are that old and people want newer, we have to refurb them, despite them still being perfectly good: sufficient performance, low power usage (60W each with 4x 3.5" HDD!). Makes me sad. They were really good workhorses, and still are. But we've been getting flak for ~6 years already for using "old outdated slow Opterons", even though the servers typically show 5% CPU load...

    Best of all, we only paid ~$110-120 each for them a decade ago, and some even came with RAM :D We bought 4-6 pallets of them, I forget how many. Many racks' worth.

    Parts of them still live on: the rack rails are universal and will live on, same with the PSUs and even many of the Delta fans in them; some heatsinks get repurposed, and I might even try to hack a standard motherboard into one of the chassis one of these days, if the backplanes can pass through SATA3 reliably (yeah, they were SATA2 too! But that doesn't matter with HDDs for a mostly random I/O workload).

  • PulsedMediaPulsedMedia Member, Patron Provider

    @ehab said:
    It's a sign to sell the business.

    Sure thing, got a few millies to spare so we can start negotiations? ;)
    Also, let's make the deal without a non-compete clause so I can just start another hosting business after a long vacation ;)

    Thanked by 2ehab bdl
  • @PulsedMedia said:

    No 40mm fans, only a few at a cost of 15-25€ a pop (WTF?!?).

    Yeah, good fans often tend to be stupidly expensive (not like 15-25€, but still...). If one doesn't need a whole lot, it's sometimes cheaper to just get some old server and rip it apart than to actually buy the fans, lol.

  • PulsedMediaPulsedMedia Member, Patron Provider
    edited January 24

    @totally_not_banned said:

    @PulsedMedia said:

    No 40mm fans, only a few at a cost of 15-25€ a pop (WTF?!?).

    Yeah, good fans often tend to be stupidly expensive (not like 15-25€, but still...). If one doesn't need a whole lot, it's sometimes cheaper to just get some old server and rip it apart than to actually buy the fans, lol.

    They weren't even good ones, they were run-of-the-mill entry level :D It's just Finnish shenanigans with our taxation: no one holds any stock because you have to pay tax on inventory.

    You can get brand-new top-of-the-line Delta fans, meant for 24/7/365 operation for decades, for that price, instead of entry-level gaming PC fans.

    We've been buying up a lot of used 120mm Delta fans, those are expensive even when used, 15-25€ a pop at cheapest, typically 20-25€. Bought some old cisco router fan trays to take the fans out too.

    The good thing with Delta is that those damn fans are probably eternal and will outlive even the cockroaches after a nuclear cataclysm. I don't think I've seen a single industrial Delta fan fail (they do make more entry-level thin ones for small industrial devices; not enough experience to say anything about those).

    It's bonkers to think that Delta can make fans that robust and long-lived, while the average fan is almost as complicated to build, with the same manufacturing methods etc., and might only live for six months.

  • v3ngv3ng Member, Patron Provider

    Do you happen to be using the QEMU guest agent on the affected VMs?
    Some OSes do not properly handle the fs-freeze that is issued when performing a backup, and crash or freeze.
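
    One way to check whether that is the trigger (a sketch; <vmid> is just a placeholder, and this assumes the agent is enabled) is to exercise the freeze by hand, outside of a backup, and see if the guest wedges the same way:

    qm guest cmd <vmid> fsfreeze-freeze
    qm guest cmd <vmid> fsfreeze-status
    qm guest cmd <vmid> fsfreeze-thaw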

    Thanked by 2ehab tmntwitw
  • PulsedMediaPulsedMedia Member, Patron Provider

    @v3ng said:
    Do you happen to be using the QEMU guest agent on the affected VMs?
    Some OSes do not properly handle the fs-freeze that is issued when performing a backup, and crash or freeze.

    Yes, all are Debian 10 guests.
    We do the final setup (automated VM creation) via the guest agent. It's a bit unreliable for anything else tho.

  • edited January 24

    @PulsedMedia said:
    We've been buying up a lot of used 120mm Delta fans, those are expensive even when used, 15-25€ a pop at cheapest, typically 20-25€. Bought some old cisco router fan trays to take the fans out too.

    Really? That's kind of harsh. Well, i guess i probably have the advantage of only randomly needing very small amounts comparatively, so i tend to gobble up what ends up on eBay for cheap. I'm basically buying 90% Delta and the occasional AVC or Nik-forgot-their-name(?). The major problem is that people locally selling them like really cheap (around ~2-5€) usually only have 1 or a handful at best (there's a couple good sellers with bigger stocks too but they regularly run out...) making it pretty stupid to pay 5€ in shipping. The stronger ones (like ~1.5A) are obviously way rarer and if i see a cheap offer it probably won't be able to be seen for much longer ;)

    My recommendation would be to check international eBay offers (aka those on other eBay branches - checking international on local eBay does not even display half of what you could theoretically buy). It's crazy how often you can get really good deals despite paying a ton in shipping. Jeez, some stuff is so cheap overseas that even after paying like 20-30€ in shipping and local VAT on it you still very much come out on top and eBay's global shipping program makes the process braindead easy (you don't have to deal with anything - you simply order and all duties are automatically collected by eBay during payment).

  • jackbjackb Member, Host Rep
    edited January 24

    Are spinning disks involved? If so, what's the current error recovery timeout? E.g.

    smartctl -l scterc /dev/sda

    I've seen such weird and wonderful behaviour in the past where there was no timeout specified and the disk hit a pending sector.
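
    If it comes back disabled, setting a bounded timeout (7 seconds here, purely as an example) looks like:

    smartctl -l scterc,70,70 /dev/sda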

    Another possibility is that the cache devices are struggling. For example, if you use a whole SSD (worse with consumer ones) without manual over-provisioning, writes to the SSD can slow to almost a halt. I haven't used bcache, but I imagine you should be able to inspect the write queue size somewhere (it would be growing). However, since it only impacts one VM at a time, I don't think it's that.
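
    For bcache specifically (treat these paths as an assumption taken from the kernel docs, since I haven't used it), the dirty/backlog state should be visible under sysfs, e.g.:

    cat /sys/block/bcache0/bcache/state
    cat /sys/block/bcache0/bcache/dirty_data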

  • yoursunnyyoursunny Member, IPv6 Advocate

    It's a curse inflicted by RFC8200 gods.

  • PulsedMediaPulsedMedia Member, Patron Provider

    @jackb said:
    Are spinning disks involved? If so, what's the current error recovery timeout? E.g.

    smartctl -l scterc /dev/sda

    I've seen such weird and wonderful behaviour in the past where there was no timeout specified and the disk hit a pending sector.

    Another possibility is that the cache devices are struggling. For example, if you use a whole SSD (worse with consumer ones) without manual over-provisioning, writes to the SSD can slow to almost a halt. I haven't used bcache, but I imagine you should be able to inspect the write queue size somewhere (it would be growing). However, since it only impacts one VM at a time, I don't think it's that.

    Drives test fine, and the cache is not bottlenecking either.

    @yoursunny said:
    It's a curse inflicted by RFC8200 gods.

    Ah ofc you needed to get some IPv6 Zealotry in there, how else! :)
    Shall we also install backdoors at the same time for your CCP "friends", so they can arrest our customers easier? :)

  • mailcheapmailcheap Member, Host Rep

    @PulsedMedia said:
    I suspect the culprit is bcache; it's badly maintained and riddled with bugs (especially management-wise).

    Haven't used bcache, but its sister project bcachefs was recently mainlined into kernel 6.7, and I remember how the developer Kent was saying it had solid foundation because of the number of years of bcache being battle-tested in production.

    I never would've thought it was badly maintained, or for that matter even regularly updated these days, as Kent is focused on his new FS and bcache is considered feature-complete and stable.

    If you want similar or better performance, ZFS on Linux handles everything much better IMHO. No more mdraid + bcache when you can get all that done with just ZFS. Not to mention you get good compression ratios with large files when using zstd without much CPU impact.

    As for figuring out what's causing the issue, as others have suggested strace is as good a place as any to start. I would also increase the log debug levels, make sure the logs are regularly flushed to disk (remove the hyphen in rsyslog for example), check the core dump, configure the VM to kdump, etc.
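
    For example (Debian-style paths and package names, adjust as needed): in /etc/rsyslog.conf, changing "-/var/log/syslog" to "/var/log/syslog" makes the log writes synchronous, and for kdump something along the lines of:

    apt install kdump-tools
    kdump-config show   # verify crashkernel= is set and a dump target is configured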

    Pavin.

  • PulsedMediaPulsedMedia Member, Patron Provider

    @mailcheap said: ZFS on Linux handles everything much better IMHO

    Should i file for bankruptcy now or a week after implementing ZFS on everything? ;)

    ZFS is bar none the worst "filesystem" out there; it's in its own class of badness.

    Yea there is hype around it, but if you start asking people, you are hard pressed to find a single person who's not had all their data nuked due to ZFS.

    We've had a perfect 100% track record with ZFS tho, and this is not 1 or 2 setups, but like 50+. Perfect 100% track record of data getting nuked. ;)

    Never mind the abysmally low performance by design; I've never seen anything that bad. It's built for SINGLE-stream sequential. That's its only useful workload, and only for ephemeral data you don't mind losing.

    ZFS is also perfect on what it was built for -> Maximize cost of operation, so enterprises can sell more hardware and support contracts ;)

    @mailcheap said: Not to mention you get good compression ratios with large files when using zstd without much CPU impact.

    Already-compressed data cannot be compressed further, and the small compression gain vs. the added latency (just from the compression) is irrelevant: worse service for the same amount of money. It's quite good for regular ol' desktop system usage though; there you start to see gains. Until your data is nuked.

    @mailcheap said: solid foundation because of the number of years of bcache being battle-tested in production.

    Well, considering how it even lacks working UUID mapping ... ;)
    There are a bunch of issues, and we've even had one bcache-caused data corruption event so far.

    It definitely is not "as rock solid" as many would make you believe.

    We suspect these crashes are caused by bcache. We are setting up otherwise identical nodes, but without bcache (well, one might get lvmcache).

    @mailcheap said: check the core dump, configure the VM to kdump

    no crash == no core dump.
    These halt, as described.

    @mailcheap said: make sure the logs are regularly flushed to disk

    Can't write to drive when it completely halts ;)

    Thanked by 1quicksilver03
  • mailcheapmailcheap Member, Host Rep

    @PulsedMedia said:
    Should i file for bankruptcy now or a week after implementing ZFS on everything? ;)

    ZFS is bar none the worst "filesystem" out there; it's in its own class of badness.

    Yea there is hype around it, but if you start asking people, you are hard pressed to find a single person who's not had all their data nuked due to ZFS.

    We've had a perfect 100% track record with ZFS tho, and this is not 1 or 2 setups, but like 50+. Perfect 100% track record of data getting nuked. ;)

    We'll have to agree to disagree on ZFS, it has been rock solid for us these past few years. I still miss using it (simplicity and performance) as our primary datastore (moved to ceph) but we still use it for our backups.

    I was honestly taken aback hearing this about ZFS 🤯, if you were to say this on any ZFS forum they would assume you mistook btrfs for zfs :wink: And if you say btrfs is even worse than ZFS, I'd kindly refer you to Facebook's bazillion servers running on btrfs. It all depends who you're asking I guess!

    Never mind the abysmally low performance by design; I've never seen anything that bad. It's built for SINGLE-stream sequential. That's its only useful workload, and only for ephemeral data you don't mind losing.

    It's the most performant FS I've ever used, bar none. Our use case (emails in maildir format) is typically the worst on filesystems, close to a hundred million small files (50-100 KB average) and ZFS was the fastest for it. Of course this is contingent on the correct configuration (multi tier cache using RAM + NVMe SSD, NVMe for WAL, other optimizations) and hardware.

    Ars' Jim Salter has a bunch of articles on ZFS, where I got my start on this amazing FS.

    ZFS is also perfect on what it was built for -> Maximize cost of operation, so enterprises can sell more hardware and support contracts ;)

    Cries in ceph 🥲

    We suspect these crashes are caused by bcache. We are setting up otherwise identical nodes, but without bcache (well, one might get lvmcache).

    Hope you figure out what's causing the issue. It sure isn't easy when the problem is not reproducible and happens randomly.

    no crash == no core dump.
    These halt, as described.
    Can't write to drive when it completely halts ;)

    Sounds like a crash, but then I'm not an expert on VMs 🫠
    With my limited experience and what you've described, it looks like the VM's kernel is panicking.

    Pavin.

  • lowenduser1lowenduser1 Member
    edited January 25

    VMs simply halt, and CPU usage stays flat at some constant level.
    The console will not load; typically it shows "no output", and even when it does load, it is unresponsive and shows no meaningful error messages. A reset will not fix this; it takes a complete stop and then a start.

    VM disks can get into a race condition with the cache options (unsafe, writethrough and all that) where the VM gets an 'OK' from the FS and waits for dirty memory to be flushed, but for some reason that doesn't happen in a timely fashion (bcache indeed?). With VM hosts with large memory, the default dirty sysctls might simply be too large - or you could attempt to switch to cache/mount options that wait for more certain flushes.
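
    If you want to rule that out, the knobs to look at on the host are along these lines (values are only an illustration, not a recommendation):

    sysctl vm.dirty_ratio vm.dirty_background_ratio
    sysctl -w vm.dirty_background_bytes=268435456 vm.dirty_bytes=1073741824

    plus checking which cache= mode the affected disks actually use, e.g. qm config <vmid> | grep -i cache (the <vmid> is just a placeholder).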

  • angstromangstrom Moderator

    @PulsedMedia said:

    @yoursunny said:
    It's a curse inflicted by RFC8200 gods.

    Ah ofc you needed to get some IPv6 Zealotry in there, how else! :)
    Shall we also install backdoors at the same time for your CCP "friends", so they can arrest our customers easier? :)

    Please abstain from cheap CCP rhetoric -- that is, from cheap CCP unwarranted accusations -- in the future

    Thanked by 1iKeyZ
  • @PulsedMedia
    What you said about zfs is quite surprising.

    We've had a perfect 100% track record with ZFS tho, and this is not 1 or 2 setups, but like 50+. Perfect 100% track record of data getting nuked. ;)

    Do you meet the necessary criteria? Enough RAM and "enterprise-class disks"?

    Yea there is hype around it, but if you start asking people, you are hard pressed to find a single person who's not had all their data nuked due to ZFS

    I think this paragraph would make a great survey. Experienced people can share the results.

    Because you scared me :)

  • PulsedMediaPulsedMedia Member, Patron Provider

    @lowenduser1 said:

    VMs simply halt, and CPU usage stays flat at some constant level.
    The console will not load; typically it shows "no output", and even when it does load, it is unresponsive and shows no meaningful error messages. A reset will not fix this; it takes a complete stop and then a start.

    VM disks can get into a race condition with the cache options (unsafe, writethrough and all that) where the VM gets an 'OK' from the FS and waits for dirty memory to be flushed, but for some reason that doesn't happen in a timely fashion (bcache indeed?). With VM hosts with large memory, the default dirty sysctls might simply be too large - or you could attempt to switch to cache/mount options that wait for more certain flushes.

    Interesting! There could be something there.

    @angstrom said:

    @PulsedMedia said:

    @yoursunny said:
    It's a curse inflicted by RFC8200 gods.

    Ah ofc you needed to get some IPv6 Zealotry in there, how else! :)
    Shall we also install backdoors at the same time for your CCP "friends", so they can arrest our customers easier? :)

    Please abstain from cheap CCP rhetoric -- that is, from cheap CCP unwarranted accusations -- in the future

    He has self admitted he wants it easier for Chinese Police to monitor, censor and arrest people. This is the CCP party line. This is the root for all of his IPv6 zealotry, he wants pervasive monitoring of each individual.
    Therefore, those jokes, puns, rhetoric are 100% warranted.

    Or should we conclude this forum is anti-freedom of speech, pro censorship, pro authoritarian control?

    Why don't you ever intervene when he harasses other people? Or am i correct, and i've not seen where you have intervened when he harasses others, constantly, and pervasively?

    Thanked by 1sillycat
  • angstromangstrom Moderator

    @PulsedMedia said:

    @angstrom said:

    @PulsedMedia said:

    @yoursunny said:
    It's a curse inflicted by RFC8200 gods.

    Ah ofc you needed to get some IPv6 Zealotry in there, how else! :)
    Shall we also install backdoors at the same time for your CCP "friends", so they can arrest our customers easier? :)

    Please abstain from cheap CCP rhetoric -- that is, from cheap CCP unwarranted accusations -- in the future

    He has self admitted he wants it easier for Chinese Police to monitor, censor and arrest people. This is the CCP party line. This is the root for all of his IPv6 zealotry, he wants pervasive monitoring of each individual.
    Therefore, those jokes, puns, rhetoric are 100% warranted.

    Where has he said this?

    Why don't you ever intervene when he harasses other people? Or am i correct, and i've not seen where you have intervened when he harasses others, constantly, and pervasively?

    I've called him out a couple of times in the past, but not because of IPv6. The fact that he maintains a list of which providers offer IPv6 and which do not is not harassment

    Thanked by 1yoursunny
  • PulsedMediaPulsedMedia Member, Patron Provider

    @tra10000 said:
    @PulsedMedia
    What you said about zfs is quite surprising.

    We've had a perfect 100% track record with ZFS tho, and this is not 1 or 2 setups, but like 50+. Perfect 100% track record of data getting nuked. ;)

    Do you meet the necessary criteria? Enough RAM and "enterprise-class disks"?

    Yea there is hype around it, but if you start asking people, you are hard pressed to find a single person who's not had all their data nuked due to ZFS

    I think this paragraph would make a great survey. Experienced people can share the results.

    Because you scared me :)

    The same systems work without any flaw with any other FS. That's a sufficient metric to conclude that ZFS weakens reliability compared to other options.

    Btw, if enterprise SAS drives are a requirement for ZFS, then it's a poor filesystem.

    We typically use setups like 12 drives with 144GB RAM, or 4 drives with 48GB RAM.
    Are you saying 48GB for just 4x8TB drives is not sufficient? ;)

    Also, if ZFS requires huge amounts of RAM to be a functional, safe-to-use filesystem, then it has failed. No other FS has this requirement.
    You can run a 12-drive md RAID + ext4 on 1GB of RAM if you want.


    Survey: people don't like to advertise the fact that their beloved ZFS has failed, but they let it slip sometimes. Well-known case example: Jake from LTT is as obvious a ZFS zealot as they come, but even he let it slip in one of their videos that ZFS has nuked LTT data, multiple times as I understood it, along with config issues mentioned in the same video.

    Also, it's so easy to just claim "faulty hardware" when ZFS insists on generating false error counters, along with huge PR about how it supposedly prevents data corruption that others cannot (both of those points are false, btw!).

    I might have confirmation bias tho

    Thanked by 2sillycat host_c
  • LeviLevi Member

    If the issue is business-breaking, pay for a Proxmox support request. They are decent.

    Thanked by 1mailcheap
  • PulsedMediaPulsedMedia Member, Patron Provider

    @angstrom said: Where has he said this?

    Some of his very old posts, and it's been brought up lately too.

    The post was about a classroom: someone made a post online which they should not have (according to the CCP), and they had to arrest the whole classroom instead of a single individual because they could not track the person behind NAT.

    @angstrom said: I've called him out a couple of times in the past, but not because of IPv6. The fact that he maintains a list of which providers offer IPv6 and which do not is not harassment

    and that list is not what i was talking about.
    He has harassed providers to give him free or near free services, so that he'd stop the harassment. This has been privately disclosed from another provider, that this happens. They just don't want his negativity, so they caved in to the extortion.

    This is very typical tactic, we've had many such demands, give me free service OR i'll post bad things about you.

    This happened with Yoursunny too: he couldn't get an under-cost service (18€/year for a 1-2TB storage, 1Gbps seedbox) renewed at that price despite his demands (energy crisis), so he has been harassing us everywhere he can, trying to cause as much negative PR as possible. He would rather have us go under than give up this 18€/year special deal.

    He lied about it too, as if we'd had cancelled it or demanded retroactively to pay more; Where we simply concluded come renewal time everyone has the choice to either renew at the new higher price, or cancel their service; we could not provide the service at the special price anymore thanks to inflation + energy crisis.

    Having to raise prices, especially raising the lowest-cost plans relatively more, was admittedly not nice, but the choice was either to close the doors completely for the next 6-36 months, ending everyone's service, or to increase prices at renewal time.

    If we don't keep calling his bullshit out, he will continue harassing us. Only after we (and some others) started calling him out (with jokes, puns etc. mind you!) has he stopped most of it, or at least the most outrageous lying.

    Mind you, i find it funny most of the time tho. Get to make so many pushup images! ;D

    @Levi said:
    If the issue is business-breaking, pay for a Proxmox support request. They are decent.

    Good point; they might get annoyed though, since I am looking for a one-time solution to cover a lot of servers while paying for only one.

    Thanked by 1sillycat
  • yoursunnyyoursunny Member, IPv6 Advocate

    @PulsedMedia said:
    He has harassed providers to give him free or near free services, so that he'd stop the harassment. This has been privately disclosed from another provider, that this happens. They just don't want his negativity, so they caved in to the extortion.

    Lies lies lies.

    We have the following free compute services at the moment:

    • box1, Evolution, in exchange for frontpage link.
    • box2, NanoKVM, applied per our qualifications.
    • box3, Hostnamaste, raffle prize for one year.
    • box4, Limitless, paid first year, raffle prize for one year applied as renewal.
    • box5, VirmAche, it was a dare that if they could not process 1000 tickets on a certain day, 20 people would receive NVMe2G for life.
    • box6, Hosteriod, a copy of box5 as DediPath refugee offer.
    • box8, WebHorizon, a copy of box5 as DediPath refugee offer.
    • ixp1, F4, Discord giveaway for first 5 orders.
    • ixp2, Cloudie, Discord giveaway for first 5 networks on ONIX.
    • ulx*, microLXC, applied per our qualifications.

    Free services are listed as ndn6 sponsors or AS200690 sponsors.
    These listings were voluntary, except for the frontpage link to Evolution.
    Otherwise, they would appear on hall of shame as usual, as for the case of VirmAche.

  • angstromangstrom Moderator

    @PulsedMedia said:

    @angstrom said: Where has he said this?

    Some of his very old posts, and it's been brought up lately too.

    The post was about a classroom: someone made a post online which they should not have (according to the CCP), and they had to arrest the whole classroom instead of a single individual because they could not track the person behind NAT.

    @angstrom said: I've called him out a couple of times in the past, but not because of IPv6. The fact that he maintains a list of which providers offer IPv6 and which do not is not harassment

    and that list is not what i was talking about.
    He has harassed providers to give him free or near free services, so that he'd stop the harassment. This has been privately disclosed from another provider, that this happens. They just don't want his negativity, so they caved in to the extortion.

    This is very typical tactic, we've had many such demands, give me free service OR i'll post bad things about you.

    This happened with Yoursunny too: he couldn't get an under-cost service (18€/year for a 1-2TB storage, 1Gbps seedbox) renewed at that price despite his demands (energy crisis), so he has been harassing us everywhere he can, trying to cause as much negative PR as possible. He would rather have us go under than give up this 18€/year special deal.

    He lied about it too, as if we'd had cancelled it or demanded retroactively to pay more; Where we simply concluded come renewal time everyone has the choice to either renew at the new higher price, or cancel their service; we could not provide the service at the special price anymore thanks to inflation + energy crisis.

    Having to raise prices, especially raising the lowest-cost plans relatively more, was admittedly not nice, but the choice was either to close the doors completely for the next 6-36 months, ending everyone's service, or to increase prices at renewal time.

    If we don't keep calling his bullshit out, he will continue harassing us. Only after we (and some others) started calling him out (with jokes, puns etc. mind you!) has he stopped most of it, or at least the most outrageous lying.

    Mind you, i find it funny most of the time tho. Get to make so many pushup images! ;D

    Okay, listen, two things:

    • If you or any other user here feels harassed by @yoursunny or any other user, then please document the alleged harassment, letting me or another mod/admin know, and I'll (we'll) look into it. In this connection, I would simply note that a user may sometimes say something that a provider doesn't like to hear -- for example, "I'd like to pay less" -- which wouldn't count as harassment per se
    • Again, unless you have evidence that @yoursunny acts on behalf of the CCP, please refrain from further cheap CCP rhetoric/accusations