
Debian 9 KVM VPS - Filesystem became "read-only". What to do?


Comments

  • Whenever I had this problem it was due to the provider. (SAN network connectivity issues in the case of a VPS, shit old hardware in the case of a WSI dedi.)

    The WSI dedi was hilarious: doing a dd test on it would cause it to go read-only.

    Thanked by: raindog308, WSS
  • raindog308 said: I was really just teasing @Amitz as I think he's the origin of "debian thx" :-)

    I am not, my dear @raindog308. But I love to cite it. ;-)

  • @bsdguy said:
    If poettering happened to be killed they should write "finally a problem solved" on his tombstone.

    Don't be coy, tell us what you really think.

  • @JustAMacUser said:

    @bsdguy said:
    If poettering happened to be killed they should write "finally a problem solved" on his tombstone.

    Don't be coy, tell us what you really think.

    Sorry, I can't help it; I'm just a very shy person who wouldn't bring himself to say something in a direct way. g

    Thanked by: JustAMacUser
  • WSS Member

    @bsdguy said:
    If poettering happened to be killed they should write "finally a problem solved" on his tombstone.

    PulseAudio and systemd are such great ideas, though. In truth, the concepts were great, but the execution is worse than djb code coupled with an f(77)2c cross-compiler.

    That said, djb code already looks like it ran through a preprocessor. Poettering code looks like it went through a gaggle of geese with IBS.

  • @WSS

    Even the idea of putting the code of a master like djb next to the mental jerks of a perfidious idiot strikes me as questionable.

    (And yes, djb has created, among other things, his own "meta assembler" for his crypto work.)

  • WSS Member
    edited October 2017

    @bsdguy said:
    @WSS

    Even the idea of putting the code of a master like djb next to the mental jerks of a perfidious idiot strikes me as questionable.

    Why? Dan would first eviscerate him with words alone. The problem with Dan's code is that it's virtually unparseable by humans, whereas the mess that is systemd is merely incomprehensible.

  • @raindog308 said:

    angstrom said: If time-tested stability is the criterion, one might argue that CentOS 6 or 7 is a better/safer choice than Debian 9.

    I was really just teasing @Amitz as I think he's the origin of "debian thx" :-)

    Oh, okay, I wasn't familiar with that reference. :-)

    @bsdguy, @WSS: Guys, let's try to steer clear of systemd/Poettering bashing. :-)

  • WSS Member

    @angstrom said:
    @bsdguy, @WSS: Guys, let's try to steer clear of systemd/Poettering bashing. :-)

    You do realize you just did the equivalent of chumming the waters, right? Right?

  • TheLinuxBug Member
    edited October 2017

    Amitz said: Are you implying that the "read-only" issue is caused by something that my provider - and only my provider - is responsible for?

    Actually, if they are running OnApp or something similar, then in certain cases, as a few people have mentioned, a bad glitch on the backend network or an overloaded path from the hypervisor to the data can result in exactly what you saw. If you had a file open on the server that needed writing, you would lose that, but in most cases when this happens the kernel remounts the filesystem read-only to protect the server. Unless there was a lot going on at the time, you most likely could have just rebooted and been fine to keep going; still, an fsck like you did was probably not a horrible idea.

    In OnApp setups and many OpenStack setups, while you can have a local volume (disk), the volumes are still bound to the VM over the network, especially where they are running HA, which provides two mirrored disk images. So if you run into a bad network issue, it can result in the disk images becoming unavailable to the VM and the kernel setting things read-only to protect against loss. It isn't exactly common, but I have seen it happen more than a handful of times to customer servers under certain circumstances.

    TL;DR:

    What I am saying is, this wasn't caused by something you did; it was definitely a glitch on the hypervisor node that caused you to lose access to your storage volume temporarily, resulting in things being switched to read-only (one way to spot that from inside the VM is sketched after this comment). Your VM doesn't have a physical volume attached, so this wasn't a physical error or something you could have done directly; the network backend connecting your storage simply failed temporarily in some way.

    Edit: worst case, the provider may actually have a near-dead drive in the hypervisor your volume stream is on, but this generally isn't the case, because on most OnApp/CloudStack setups you will have 2 volume streams, making the chance of losing both at once extremely rare; a network-based problem is far more likely.

    my 2 cents.

    Cheers!
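
    A quick way to check, from inside the VM, whether the kernel has flipped a filesystem to read-only is to look at the mount options in /proc/mounts. Below is a minimal Python sketch, assuming a standard Linux /proc/mounts layout:

        # List filesystems the kernel currently has mounted read-only.
        def read_only_mounts(path="/proc/mounts"):
            ro = []
            with open(path) as f:
                for line in f:
                    # /proc/mounts fields: device, mountpoint, fstype, options, dump, pass
                    device, mountpoint, fstype, options = line.split()[:4]
                    # "ro" in the option list means the filesystem is mounted
                    # (or was remounted by the kernel) read-only.
                    if "ro" in options.split(","):
                        ro.append((device, mountpoint, fstype))
            return ro

        for device, mountpoint, fstype in read_only_mounts():
            print("%s on %s (%s) is read-only" % (device, mountpoint, fstype))

    Some pseudo-filesystems are read-only by design, so only the entries for your real data volumes are interesting here.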

  • Had an issue like this; turns out my drive was full.
    On top of that, Ubuntu at least will let the filesystem fill up past the reserved amount, and to get free space back you have to delete a lot of data. In my case, only after deleting about 30GB on a 250GB drive did the percentage full start to go down (see the sketch below).
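
    The reserved amount in question is, on ext filesystems, typically the ~5% of blocks held back for root, which is why normal users can hit "disk full" while a little space is still technically free. A rough sketch of seeing that gap with Python's os.statvfs, assuming Linux and an ext-style filesystem at the given path:

        import os

        def usage(path="/"):
            st = os.statvfs(path)
            total = st.f_blocks * st.f_frsize
            free_total = st.f_bfree * st.f_frsize   # free space including root-reserved blocks
            free_users = st.f_bavail * st.f_frsize  # free space usable by unprivileged users
            reserved = free_total - free_users      # the root-reserved slice
            used_pct = 100.0 * (total - free_total) / total
            print("total %d GB, free %d GB (%d GB of that reserved for root), %.1f%% used"
                  % (total // 2**30, free_total // 2**30, reserved // 2**30, used_pct))

        usage("/")

    On ext2/3/4 the reservation itself can be inspected or changed with tune2fs.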

  • @angstrom said:
    @bsdguy, @WSS: Guys, let's try to steer clear of systemd/Poettering bashing. :-)

    Lennart groupie detected.

  • Maounique Host Rep, Veteran
    edited October 2017

    TheLinuxBug said: Cloudstack setups you will have 2 volume streams making the chance of you losing both at once extremely rare

    We have that, and it can still happen. It's not that you lose the streams per se, but you get issues at the VM level, such as a read-only filesystem, due to inadequate buffer configuration, since you cannot prepare for everything in advance. Salvatore knows best, but he had to tweak the buffers away from the defaults, because it turned out they caused exactly this kind of error in some cases even when the link was far from saturated, like 10-20%. With SAN storage, timeouts can occur for various reasons; it is an art to keep track of everything that can go wrong, and clairvoyance to be able to prepare for all situations, especially since this is a new setup, radically different from the VMware one you had for your corporate customers. (One way to spot such errors from inside the guest is sketched below.)
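
    From inside the guest, this kind of storage trouble usually leaves traces in the kernel log (I/O errors, aborted journals, read-only remounts). A rough Python sketch of scanning for them, assuming dmesg is readable in the guest; the match strings are common ext4/SCSI wordings, not an exhaustive list:

        import subprocess

        # Typical kernel messages seen when backing storage misbehaves.
        PATTERNS = ("I/O error", "Remounting filesystem read-only",
                    "Aborting journal", "blk_update_request")

        def storage_warnings():
            # Read the kernel ring buffer; may require root if dmesg is restricted.
            out = subprocess.check_output(["dmesg"], universal_newlines=True)
            return [line for line in out.splitlines()
                    if any(p in line for p in PATTERNS)]

        for line in storage_warnings():
            print(line)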
