
LEGACY-03.LV.BUYVM.NET is down now.


Comments

  • Francisco Top Host, Host Rep, Veteran
    edited December 2018

    willie said: Do any of these patches claim to fix issues after the fact? I'd have expected them to stop corrupted data from being written, rather than fixing corruption that already happened.

    Prior to the patch there was a bug where it would allocate way too much memory. If you were reading a multi-TB image that's mostly empty (a fresh image, for instance), it would end up trying to allocate hundreds of GB of RAM, or more, to account for all of the free space.

    Before the patch I couldn't even get to the point of qemu-img starting a check; it would just instantly OOM.

    That memory issue has existed for years. I was finding mailing list entries from 2010-2012 related to it.

    EDIT - I should add, the #1 reason we went with QCOW was that we intended to offer snapshots/backups down the road for block storage (say, 2x the cost of your volume or whatever). I can still do them; it's just a little more janky now.

    Francisco

    Thanked by 1: eol
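
A minimal sketch of the kind of check being described, assuming a Linux host with qemu-img installed: capping the child process's address space means a pathological qcow2 image makes the check fail on its own instead of OOMing the whole node. The image path and the 8 GiB cap are illustrative assumptions, not BuyVM's actual tooling.

```python
import resource
import subprocess

IMAGE = "/path/to/volume.qcow2"   # hypothetical image path
LIMIT_BYTES = 8 * 1024 ** 3       # illustrative 8 GiB address-space cap

def cap_memory():
    # Runs in the child between fork() and exec(); allocations past the
    # cap fail inside qemu-img instead of waking the host's OOM killer.
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, LIMIT_BYTES))

result = subprocess.run(
    ["qemu-img", "check", "-f", "qcow2", IMAGE],
    preexec_fn=cap_memory,
    capture_output=True,
    text=True,
)
print(result.returncode)  # 0 means the image checked out clean
print(result.stdout)
```
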
  • Daniel15

    @Francisco Are you switching existing volumes from QCOW to RAW, or do we need to create a ticket for that?

  • Francisco Top Host, Host Rep, Veteran

    @Daniel15 said:
    @Francisco Are you switching existing volumes from QCOW to RAW, or do we need to create a ticket for that?

    You can ticket and we can get you sorted in the morning.

    Francisco
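
For anyone curious what the QCOW-to-RAW switch involves, here is a minimal sketch of an offline conversion with qemu-img, assuming the volume is detached from the VM first; the paths are hypothetical and this is not BuyVM's actual migration procedure.

```python
import subprocess

SRC = "/path/to/volume.qcow2"  # hypothetical qcow2 source
DST = "/path/to/volume.raw"    # hypothetical raw destination

# qemu-img convert: -p shows progress, -f is the source format,
# -O is the output format. The resulting raw file can then be
# attached in place of the qcow2 image.
subprocess.run(
    ["qemu-img", "convert", "-p", "-f", "qcow2", "-O", "raw", SRC, DST],
    check=True,
)
```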

  • letbox Member, Patron Provider

    @Francisco said: (post quoted in full above)

    Are you using GlusterFS? After a node/brick crash, GlusterFS does a full rsync to make sure data is consistent, which can take a very long time with large files, so that backend isn't suitable for storing large VM images unless they've fixed that issue. It also seems GlusterFS supports raw, qcow2, and vmdk, while Ceph supports raw only, so there would be a reason to go that route.

  • Francisco Top Host, Host Rep, Veteran

    @key900 said:
    Are you using GlusterFS? After a node/brick crash, GlusterFS does a full rsync to make sure data is consistent, which can take a very long time with large files, so that backend isn't suitable for storing large VM images unless they've fixed that issue. It also seems GlusterFS supports raw, qcow2, and vmdk, while Ceph supports raw only, so there would be a reason to go that route.

    No, something else. Data is single-homed to a node; it isn't chunked around or anything like that.

    Francisco
