Comments
Prior to the patch there was a bug where it would allocate far too much memory. If you were checking a multi-TB image that's mostly empty (a fresh image, for instance), it would end up trying to allocate hundreds of GB of RAM, or more, to account for all of the free space.
Before the patch I couldn't even get to the point of qemu-img starting a check; it would just instantly OOM. That memory issue has existed for years. I was finding mailing list entries from 2010-2012 related to it.
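A rough sketch of why a mostly empty multi-TB image could still exhaust RAM: the pre-patch check kept per-cluster bookkeeping sized by the image's virtual size, not by the data actually written. The cluster size and bytes-per-entry figures below are illustrative assumptions, not taken from the QEMU source.

```python
# Back-of-the-envelope estimate: memory for a full consistency check
# scales with virtual size (one entry per cluster), regardless of how
# much of the image is actually allocated.
GiB = 1024 ** 3
TiB = 1024 ** 4

def check_memory_bytes(virtual_size: int,
                       cluster_size: int = 64 * 1024,
                       bytes_per_cluster_entry: int = 16) -> int:
    """Estimate per-cluster bookkeeping memory for a full check.

    cluster_size and bytes_per_cluster_entry are hypothetical values
    chosen only to show the scaling behaviour.
    """
    clusters = virtual_size // cluster_size
    return clusters * bytes_per_cluster_entry

# A fresh (empty) 100 TiB qcow2 volume still implies ~1.7 billion clusters:
est = check_memory_bytes(100 * TiB)
print(f"{est / GiB:.0f} GiB")  # grows with virtual size, not used space
```

With larger per-entry structures, or simply bigger images, the same arithmetic lands in the "hundreds of GB" range described above.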
EDIT - I should add, the #1 reason we went with QCOW was that we intended to offer snapshots/backups down the road for block storage (say, at 2x the cost of your volume, or whatever). I can still do them, it's just a little more janky now.
Francisco
@Francisco Are you switching existing volumes from QCOW to RAW, or do we need to create a ticket for that?
You can ticket and we can get you sorted in the morning.
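For anyone curious what a QCOW-to-RAW switch involves under the hood, a conversion is typically a single `qemu-img convert` run. The helper name and paths below are hypothetical; only the `qemu-img` flags (`-f`, `-O`, `-S`) are real options.

```python
# Sketch of building and running a qcow2 -> sparse raw conversion.
# qcow2_to_raw_cmd and the example paths are illustrative, not from the thread.
import shutil
import subprocess

def qcow2_to_raw_cmd(src: str, dst: str) -> list[str]:
    """Build the qemu-img invocation converting a qcow2 image to raw."""
    # -S 4k leaves runs of zeros unallocated, so an empty multi-TB
    # volume becomes a small sparse file on the host filesystem.
    return ["qemu-img", "convert", "-f", "qcow2", "-O", "raw",
            "-S", "4k", src, dst]

def convert(src: str, dst: str) -> None:
    """Run the conversion, failing clearly if qemu-img is missing."""
    if shutil.which("qemu-img") is None:
        raise RuntimeError("qemu-img is not installed")
    subprocess.run(qcow2_to_raw_cmd(src, dst), check=True)

print(" ".join(qcow2_to_raw_cmd("/tmp/volume.qcow2", "/tmp/volume.raw")))
```

The VM would normally be stopped (or the volume detached) during the conversion so the source image is quiescent.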
Francisco
Are you using GlusterFS? After a node/brick crash, GlusterFS does a full rsync to make sure data is consistent. That can take a very long time with large files, so that backend isn't suitable for storing large VM images, or have they fixed that issue? It also seems to support raw, qcow2, and vmdk, whereas Ceph supports raw only, so that seems like a reason to use it.
No, something else. Data is single-homed to a node; it isn't chunked around or anything like that.
Francisco