Sizing for storage VPS?

Comments

  • @Damian said: Yes. RAID storage is already in the works, and will be sold. Multiple people have commented in this thread that they do not need RAID storage and do not need its associated price premium, so I'm looking into accommodating those individuals, because I like money.

    So then what would happen to the data when a disk goes bad? Data gone forever? Sounds like it's more trouble than it's worth with large data sets. Need some type of redundancy right?

  • @Francisco
    /dev/sdg1 goes to /vz/root/100/storage
    /dev/sdg2 goes to /vz/root/101/storage

    You prepartition the drives, that's how you limit the sizes.
    For instance you could partition one 2TB HDD to one (real) 1TB partition + one 500GB partition + one 250GB partition.
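    Something like this with parted, for instance (untested sketch; device name and sizes are just examples):

    parted -s /dev/sdg mklabel gpt
    parted -s /dev/sdg mkpart cust100 ext4 1MiB 1000GiB     # ~1TB slice
    parted -s /dev/sdg mkpart cust101 ext4 1000GiB 1500GiB  # ~500GB slice
    parted -s /dev/sdg mkpart cust102 ext4 1500GiB 1750GiB  # ~250GB slice
    mkfs.ext4 /dev/sdg1 && mount /dev/sdg1 /vz/root/100/storage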

  • @Francisco said: I hope he plans to only ever sell single drives.

    You can't get single drives cheap enough to justify selling a single drive @ $15/yr.

  • @Corey said: So then what would happen to the data when a disk goes bad? Data gone forever?

    Yes, the data disappears.

    @Corey said: Need some type of redundancy right?

    The users who pointed this out said:

    @joepie91 said: I'd recommend offering both RAID-redundant and non-RAID-redundant storage space - I'd imagine I'm not the only one that has his own redundancy solution set up :)

    @rajprakash said: I'm all over the non-RAID protected storage! Like some others, I only use it as a mirror for an already redundant solution I have setup.

    I think there were a few others but I can't find their posts.

  • Francisco Top Host, Host Rep, Veteran

    @Corey said: You can't get single drives cheap enough to justify selling a single drive @ $15/yr.

    Read back over the whole thing, your comment doesn't relate to this at all :)

    @rds100 said: You prepartition the drives, that's how you limit the sizes.

    For instance you could partition one 2TB HDD to one (real) 1TB partition + one 500GB partition + one 250GB partition.

    Yep, just really screwy is all :)

    Francisco

  • Damian Member
    edited October 2012

    @rds100 said: /dev/sdg1 goes to /vz/root/100/storage

    /dev/sdg2 goes to /vz/root/101/storage

    You prepartition the drives, that's how you limit the sizes.

    For instance you could partition one 2TB HDD to one (real) 1TB partition + one 500GB partition + one 250GB partition.

    That would end up as a manual management hell, unfortunately. Also, doesn't Linux have a partition limit of 16 partitions per device or something?

    As @Francisco pointed out, OVZ doesn't handle having quotas on external-bound mounts, so that's why I was looking at KVM instead.
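    With KVM, the storage could just be handed to the guest as a second virtual disk, e.g. via libvirt (hypothetical names, untested):

    qemu-img create -f qcow2 /var/lib/libvirt/images/cust100-storage.qcow2 500G
    virsh attach-disk cust100 /var/lib/libvirt/images/cust100-storage.qcow2 vdb \
        --driver qemu --subdriver qcow2 --persistent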

  • Corey Member
    edited October 2012

    @Francisco said: Read back over the whole thing, your comment doesn't relate to this at all :)

    It was a direct response to your comment....

    I've read this whole thing... and all I can think the whole time I am reading it is 'WHY'......

    Say the disk fails and your backup customer's data is lost...... wouldn't they still expect you to be able to restore the data? Even if they don't, you're going to have to do manual work because their VPS will then be broken?

    Wouldn't it be better to spend the extra money so you don't have to go into manual management hell?

  • @Damian said: That would end up as a manual management hell, unfortunately. Also, doesn't Linux have a partition limit of 16 partitions per device or something?

    That's where LVM comes into play :) LVM is basically a glorified partitioner. If 16 partitions on a single HDD are not enough, you could create a PV on that HDD, then a VG containing only that PV, and then as many LVs as you can fit in that VG.
    But if you just sell 1TB, 500GB and 250GB sized storage plans, this would not be necessary.
    And yes, I agree management would be... unpleasant.
    Also, I'm not sure what the limit is on the number of mounted file systems under Linux.
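    In commands, that chain would look roughly like this (untested sketch; names are examples):

    pvcreate /dev/sdg                        # whole disk as one physical volume
    vgcreate storage_vg /dev/sdg             # volume group containing just that PV
    lvcreate -L 1T -n cust100 storage_vg     # carve out per-customer LVs
    lvcreate -L 500G -n cust101 storage_vg
    lvcreate -L 250G -n cust102 storage_vg
    mkfs.ext4 /dev/storage_vg/cust100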

  • @Corey said: Say the disk fails and your backup customer's data is lost...... wouldn't they still expect you to be able to restore the data?

    There would be many warnings that this is non-RAID storage. I'm not sure any of the backup/storage providers have backups on their backup/storage systems. But do correct me if I'm wrong on that...

    @Corey said: I've read this whole thing... and all I can think the whole time I am reading it is 'WHY'......

    I was quite surprised too.

    @Corey said: Even if they don't, you're going to have to do manual work because their VPS will then be broken?

    Their VPS should still work properly, because the HN will be on something simple like RAID 1, which will be the root partition, and the VPS containers would still be installed to /vz/private/* as normal. The difference is that inside the VPS container, the user would have another directory like /storage, mounted into the root of the VPS container, and that /storage directory is something like /mnt/lvm/vg001 on the HN.
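    Roughly, on the HN (paths and CTID are examples, untested):

    mkdir -p /vz/root/100/storage
    mount --bind /mnt/lvm/vg001 /vz/root/100/storage   # appears as /storage inside CT 100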

  • http://www.64u.com/ offer 30GB of SSD storage and 100GB of RAID-protected hard drive storage on the same container. I think they effect this by having vSphere show two hard drives to the container.

    More discussion about multiple hard drive mounts in these threads:
    http://www.lowendtalk.com/discussion/5386/dual-ssd-and-hdd-vps
    http://www.lowendtalk.com/discussion/5377/which-providers-offer-lebs-with-combo-of-ssd-and-non-ssd-local-storage

  • @Damian said: Their VPS should still work properly, because the HN will be on something simple like RAID 1, which will be the root partition, and the VPS containers would still be installed to /vz/private/* as normal. The difference is that inside the VPS container, the user would have another directory like /storage, mounted into the root of the VPS container, and that /storage directory is something like /mnt/lvm/vg001 on the HN.

    OK, but then you would have to manually go in and remove the VG, replace the disk, and create a new VG and assign it to the container?

  • @Corey said: OK, but then you would have to manually go in and remove the VG, replace the disk, and create a new VG and assign it to the container?

    There's a way to have the VG drop the PV and continue on like nothing happened. It's a combination of http://osdir.com/ml/linux-lvm/2010-09/msg00007.html and http://www.centos.org/docs/5/html/5.2/Cluster_Logical_Volume_Manager/mdatarecover.html, but I'm having a hard time finding the exact instructions.
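    If I understand those pages right, it would be something like (untested):

    vgreduce --removemissing storage_vg   # drop the dead PV; the VG carries on
    # once the failed disk is physically replaced:
    pvcreate /dev/sdh
    vgextend storage_vg /dev/sdh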

  • @Damian said: VG drop the PV and continue on like nothing happened.

    But then it would still have to be assigned to a new PV for them to have their storage again, right? (manually)

  • Damian Member
    edited October 2012

    @Corey said: But then it would still have to be assigned to a new PV for them to have their storage again, right? (manually)

    It shouldn't; the VPS would be rebooted and they would just start writing to a different section of the VG. The issue would be if the server is sold to 100% capacity, is actually filled to 100% capacity, and a disk dies. The VPSes that have files on the drive that died would find those files missing, but other files still existent (because those files live on a different drive in the stripe), and they would be unable to write new files until the drive is replaced and the VG re-extended.

    Alternatively, I could specify that a VG lives on a specific PV or PVs, and then it's confined to that PV, and I don't have to worry about a VPS losing "these files here, but not those files there", but that returns to manual management hell.
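    For reference, pinning an LV to a specific PV is just a matter of naming the PV at creation time (example names):

    lvcreate -L 1T -n cust100 storage_vg /dev/sdg   # allocate extents only from /dev/sdg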

  • @Damian said: It shouldn't; the VPS would be rebooted and they would just start writing to a different section of the VG. The issue would be if the server is sold to 100% capacity, is actually filled to 100% capacity, and a disk dies. The VPSes that have files on the drive that died would find those files missing, but other files still existent (because those files live on a different drive in the stripe), and they would be unable to write new files until the drive is replaced and the VG re-extended.

    OK, I see. Thank you!

  • Additionally, I'm not married to the idea of using LVM, OVZ, KVM, or anything else. I'm open to different/better ideas, if anyone has them.

  • joepie91 Member, Patron Provider
    edited October 2012

    @Corey said: Say the disk fails and your backup customer's data is lost...... wouldn't they still expect you to be able to restore the data? Even if they don't, you're going to have to do manual work because their VPS will then be broken?

    I definitely wouldn't. If I'm using something like Tahoe-LAFS, a lost VPS is really not a big deal - I just repair the files and I'm good to go again. I've seen my storage grid handle larger failures than one server (say, something along the lines of a continent effectively going offline during some routing issues) and still work properly.
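    The repair itself is a one-liner, e.g. (with "backups:" being a Tahoe alias for the stored directory):

    tahoe deep-check --repair --add-lease backups: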

  • I wonder how valuable, from the customer's point of view, some plain FTP space for backups would be instead of a separate VPS.
    That would allow the provider to get creative with ZFS and such.

    Of course FTP is insecure, so replace FTP with SFTP.
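    For example, sshd can do chrooted, SFTP-only accounts natively; roughly this in sshd_config (username and path made up):

    Match User cust100
        ChrootDirectory /srv/storage/cust100
        ForceCommand internal-sftp
        AllowTcpForwarding no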

  • An FTP-only thing would be useless to me.

    I like a small usable VPS on the storage, so things like rsync can be used from that side as well as remotely. Sometimes it's better to run things on one side or the other (especially with asymmetrical connections that have lousy upload -- think consumer home connections).
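    e.g. pulling from the storage-VPS side instead of pushing from home (hostnames and paths made up):

    rsync -az --delete user@home-box:/data/ /storage/backups/home-box/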

  • @Damian

    I would definitely be interested in a low-cost, no-redundancy storage solution.

    I really hope you can come up with something soon because there is a gap in the market for such a service.

  • FreeBSD with a jail per user and storage on ZFS?

  • @craigb said: FreeBSD with a jail per user and storage on ZFS?

    Might be a viable idea. I don't have a lot of experience with FreeBSD jails; anyone else able to chime in on this?

  • prometeus Member, Host Rep

    what prices are you trying to achieve?

  • @prometeus said: what prices are you trying to achieve?

    Cheap :) I've already priced RAID storage at 4.1 cents per GB, so non-RAID would need to be less, but I don't have a figure in mind.

  • @damian ezjail makes jail creation and maintenance very simple: http://erdgeist.org/arts/software/ezjail/. No idea about hooks into a provider front-end. ZFS is fantastic - gives a lot of flexibility in how you set things up...
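    A rough sketch of what that could look like (pool and jail names are examples, untested):

    zfs create -o quota=500G tank/jails/cust100     # per-customer dataset with a hard quota
    ezjail-admin create -c zfs cust100 192.0.2.100  # jail root on its own ZFS filesystem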
