New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
It should follow the routing table, i.e. if your target is on the private segment (via IP, hosts file or DNS) then traffic flows over the private route :-)
For the money they are collecting from us, it is about time they released something helpful.
@soluslabs really nice selection of music for your youtube videos.
On topic:
thanks for the feature update.
It would be better if you narrated the videos; that would be great for people with visual impairments.
@soluslabs larger IP assignments for IPv6 per VM?
@soluslabs what about
IP Pool locking - lock a pool so no more vps get assigned to it?
@FRCorey You can do this by just reserving the IPs, not really a necessary function.
Finally
Just ran a little test on the bandwidth stats, transferring a KVM virtual server with a 20 GB HDD. Data on the virtual server is ~1 GB, and both servers are in the same DC over a 100 Mbit connection.
http://bin.soluslabs.com/1039/61660915/raw/
@soluslabs
Does anybody know when SolusVM 2 will be out, or is it scrapped, or ______?
Been a long time since the last update about SVM2
Scrapped; they are just doing everything in minor releases/versions so it doesn't take a decade to reach 2.0.
@soluslabs
Please make sure you release this feature with full api functions. :-)
@soluslabs
When is the next release? Do you have a date or an ETA?
When? The week's almost over.
No. When we want all VPSes out of a pool, going in every night after terminations run and reserving the IPs is a little excessive, when one checkbox would ensure that once an IP is let go it's not reassigned again.
@soluslabs also what about disk resizing for KVM/Xen?
qcow; I asked them about it. They said it can be an I/O problem. I believe I replied something like "not for me..." but I might be imagining that.
That was posted on the 22nd. It is now the 1st of March. Any updates on a beta release?
There was some delay, as far as I am aware they will release it sometime next week.
That's SolusLabs: big announcements but nothing real. SolusVM v2 was already announced in 2012, and I don't see anything. We're changing to OnApp once they provide the ISO mount function.
What they said is not really correct. You lose a few percent of performance, but you gain many features: disk resizing, backups, migrations, etc. All the big players use files to store their disks: VMware, XenServer, Hyper-V, Proxmox... apparently none of them know what SolusLabs knows.
Is this actually live migration for KVM/Xen? Without shared data storage I'm not sure how it would be.
If KVM is just using LVM for the storage then packing up an entire logical volume and moving it over the network shouldn't be too complicated from a technical point of view. Once it's been dumped into a file then it's just like moving a vzdump from OpenVZ really.
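The "pack up the LV and ship it" approach can be sketched as a dd-over-SSH pipeline. This is just an illustration of the idea; the LV path and host below are placeholders, not SolusVM's actual layout:

```python
import shlex

def lv_copy_command(lv_path, dest_host, dest_path):
    """Build an offline copy pipeline: dump the logical volume with dd,
    compress it, and stream it over SSH into the matching LV on the
    destination node. Paths/host are hypothetical examples."""
    return (
        f"dd if={shlex.quote(lv_path)} bs=4M | gzip -c | "
        f"ssh {shlex.quote(dest_host)} "
        f"'gunzip -c | dd of={shlex.quote(dest_path)} bs=4M'"
    )

cmd = lv_copy_command("/dev/vg0/kvm101_img", "10.0.0.2", "/dev/vg0/kvm101_img")
print(cmd)
```

Note this is an offline copy, so by itself it is exactly the "not live" case discussed below: the VM would have to be stopped for the duration of the transfer.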
Note that I really have no clue what I'm talking about so I might be way off the mark.
I think they're using libvirt's "virsh migrate" feature, which can do live migration without any shared storage. Libvirt copies the whole HDD from node to node. It's not heavy; all you have to do is:
virsh migrate --live --domain kvmXXX qemu+ssh://10.0.0.xxx/system --copy-storage-all --verbose --persistent --undefinesource --change-protection
I wrote a small PHP script in a few days to do it automatically: ./migrate.php node kvmXXX and the script does everything for you. The hardest part of the script was getting all the needed information out of the SolusVM DB.
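A wrapper like the one fileMEDIA describes mostly just assembles that virsh command. Here's a minimal sketch (in Python rather than PHP, with a hypothetical domain name and IP; a real script would also query the SolusVM DB first):

```python
import shlex

def build_migrate_cmd(domain, dest_ip):
    """Assemble the virsh live-migration command quoted in this thread.
    Only the command line is built here; running it (and looking up the
    domain/destination in the SolusVM DB) is left to the caller."""
    return [
        "virsh", "migrate",
        "--live",
        "--domain", domain,
        f"qemu+ssh://{dest_ip}/system",
        "--copy-storage-all",
        "--verbose",
        "--persistent",
        "--undefinesource",
        "--change-protection",
    ]

cmd = build_migrate_cmd("kvm101", "10.0.0.2")
print(shlex.join(cmd))
```

From there it's a `subprocess.run(cmd)` away, assuming SSH keys are set up between the nodes and a matching LV already exists on the destination.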
@Oliver -- Yeah, but that isn't live doing it that way.
@fileMEDIA -- Hrm, could be, I've never looked at 'virsh migrate', I wonder how it handles keeping the changes to the underlying LV in sync.
Ah OK - I hadn't really read the thread. If it's done another way that keeps things live then that's even better. :-)
@fileMEDIA, --copy-storage-all, interesting -- I wonder how it keeps changes in sync since it could potentially take a long time to copy the LV.
@soluslabs, is KVM/Xen migration done live?
It's easy; you only have a downtime of around 5 seconds:
1) Libvirt creates a new VM with the same config on the remote node.
2) Libvirt takes a snapshot of the HDD.
3) Libvirt copies the whole disk from the snapshot to the remote node.
4) Libvirt syncs the RAM between both nodes.
5) Libvirt halts the VM and copies the differences to the new node. (Around 5 seconds.)
6) Libvirt resumes the VM on the new host.
You can test it easily:
1) Create the same LVM volume on the remote host.
2) Firing this command: virsh migrate --live --domain kvmXXX qemu+ssh://10.0.0.xxx/system --copy-storage-all --verbose --persistent --undefinesource --change-protection
@jeff_lfcvps Sorry, that's not correct: http://blog.allanglesit.com/2011/08/linux-kvm-management-live-migration-without-shared-storage/
We've been using it for months.
@fileMEDIA, Interesting -- I wonder how it keeps track of differences to have such a small downtime. That being said, I wouldn't consider 5 seconds of downtime to really be live but I guess it's close enough to not care for the majority of applications.
I am sure the downtime depends on how much activity there is on the server. I host some VPSes that vzdump can transfer with only a few seconds of downtime; others with busy MySQL databases require much longer to sync whatever changes have been made before the "RAM" syncs...
It slices the partition/block device into 256 kB blocks and pushes whatever has changed since the initial data push; the same way rsync handles it for OpenVZ, except rsync works with files instead of blocks.
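The block-diff idea above can be illustrated like this (a toy sketch, not libvirt's actual implementation):

```python
BLOCK = 256 * 1024  # 256 kB chunks, as described above

def changed_blocks(old, new, block_size=BLOCK):
    """Return (offset, data) pairs for each block that differs between the
    initial copy and the current disk state; only these need to be pushed
    during the brief pause at the end of the migration."""
    diffs = []
    for off in range(0, max(len(old), len(new)), block_size):
        a = old[off:off + block_size]
        b = new[off:off + block_size]
        if a != b:
            diffs.append((off, b))
    return diffs

# Toy example: a 1 MB "disk" where one byte changed after the first push.
initial = bytes(1024 * 1024)
current = bytearray(initial)
current[300_000] = 0xFF  # falls inside block 1 (offsets 262144-524287)
dirty = changed_blocks(initial, bytes(current))
print([off for off, _ in dirty])  # → [262144]
```

Only one 256 kB block out of four gets re-sent, which is why the final cutover can stay in the low seconds even for large disks, as long as the write rate is modest.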