Comments
@Nick_A Looks like it:
http://soluslabs.com/screenshots/datm.png
I just want to make sure. I mean, we have some features in SolusVM already that don't work even when there is a button for them.
Like...?
Central Backups unless I'm shamefully mistaken?
They actually work (for OpenVZ); I use it.
You installed it before it was removed from the installer, right?
@prometeus the legacy installers are avail to use.
Yes, I remember seeing a discussion about that. But I was hoping to get the new one sooner or later. I also need to figure out how much space is required for 30-40 nodes with the backup feature enabled... :P
Yes. I still have a copy of the installer I used, if you need it. But I think the older installers are also available for download from the soluslabs website/wiki.
@soluslabs Any news? I know your timeline is always wrong; the 18th is already over..
What happened to SolusVM 2? I'm guessing that has been scrapped now.
It's always been like this sadly. Maybe they live on a different calendar.
@concerto49 Mayan calendar?
Is this still going through?
Any updates by chance?
18th March 2014 by any chance?
To be fair they probably had a few days interrupted work due to the WHMCS crap.
No idea. We've requested a feature regarding multiple LVM volumes on the same node (Proxmox can do it) and they said it would be in v2; it's really taking them ages.
Really considering switching control panels or building our own, but no time for that right now. Sigh.
18th of April maybe?
I too would love to know when this is coming out. That feature would be extremely useful..
@Evixo Do you mean Secondary Disks for KVMs? There's a dialog box to provision them, but I don't think it automatically mounts them.
Nor should it, really... Mounting disks isn't something that should be possible without access "inside" the KVM VPS.
No. Let's say you have a node capable of holding 8 disks. Currently, 4 of them are in use. CPU and network activity are minimal, so I'd like to add 4 SSDs to the node to make use of the unused resources.
So, you create another volume group (let's say "ssd") on the node. New VPSs should then have the option to either: 1) be created on the "old" (traditional disks) volume group, or 2) be created on the SSD volume group.
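For what it's worth, the second volume group itself is just plain LVM. A dry-run sketch of what that would look like on the node; the device names /dev/sdb../dev/sde and the VG name "ssd" are assumptions, and the leading `echo`s mean nothing is actually executed:

```shell
# Dry-run sketch: /dev/sdb../dev/sde are the 4 hypothetical new SSDs and
# "ssd" is the hypothetical new volume group name. Drop the leading "echo"s
# to run for real (as root).
DISKS="/dev/sdb /dev/sdc /dev/sdd /dev/sde"
for disk in $DISKS; do
  echo pvcreate "$disk"            # tag each SSD as an LVM physical volume
done
echo vgcreate ssd $DISKS           # group them into a second VG next to the existing one
echo vgs                           # would list both volume groups for verification
```

SolusVM would then need to let you pick which VG a new VPS lands on, which is exactly the missing feature.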
This would be quite excellent.
Quick update from my perspective.
Tested on Xen PV
Three big issues right away:
1) There is no option to leave the source server intact; you absolutely have to delete one or the other, which is ridiculous.
2) I did a test migration between DCs. It said it was finished, so I selected the Test Server option, which gives you a debug window to watch it boot etc. This flashed up for a second, the migration log got cleared, and it deleted the source server. As it happens, the destination (new) server did not boot, so had that been a customer server it would have been destroyed.
3) It took the old DC's IP with it rather than assigning a new one.
Every migration I have done so far using this system has failed in one way or another on Xen PV. I can only conclude that there is absolutely no way anyone actually tested it in any sort of environment that remotely resembles a live one before release; it is not BETA, it is ALPHA at best.
@AnthonySmith - KVM and OpenVZ seem to work fine. I've only seen a couple fail on the destination server, but was able to restore them on the source server and move them manually.
EDIT: But yes, grabbing IPs from the destination network would be a good feature.
Well, I can't speak for your experience, but I had no problems migrating just over 30 Xen PVs. Not a single error/problem was encountered. Well, actually, I did have some issues using the migration feature with Chrome, but after speaking to their support and switching to Firefox all was hunky dory! Could that be the issue? Hopefully they're working to sort out the Chrome issue though.
Maybe, but multiple people in here confirmed that IPs also get updated, and obviously that is not the case.
I am using chrome and have been informed about the debug window issue.
Who knows. Every one I have tried so far has had an issue of some description, but it is only BETA I suppose. Issues have been reported; I hope you guys did the same.
I will stick to bash scripts I guess.
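Something like this dry-run sketch of a manual LVM-to-LVM move, for instance. The LV paths, hostname, and guest name below are all made up, and the run() wrapper only echoes each command instead of executing it:

```shell
#!/usr/bin/env bash
# Dry-run sketch of a manual Xen PV migration over SSH.
# All names below are assumptions, not real infrastructure.
SRC_LV=/dev/vg0/vm101_img        # hypothetical source logical volume
DST_HOST=newnode.example.com     # hypothetical destination node
DST_LV=/dev/vg0/vm101_img        # hypothetical destination logical volume

run() { echo "$@"; }             # echo instead of executing; remove for real use

run xm shutdown vm101            # stop the guest so the disk image is consistent
run "dd if=$SRC_LV bs=4M | gzip -c | ssh root@$DST_HOST 'gzip -dc | dd of=$DST_LV bs=4M'"
run ssh root@$DST_HOST xm create /etc/xen/vm101.cfg   # boot it on the destination
# Only remove the source copy after the destination has been verified.
```

The nice part of doing it by hand is exactly what the SolusVM tool gets wrong: the source stays intact until you choose to delete it.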
We have migrated around 300 KVM VMs between Jena and Frankfurt without any issues. KVM works fine here. IP change is not implemented, but we don't need it.
What we do:
1) Start migration with compression and migrate data.
2) Once the migration has finished, do a debug boot, start VNC, and check that everything works.
3) Delete VM from the source node
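For KVM nodes, that checklist maps roughly onto plain virsh commands. A hedged, dry-run sketch: the guest name vm101 is an assumption, run() only echoes each command, and SolusVM of course drives all of this through its GUI rather than virsh directly:

```shell
# Dry-run sketch of the verify-then-delete workflow above; vm101 is a made-up
# guest name and run() just echoes each command instead of executing it.
run() { echo "$@"; }

# Step 2: boot the migrated guest on the destination and check it over VNC.
run virsh start vm101
run virsh vncdisplay vm101

# Step 3: only after verification, remove the definition on the SOURCE node.
run virsh undefine vm101
```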
I don't think keeping the old IP is really a bug. I would consider it a feature.
Ok, there could be a checkbox or select for "keep old IP / assign new IP".