Comments
Arch Linux, I think, is a good choice for development.
Ubuntu works wonders
Whatever you are most comfortable with.
Hi Sleddog
For a corporate solution: http://www.proxmox.com/products/proxmox-ve.
Easy to install, very versatile, can run OpenVZ and KVM on the same box; the storage options are great, and it does live migration as well.
This.
I use ArchLinux, which has the advantage of running the most recent kernel and qemu-kvm release, so I get all the latest (but usually undocumented) KVM features. But really it's just that I'm most comfortable with Arch.
The only thing I'd recommend checking when deciding whether your favourite distribution is suitable is whether libvirt is easy to install (e.g. whether it's in the distribution's package repositories). A lot of features and workarounds for qemu-kvm aren't documented anywhere except libvirt's source code, so libvirt is almost a necessity when running qemu-kvm unless you want to do tons of research and code-diving. On the other hand, it means writing XML, which might tip the balance back toward the research and code-diving.
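For a sense of the XML involved, a minimal domain definition looks something like this (just a sketch; the name, paths and sizes are made up, not from this thread):

<domain type='kvm'>
  <name>demo</name>
  <!-- memory is in KiB here, i.e. 128 MB -->
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/demo.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
    <graphics type='vnc'/>
  </devices>
</domain>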
But like miTgiB said, it all comes down to what you're most comfortable with because a distribution you know how to configure will work better than one with the latest features that you can't use or can't keep up-to-date and stable.
Thanks guys, very helpful. With these suggestions I'm down to either CentOS 6 or Debian 6. I'll probably give CentOS a whirl first and see how it goes.
Hmmm... I see CentOS 6.1 is still not released. Perhaps I'll try Debian.
Scientific Linux 6.1 has been released though. But honestly, I have CentOS 6.0 KVM nodes that have been up since the week CentOS 6.0 was released. If I wasn't tied to SolusVM, I would probably use Debian 6 even though I am much more comfortable in CentOS. I have a Debian KVM node at home which has given me zero problems in the year it's been running.
Debian 6 forces the use of LILO instead of GRUB for software RAID. I'm not thrilled with that. And software RAID is the only option budget-wise at the moment (I've used it for years, so RAID management isn't an issue).
And the reports from some providers (not you, miTgiB) of CentOS 6 issues are also bothersome.
So mulling it over, I think maybe CentOS 5.x. When it EOLs in March 2014 it'll be time for new hardware anyway; the stuff I'm using is already ~5 years old. Good time for a new build, and hopefully the budget will be brighter.
+1. I use Arch everywhere.
If you need to use software RAID:
http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Lenny
So you install Lenny with the RAID layout you need and then add the pve repo to install Proxmox, as sketched below.
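Roughly, per that wiki page (the repo line and package name here are from memory, so verify against the wiki before using them):

# Add the Proxmox VE repo and key to a stock Lenny install
echo "deb http://download.proxmox.com/debian lenny pve" >> /etc/apt/sources.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
apt-get update
# Package name is an assumption -- the wiki lists the exact meta-package
apt-get install proxmox-ve-2.6.32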
CentOS 6 or CentOS 5 + KVM + a good RAID setup = Pure Ownage for customized kernels or non-standard guests (Windows, BSD, Arch) on modern processors with VT-d enabled.
With a good RAID setup, you can install Windows or CentOS on a KVM VPS in less than 10 minutes and 5 minutes respectively.
If you are running basic Linux guests, Xen PV with pygrub will probably still give you the best performance plus standard-Linux-flavour compatibility.
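For reference, pygrub just needs to be named as the bootloader in the guest config so the guest boots its own kernel; a minimal illustrative domU file (all names and paths here are examples):

# /etc/xen/guest1.cfg -- illustrative Xen PV guest booted via pygrub
bootloader = "/usr/bin/pygrub"
name       = "guest1"
memory     = 512
vcpus      = 1
disk       = ["phy:/dev/vg0/guest1,xvda,w"]
vif        = ["bridge=xenbr0"]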
Thanks for that. Though I'm not sure about installing Lenny at this time. Updates might cease in a year? http://wiki.debian.org/DebianLenny
And thanks, Kiloserve, though Windows is not a factor.
Hi Sleddog
The Lenny install will be rock solid, as I am sure you know "Debian Stable" is.
With PVE 2.0 on the horizon, there will be a straight repo-based upgrade to Squeeze soon.
Give it a kick, I can promise you, you won't be disappointed.
KSM works like a charm.
Just my 2 cents.
Well, I'm up and running with CentOS 5.7 & KVM. Took a while to figure out the network bridge setup, but it's working now (sketch below). Successfully created a VM using virt-install and installed Debian 6. It can be accessed directly on the LAN and has internet access. Lots to learn...
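For anyone else stuck on the bridge, on CentOS it boils down to a pair of ifcfg files (the addresses here are examples, not my real ones):

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- physical NIC joins the bridge
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0 -- the bridge carries the host's IP
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1

Then a "service network restart" brings it up.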
Good to hear.
Have fun.
The Lenny install works just fine with Squeeze.
Francisco
Only real hiccup had nothing to do with KVM -- there's a new bug in the CentOS 5.7 netinstall whereby the GRUB bootloader is not set up when using software RAID 1... booting into rescue mode and installing it manually works.
Good to know... maybe I'll start over
Can't get virtio working. I built a new VM like this:
virt-install --name ve3 --ram 128 \
  --disk path=/var/lib/libvirt/images/ve3,size=5,bus=virtio \
  --network bridge:br0 --os-variant virtio26 --vnc \
  --cdrom=/var/lib/libvirt/isos/debian-6.0.3-i386-netinst.iso
Then connected via VNC and successfully completed the installation of Debian 6.
But then when the VM is restarted, all I get is:
Booting from Hard Disk...
Boot failed: could not read the boot disk
FAILED: No bootable device.
Any ideas?
Hi Sleddog
Use IDE for ISOs.
Not sure I follow you... the Debian installation used virtio and installed to /dev/vda1.
But after restarting the guest it can't see that boot device.
I can edit the config and change to 'ide' and then it boots, but it isn't using virtio then.
Sorry I missed the bit about it finishing the install.
I thought the iso was not booting and going straight to the disk on trying to install.
Bit brain dead on this side.
I have found that at times, if I created the disk as IDE or SCSI and try to use it as a virtio disk, it can't be read. Not sure if this applies to you though.
Got it to boot with virtio from /dev/vda1 by editing the config:
Change:
<source file='/var/lib/libvirt/images/ve3.img'/>
To:
<source file='/var/lib/libvirt/images/ve3.img,if=virtio,boot=on'/>
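For what it's worth, the cleaner way to get a virtio disk from libvirt seems to be setting the bus on the target element, rather than packing qemu options into the source path:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/ve3.img'/>
  <target dev='vda' bus='virtio'/>
</disk>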
But unfortunately there's a problem...:
Without virtio, in a vm:
With virtio, in the new, now-bootable vm:
As you can see, I eventually Ctrl-C'ed it to cancel.
@sleddog: It sure sounds a lot like https://bugzilla.redhat.com/show_bug.cgi?id=514899 but I can't imagine your qemu-kvm release is that old.
Edit: Never mind, sounds like you got it to boot.
Yes, it boots, but disk performance with or without virtio is at best 4 or 5 times slower than the 'bare metal' host node.
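A quick way to make that comparison is a simple sequential dd write on both the HN and in the guest (an illustrative command, not necessarily the exact test whose output got lost above):

# conv=fdatasync forces the data to disk before dd reports a speed
dd if=/dev/zero of=testfile bs=1M count=1024 conv=fdatasync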
I'm about ready to give OpenVZ a try....
Is your .img a full-sized file or is it a qcow2 file?
Francisco
Some other things to try as well if you are not using hardware RAID:
In the VM: echo noop > /sys/block/vda/queue/scheduler
On the HN: echo deadline > /sys/block/sda/queue/scheduler
I first came across this http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=/liaat/liaatbpscheduleroverview.htm
But some discussion resulted in noop and deadline instead of straight noop. I have these in /etc/rc.local since they are not persistent across restarts; see the sketch below.
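Concretely, the rc.local entries amount to something like this (a sketch; adjust the device names to your setup):

# /etc/rc.local on the HN -- deadline for every sd* disk
for q in /sys/block/sd*/queue/scheduler; do
    echo deadline > "$q"
done

# /etc/rc.local inside the VM -- noop for the virtio disk
echo noop > /sys/block/vda/queue/scheduler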
@miTgiB
That is very interesting indeed, since the HN has a hardware RAID card. Checking further:
[root@kvm02 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]