
While this is an oversimplification, here is the context:

There are two large hypervisors in the Linux world.

Xen, a standalone hypervisor that runs beneath the kernel and uses its own scheduler to hand out time slices to virtual CPUs.

KVM, which runs each virtual CPU as an ordinary host thread that is scheduled by the Linux scheduler.

When a hardware-virtualized vCPU is preempted, a VM exit occurs, which has to save and reload register state and other context, and that is expensive.
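To make the vCPU-as-thread and VM-exit points concrete, here is a rough sketch of the raw /dev/kvm ioctl interface that qemu is built on (x86 only, error handling omitted; these are the documented kernel ioctls, but this is not how qemu actually structures its code). The vCPU is just a file descriptor, the thread that calls KVM_RUN on it is what the Linux scheduler sees, and every return from KVM_RUN is a VM exit surfacing in userspace:

    /* tiny-kvm.c: run a 4-instruction real-mode guest under KVM.
       Build: cc tiny-kvm.c -o tiny-kvm (needs access to /dev/kvm). */
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* 16-bit guest: write 'A' to port 0x3f8, then halt */
        const uint8_t code[] = {
            0xba, 0xf8, 0x03,  /* mov $0x3f8, %dx */
            0xb0, 'A',         /* mov $'A', %al   */
            0xee,              /* out %al, (%dx)  */
            0xf4,              /* hlt             */
        };

        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

        /* one page of guest memory at guest-physical 0x1000 */
        void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        memcpy(mem, code, sizeof(code));
        struct kvm_userspace_memory_region region = {
            .slot = 0, .guest_phys_addr = 0x1000,
            .memory_size = 0x1000, .userspace_addr = (uint64_t)mem,
        };
        ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

        /* the vCPU is a plain fd; this thread drives it */
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
        struct kvm_run *run = mmap(NULL,
            ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0),
            PROT_READ | PROT_WRITE, MAP_SHARED, vcpu, 0);

        struct kvm_sregs sregs;
        ioctl(vcpu, KVM_GET_SREGS, &sregs);
        sregs.cs.base = 0; sregs.cs.selector = 0;  /* flat real mode */
        ioctl(vcpu, KVM_SET_SREGS, &sregs);
        struct kvm_regs regs = { .rip = 0x1000, .rflags = 0x2 };
        ioctl(vcpu, KVM_SET_REGS, &regs);

        for (;;) {
            ioctl(vcpu, KVM_RUN, 0);      /* returns on every VM exit */
            switch (run->exit_reason) {
            case KVM_EXIT_IO:             /* guest touched an I/O port */
                putchar(*((char *)run + run->io.data_offset));
                break;
            case KVM_EXIT_HLT:            /* guest halted: done */
                return 0;
            default:
                fprintf(stderr, "exit %d\n", run->exit_reason);
                return 1;
            }
        }
    }

Every iteration of that loop is a round trip out of guest mode, which is exactly the overhead described above; qemu's per-vCPU threads run essentially this loop, which is why the host scheduler can treat them like any other threads.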

Xen is what legacy AWS instances ran on, and it has the advantage that being fair to guests is an easier task.

KVM has the advantage of inheriting the benefits of the Linux scheduler, which is red-black-tree based and well optimized.

When a new CPU comes out, for example, KVM gains support from upstream Linux, while the Xen project has to add that support itself.

Once technology like cgroups improved, letting a vCPU thread run to completion rather than preempting it when a time slice expires meant the cost of a VM exit could be avoided.
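As a rough illustration of one such technique (not necessarily the exact mechanism meant above): with cgroup v2 you can carve out dedicated CPUs for a VM's qemu process so its vCPU threads are rarely preempted. This sketch assumes cgroup2 is mounted at /sys/fs/cgroup and root privileges; the cgroup name "vmpin" and the CPU range are made-up example values:

    /* vmpin.c: move a qemu process onto dedicated CPUs 2-3 via a
       cgroup v2 cpuset. Usage: ./vmpin <qemu-pid> (as root).
       "vmpin" and "2-3" are placeholder example values. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void write_str(const char *path, const char *val)
    {
        int fd = open(path, O_WRONLY);
        if (fd < 0 || write(fd, val, strlen(val)) < 0)
            perror(path);
        if (fd >= 0)
            close(fd);
    }

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <qemu-pid>\n", argv[0]);
            return 1;
        }
        /* allow child cgroups to use the cpuset controller */
        write_str("/sys/fs/cgroup/cgroup.subtree_control", "+cpuset");

        /* carve out CPUs 2-3 for the VM */
        mkdir("/sys/fs/cgroup/vmpin", 0755);
        write_str("/sys/fs/cgroup/vmpin/cpuset.cpus", "2-3");

        /* moving the pid moves all of its threads, vCPUs included */
        write_str("/sys/fs/cgroup/vmpin/cgroup.procs", argv[1]);
        return 0;
    }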

In theory, leveraging these well-optimized core Linux features is what will also benefit VirtualBox.

Most people who use KVM are using an abstraction layer like libvirt that hides how it is implemented.

In fact, if you look at the host's process list you will see qemu processes, even when KVM is doing the actual virtualization.
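Here is a hedged sketch of what that layering looks like from code, using libvirt's C API (build with -lvirt; qemu:///system is the standard local qemu/KVM driver URI). The "domains" libvirt reports here are exactly those qemu processes:

    /* list-vms.c: enumerate running guests via libvirt instead of
       raw KVM ioctls. Build: cc list-vms.c -o list-vms -lvirt */
    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* the qemu:///system URI selects libvirt's qemu/KVM driver */
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (!conn) {
            fprintf(stderr, "failed to connect\n");
            return 1;
        }

        virDomainPtr *domains = NULL;
        int n = virConnectListAllDomains(conn, &domains,
                                         VIR_CONNECT_LIST_DOMAINS_ACTIVE);

        /* each domain is a qemu process on the host; its vCPUs are
           threads of that process, scheduled like any other */
        for (int i = 0; i < n; i++) {
            printf("%s\n", virDomainGetName(domains[i]));
            virDomainFree(domains[i]);
        }
        free(domains);
        virConnectClose(conn);
        return 0;
    }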



