Tag Archives: virtualisation

Session notes from the Virtualization microconf at the 2012 LPC

The Linux Plumbers Conf wiki seems to have made the discussion notes for the 2012 conf read-only as well as visible only to people who have logged in.  I suspect this is due to the spam problem, but I’ll put those notes here so that they’re available without needing a login.  The source is here.

These are the notes I took during the virtualization microconference at the 2012 Linux Plumbers Conference.


About Random Numbers and Virtual Machines

Several applications need random numbers for correct and secure operation.  When ssh-server gets installed on a system, public and private key pairs are generated; random numbers are needed for this operation.  The same goes for creating a GPG key pair.  Initial TCP sequence numbers are randomized.  Process PIDs are randomized.  Without such randomization, we’d get a predictable set of TCP sequence numbers or PIDs, making it easy for attackers to break into servers or desktops.


On a system without any special hardware, Linux seeds its entropy pool from sources like keyboard and mouse input, disk IO, network IO, and any other sources whose kernel modules indicate they are capable of adding to the kernel’s entropy pool (i.e., the interrupts they receive come from sufficiently non-deterministic sources).  On servers, keyboard and mouse input is rare (most don’t even have a keyboard or mouse connected).  This makes getting true random numbers difficult: applications requesting random numbers from /dev/random have to wait for indefinite periods to get the randomness they need (for example, while creating ssh keys, typically during firstboot).
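
To make the problem concrete, here is a minimal sketch (mine, not from the original post) of a C program that first checks the kernel’s entropy estimate and then reads from /dev/random; on an idle, headless server the second read can block for a long time.  The paths are standard Linux interfaces, the buffer sizes are illustrative.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[64];
    ssize_t n;
    int fd;

    /* The kernel's estimate of available entropy, in bits. */
    fd = open("/proc/sys/kernel/random/entropy_avail", O_RDONLY);
    if (fd >= 0) {
        n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("entropy available (bits): %s", buf);
        }
        close(fd);
    }

    /* /dev/random blocks until the pool has enough entropy; on a
     * headless server this can stall indefinitely. */
    fd = open("/dev/random", O_RDONLY);
    if (fd >= 0) {
        n = read(fd, buf, 16);
        printf("read %zd random bytes\n", n);
        close(fd);
    }
    return 0;
}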



Avi Kivity Stepping Down from the KVM Project

Avi Kivity giving his keynote speech

Avi Kivity announced he is stepping down as (co-)maintainer of the KVM Project at the recently-concluded KVM Forum 2012 in Barcelona, Spain.  Avi wrote the initial implementation of the KVM code back at Qumranet, and has been maintaining the KVM-related kernel and qemu code for about 7 years now.


Virtualization at the Linux Plumbers Conference 2012

The 2012 edition of the Linux Plumbers Conference concluded recently.  I was there, running the virtualization microconference.  The format of LPC sessions is to have discussions around current as well as future projects.  The key words are ‘discussion’ (not talks; slides are optional!) and ‘current’ and ‘future’ projects: the aim is not to discuss work that’s already done, but rather unsolved problems and new ideas.  LPC is a great platform for getting people involved in various subsystems across the entire OS stack together in one place, so sticky problems tend to get resolved by discussing issues face-to-face.


FUDCon Pune: My talk on ‘Linux Virtualization’

My second talk at FUDCon Pune was on Virtualization (slides) on day 2.  While I had registered the talk well in advance, I wasn’t quite sure what to really talk about: should I talk about the basics of virtualization?  Should I talk about what’s latest (coming up in Fedora 16)?  Should I talk about how KVM works in detail?  My first talk, on git, had gone well, and as expected for this FUDCon, the majority of the participants were students.  Expecting a similarly student-heavy audience for the second talk as well, I decided on discussing the basics of the Linux Virt Stack.  Kashyap had a session lined up after me on libvirt, so I thought I could give an overview of virt-manager, libvirt, QEMU and Linux (KVM).

And since my registered talk title was ‘Latest in Linux Virtualization’, I did leave a few slides on upcoming enhancements in Fedora 16 (mostly concentrating on the QEMU side of things) at the end of the slide deck, to cover those things if I had time left.

As with the previous git talk, I didn’t get around to making the slides and deciding on the flow of the talk till the night before the day of the talk, and that left me with much less sleep than normal.  The video for the talk is available online; I haven’t seen it myself, but if you do, you’ll find I was almost sleep-talking through the session.

To make it interactive, as well as to keep me awake, I asked the audience to stop me and ask questions at any time during the talk.  The funny part was that the talk was also being live-streamed, and the audio for the live stream was carried on one mic, while the audience mic fed only the room and the recording.  So even though audience questions were asked on the audience mic, I had to repeat each question for the people catching the talk live.

I got some feedback later from a few people: I forgot to introduce myself, and I should have put some performance graphs in the slides, since almost all users would be interested in how KVM performs vs other hypervisors.  Both good points.  I hadn’t thought about performance slides earlier; I’ll try to incorporate such graphs in future presentations.  Interestingly, introducing myself hadn’t occurred to me either.  Previously, I was used to someone else introducing me and then picking up from there.  At the FUDCon, we (the organisers) missed getting speaker bios, and didn’t have volunteers introduce each speaker before their sessions.  So no matter which way I look at it, I take the blame, as speaker and organiser, for not having done this.

There was some time before my session started, and there were a few people in the auditorium (the room where the talk was to be held), so Kashyap thought of playing some Fedora / FOSS / Red Hat videos.  (People generally like the Truth Happens video, and that one was played as well.)  These, and many more, are available on the Red Hat Videos channel on YouTube.  There was also some time between my session and Kashyap’s (to allow for people to move around, take a break, etc.), so we played the F16 release video that Jared gave us.

Overall, I think the talk went quite well (though I may have just dreamed that).  I tried to stay awake for Kashyap’s session on libvirt to answer any questions directed my way; I know I did answer a couple of them, so I must have managed to stay up.

Communication between Guests and Hosts

Guest and Host communication should be a simple affair — the venerable TCP/IP sockets should be the first answer to any remote communication.  However, it’s not so simple once some special virtualisation-related constraints are added to the mix:

  • the guest and host are different machines, managed differently
  • the guest administrator and the host administrator may be different people
  • the guest administrator might inadvertently block IP-based communication channels to the host via firewall rules, rendering the TCP/IP-based communication channels unusable

The last point needs some elaboration: system administrators want to be really conservative in what they “open” to the outside world.  In this sense, the guest and host administrators are actively hostile to each other.  And rightly, neither should trust the other, given that a lot of the data stored in operating systems now lives within clouds, and any leak of that data could prove disastrous to the administrators and their employers.

So what’s really needed is a special communication channel between guests and hosts: one that is not susceptible to being blocked by either the guest or the host, and that serves as a special-purpose, low-bandwidth channel without trying to re-implement TCP/IP.  Some other requirements are mentioned on this page.

After several iterations, we settled on one particular implementation: virtio-serial.  The virtio-serial infrastructure rides on top of virtio, a generic para-virtual bus that enables exposing custom devices to guests.  virtio devices are abstracted enough that guest drivers need not know what kind of bus they’re actually riding on: they are PCI devices on x86 and native devices on s390 under the hood.  What this means is that the same guest driver can be used to communicate with a virtio-serial device on x86 as well as on s390.  Behind the scenes, the virtio layer, depending on the guest architecture type, works with the host virtio-pci device or virtio-s390 device.

The host device is coded in qemu.  One virtio-serial device is capable of hosting multiple channels, or ports, on the same device.  The number of ports that can ride on top of a virtio-serial device is currently arbitrarily limited to 31, but one device can very well support 2^31 ports.  The device has been available since upstream qemu release 0.13, and in Fedora from release 13 onwards.

The guest driver is written for Linux and Windows guests.  The API exposed includes open, read, write, poll and close calls.  For the Linux guest, ports can be opened in blocking as well as non-blocking modes.  The driver is included upstream from Linux kernel version 2.6.35.  Kernel 2.6.37 will also have asynchronous IO support; that is, SIGIO will be delivered to interested userspace apps whenever the host-side connection is established or closed, or when a port gets hot-unplugged.
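
As a quick illustration of that API, here is a minimal guest-side sketch (mine, not the project’s sample code) that opens a port non-blocking and uses poll() to wait until the host side connects and sends data; the port name matches the qemu invocation shown below.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <poll.h>

int main(void)
{
    char buf[256];
    struct pollfd pfd;
    ssize_t n;
    int fd;

    /* The device node name comes from the 'name=' property of the port. */
    fd = open("/dev/virtio-ports/org.fedoraproject.port.0",
              O_RDWR | O_NONBLOCK);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    pfd.fd = fd;
    pfd.events = POLLIN;

    /* Wait until the host connects to the chardev and writes something. */
    if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
        n = read(fd, buf, sizeof(buf));
        if (n > 0)
            write(STDOUT_FILENO, buf, n);
    }

    close(fd);
    return 0;
}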

Using the ports is simple: when using qemu from the command line directly, add:

-chardev socket,path=/tmp/port0,server,nowait,id=port0-char \
-device virtio-serial \
-device virtserialport,id=port1,name=org.fedoraproject.port.0,chardev=port0-char
This creates one device with one port, and exposes the name ‘org.fedoraproject.port.0’ to the guest.  Guest apps can then open /dev/virtio-ports/org.fedoraproject.port.0 and start communicating with the host.  Host apps can open the /tmp/port0 unix domain socket to communicate with the guest.  Of course, qemu chardev backends other than unix domain sockets can be used.  There is also an in-qemu API that can be used.
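
On the host side, anything that can talk to a unix domain socket will do; here is a minimal, illustrative C sketch (error handling kept short) that connects to the /tmp/port0 socket from the invocation above and sends a message to the guest.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr;
    const char *msg = "hello, guest\n";
    int fd;

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Path of the unix socket backing the chardev in the qemu invocation. */
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/port0", sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}
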
More invocation options and examples are given in the invocation and how to test sections.

There is sample C code for the guest as well as sample python code from the test suites.  The original test suite, written to verify the functionality of the user-kernel interface, will in the near future be moved to autotest, enabling faster addition of tests that check not just for correctness, but also for regressions and bugs.

virtio-serial is already in use by the Matahari, Spice, libguestfs and Anaconda projects.  I’ll briefly mention how Anaconda is going to use virtio-serial: starting with Fedora 14, guest installs of Fedora will automatically send Anaconda logs to the host if a virtio-serial port with the name ‘org.fedoraproject.anaconda.log.0’ is found.  virt-install has been modified to create such a virtio-serial port.  This means debugging early anaconda output will be easier, with the logs available on the host (no worrying about guest file system corruption during install, or about network drivers not being available before a crash).

Further uses: there are many more things virtio-serial could be used for that should be pretty easy to code:

  • shutting down or suspending VMs when a host is shut down
  • clipboard copy/paste between hosts and guests (this is in progress by the Spice team)
  • lock a desktop session in the guest when a vnc/spice connection is closed
  • fetch cpu/memory/power usage rates at regular intervals for monitoring

Virtualisation (on Fedora)

A few volunteers from India associated with the Fedora Project wrote articles for Linux For You‘s March 2010 Virtualisation Special. Those articles, and a few others, are put up on the Fedora wiki space at Magazine Articles on Virtualization. Thanks to LFY for letting us upload the pdfs!

We’re always looking for more content, in the form of how-tos, articles, experiences, tips, etc., so feel free to upload content to the wiki or blog about it.

We also have contact with some magazine publishers so if you’re interested in writing for online or print magazines, let the marketing folks know!

Comparison of File Systems And Speeding Up Applications

Update: I’ve done a newer article on this subject at http://log.amitshah.net/2009/04/re-comparing-file-systems.html that removes some of the deficiencies in the tests mentioned here and has newer, more accurate results along with some new file systems.

How should one allocate disk space for a file that will be written to later? ftruncate() (or lseek() followed by write()) creates a sparse file, which is not what is needed. A traditional way is to write zeroes to the file till it reaches the desired size. Doing things this way has a few drawbacks:

  • Slow, as small chunks are written one at a time by the write() syscall
  • Lots of fragmentation

posix_fallocate() is a library call that handles the chunked writes on the application’s behalf; the application does not have to code its own block-by-block writes. But this still happens in userspace.

Linux 2.6.23 introduced the fallocate() system call. The allocation is then done in kernel space and hence is faster. New file systems that support extents make this call very fast indeed: only a single extent has to be marked as allocated on disk (where traditionally each block was marked as ‘used’). Fragmentation is also reduced, as file systems now keep track of extents instead of smaller blocks.

posix_fallocate() will internally use fallocate() if the syscall exists in the running kernel.
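
For reference, preallocating a file this way is only a handful of lines; the sketch below is illustrative (the file name and size are made up) and simply reports the error code if the call fails.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    const off_t size = 1024 * 1024 * 1024;  /* 1 GiB */
    int fd, ret;

    fd = open("disk.img", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* On an extent-based fs with fallocate() support this returns almost
     * immediately; otherwise glibc falls back to writing out the blocks. */
    ret = posix_fallocate(fd, 0, size);
    if (ret != 0)
        fprintf(stderr, "posix_fallocate failed: %d\n", ret);

    close(fd);
    return 0;
}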

So I thought it would be a good idea to make libvirt use posix_fallocate(), so that systems with the newer file systems would directly benefit when allocating disk space for virtual machines. I wasn’t sure what method libvirt already used to allocate the space; it turned out it allocated blocks in 4KiB-sized chunks.

So I sent a patch to the libvir-list to convert to posix_fallocate(), and danpb asked me what the benefits of this approach were, and also about alternative approaches other than writing in 4K chunks. I didn’t have any data to back up my claims of “this approach will be fast and will result in less fragmentation, which is desirable”. So I set out to do some benchmarking. To do that, though, I first had to make some empty disk space to create a few file systems of sufficiently large sizes. Hunting for a test machine with spare disk space proved futile, so I went about resizing my ext3 partition and creating about 15 GB of free disk space. I intended to test ext3, ext4, xfs and btrfs. I could have used my existing ext3 partition for the testing, but that would not have given honest results about fragmentation (an existing file system may already be fragmented, so big new files would surely be fragmented, whereas on a fresh fs I wouldn’t run that risk).

Though even creating separate partitions on rotating storage and testing file system performance won’t give perfectly honest results, I figured that if the percentage difference in the results was high enough, it wouldn’t matter. I grabbed the latest Linus tree and the latest development trees of the userspace utilities for all the file systems, and created roughly 5GB partitions for each fs.

I then wrote a program that created a file, allocated disk space for it, closed it, and calculated the time taken to do so. This was done multiple times with different allocation methods: posix_fallocate(), mmap() + memset(), and writing zeroes in 4096-byte and 8192-byte chunks.

So I had four methods of allocating files and 5GB partitions. I decided to check the performance by creating a 1GiB file with each allocation method.
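
The actual benchmark program is linked below; the sketch here just shows the shape of the timing loop for one of the methods (writing zeroes in 4096-byte chunks), with the file name and size made up.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define FILE_SIZE (1024 * 1024 * 1024L)   /* 1 GiB */
#define CHUNK     4096

int main(void)
{
    char buf[CHUNK];
    struct timeval start, end;
    long usecs, written;
    int fd;

    memset(buf, 0, sizeof(buf));
    fd = open("testfile", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    gettimeofday(&start, NULL);
    /* The "write zeroes in 4096-byte chunks" method; the other methods
     * (posix_fallocate(), mmap() + memset(), 8192-byte chunks) would be
     * timed the same way. */
    for (written = 0; written < FILE_SIZE; written += CHUNK)
        if (write(fd, buf, CHUNK) != CHUNK) {
            perror("write");
            break;
        }
    fsync(fd);
    gettimeofday(&end, NULL);

    usecs = (end.tv_sec - start.tv_sec) * 1000000L
            + (end.tv_usec - start.tv_usec);
    printf("took %ld usecs\n", usecs);

    close(fd);
    unlink("testfile");
    return 0;
}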

The program is here. The results, here. The git tree is here.

I was quite surprised to see poor performance for posix_fallocate() on ext4. On digging a bit, I realised mkfs.ext4 hadn’t created the file system with extents enabled. I reformatted the partition, but that data was valuable to have as well: it shows how much better a file system is with extents support.

Graphically, it looks like this:
Notice that ext4, xfs and btrfs take only a few microseconds to complete posix_fallocate().

The number of fragments created:

btrfs doesn’t yet have the ioctl implemented for calculating fragments.

The results are very impressive, and the final patches to libvirt were finalised pretty quickly. They’re now in the libvirt development branch. Coming soon to a virtual machine management application near you.

Use of posix_fallocate() will be beneficial to programs that know the size of the file being created in advance, like torrent clients, ftp clients, browsers, download managers, etc. It won’t be beneficial in the speed sense, as data is only written when it’s downloaded, but it is beneficial in the as-little-fragmentation-as-possible sense.