The 2012 edition of the Linux Plumbers Conference concluded recently. I was there, running the virtualization microconference. The format of LPC sessions is to have discussions around current as well as future projects. The key words are ‘discussion’ (not talks — slides are optional!) and ‘current’ and ‘future’ projects — not discussing work that’s already done; rather discussing unsolved problems or new ideas. LPC is a great platform for getting people involved in various subsystems across the entire OS stack in one place, so any sticky problems tend to get resolved by discussing issues face-to-face.
The virt microconf had A LOT of submissions: 17 topics to be discussed in a standard time slot of 2.5 hours for one microconf track. I asked for a ‘double track’, making it 5 hours of time for 17 topics. Still difficult, but by reducing a few topics to ‘lightning talks’, we could get a somewhat decent 20 minutes per topic. I weighed rejecting topics, which would have increased the time each discussion got, against keeping all the topics and asking people to wrap up in 20 minutes. I went for the latter: getting more stuff discussed (and hence, more problems and issues ‘out there’) is a better use of the time, IMO. That would also ensure that people stayed on-topic and focussed.
There was also a general change in the way microconfs were scheduled this time: instead of being given a complete 2.5-hour slot, each microconf was given 3 slots of 45 minutes each. This let the schedule pages show which microconf topics were being discussed at any given time, so attendees could pick and choose the discussions they wanted to attend, rather than seeing a generic ‘Virtualization Microconf’ slot. I think this was a good idea. Individual microconf owners could request modifications to this scheme, of course, and some microconfs chose to run the entire session in one slot, or reserved a whole day in a room, etc. For the virt microconf, I went with six separate slots, scheduled to avoid conflicts with other virt-related topics in other sessions, giving a total of 4.5 hours for 17 topics.
I grouped the CFP submissions so I could schedule related discussions in one slot, to avoid jumping between subjects and to help concentrate on specifics in an area. Two submissions, one on security and one on storage, stood by themselves, so I clubbed them into one ‘security and storage’ session. The others aligned nicely, so we could give ‘x86’, ‘MM’, ‘ARM’, ‘Networking’ and ‘lightning talks’ separate slots. Since there were 4 network-related talks, I asked for a double slot (two 45-min slots back-to-back), and clubbed the lightning talks into the same session, which was scheduled to be the last session of the virt microconf.
Given this, I would say the microconf went quite well — the notes and slides are up at the LPC 2012 virt microconf wiki, and we got good discussions going for most of the topics, given the time constraints. Of course, a major benefit of going to conferences is meeting people outside of the sessions, in the hallways and at social events, and the discussions continued there as well. I did factor this extra time into the ‘reject vs take all of them’ decision mentioned earlier. From what I heard, the beer at the social events failed to stop technical discussions, so it all worked out for the best.
Each microconf owner (or a representative) had to do a short summary at the end of the LPC, for the benefit of the people not present for some sessions. I did the virt summary in roughly these words:
We had a quite productive virtualization microconference. We received a lot of submissions, and accepted them all, which meant we had to limit the time for each discussion in the slots; but we could group the slots by general topic, effectively increasing the discussion time for each larger topic.
We had healthy representation from the KVM as well as Xen sides. For example, in the MM topic, we discussed NUMA awareness for KVM as well as Xen: Dario Faggioli presented the Xen side, and Andrea Arcangeli spoke on the Linux/KVM side, about AutoNUMA. AutoNUMA has been contentious on the mailing lists, but from the Kernel Summit discussions, it looked like some agreement would be reached soon. Xen uses a similar approach to AutoNUMA, and they would be pushing their patches soon as well. Daniel Kiper spoke about integrating the various balloon drivers in the kernel to remove code duplication.
Both AMD and Intel publicly announced new hardware features for interrupt virtualization for the first time here, and it was interesting to see them compare notes and find out what the other was doing and how: for example, do they support the IOMMU? x2APIC? Etc.
New ARM architecture support work was presented by Marc Zyngier for the KVM effort, and Stefano Stabellini for the Xen effort. Much of the work seems to be done, and the patches are in shape to be applied in the next merge window. There are a few open issues, and those were discussed as well.
We had quite a few talks in the networking session. Alex Williamson spoke about VFIO, which got mentioned a lot throughout the conference in multiple sessions. This is a new way of doing device assignment, and progress looks positive, with the kernel side already merged in 3.6, and qemu patches queued up for 1.3. Alex Graf then talked about ‘semi-assignment’, a way to do device assignment (or PCI passthrough) while also getting proper migration support. The effort involves writing device emulation for each supported device, and the approach wasn’t too popular. Engineers from IBM and Intel have been doing virtio-net scalability testing, and John Fastabend spoke about some optimisations, which were generally well-received. We should expect patches and more benchmarks soon. Vivek Kashyap spoke about network overlays, and how creating a tunnel for VM networks can help with VM migration across networks.
We also had a session on security, by Paul Moore, who gave an overview of the various methods to secure VMs, specifically the new seccomp work.
Lastly, we had Bharata Rao talk about introducing a glusterfs backend for qemu’s block layer, which gives more flexibility in handling disk storage for VMs.
The organisers are collecting feedback, so if you were there, be sure to let them know of your experience, and what we could do better in the coming years.