Tag Archives: free software

Upgrading from Fedora 11 to Fedora 13

Having already installed (what would become) F13 on my work and personal laptops the traditional way, with a fresh install (since I wanted to modify the partition layout), I tried an upgrade on my desktop.

My desktop was running Fedora 11 and I moved it to Fedora 13. I wanted to test how the upgrade functionality works: does it run into any errors (especially going from 11 to 13, skipping 12 entirely), is the experience smooth, and so on.

I started out by downloading the RC compose from http://alt.fedoraproject.org/. Since all my installs are for the x86-64 architecture, I downloaded the DVD.iso. I then loopback-mounted the DVD on my laptop:

# mount -o loop /home/amit/Downloads/Fedora-13-x86_64-DVD.iso /mnt/F13

I then exported the contents of the mount via NFS: edit /etc/exports and add the following line:

/mnt/F13 172.31.10.*

This ensures the mount is only available to users on my local network.

Then, ensure the nfs services are running:

# service nfs start
# service nfslock start

On my desktop which was to be upgraded, I mounted the NFS export:

# mount -t nfs 172.31.1.12:/mnt/F13 /mnt

And copied the kernel and initrd images to boot into:

# cp /mnt/isolinux/vmlinuz /boot
# cp /mnt/isolinux/initrd.img /boot

Then update the grub config with this new kernel that we’ll boot into for the upgrade. Edit /boot/grub/grub.conf and add:

title Fedora 13 install
    root (hd0,0)
    kernel /vmlinuz
    initrd /initrd.img

Once that’s done, reboot and select the entry we just put in the grub.conf file. The install process starts and asks where the files are located for the install. Select NFS and provide the details: Server 172.31.1.12 and directory /mnt/F13.

The first surprise for me was seeing the updated graphics for the Anaconda installer; they have changed since I installed the F13 beta on my laptops. The new artwork certainly looks very good and smooth. More white and less blue is a departure from the usual Fedora artwork, but it does look nice.

I then proceeded to select ‘upgrade’; it found my old F11 install and everything after that ‘just worked’. I was skeptical while it was running: I had some rpmfusion.org repositories enabled and some packages installed from those repositories. I wondered whether those packages would be upgraded as well, left in their current state (which could create dependency problems), or removed entirely. I had to wait for the install to finish, which took a while. The post-install process took more than half an hour, and when it was done, I selected ‘Reboot’. Half-expecting something to have broken or to not work, I logged in, and voila, I was presented with the shiny new GNOME 2.30 desktop. The temporary install kernel that I had put in as the default boot kernel had also been removed. A small thing in itself, but great for usability.

Everything looked and felt right: no sign of breakage, no error messages, no warnings, just a good, seamless upgrade.

I can’t say I really expected this. As a die-hard Debian fan, I considered distribution upgrades to be the forte of Debian alone. Until now, that is. The Fedora developers have done a really good job of making this process extremely easy to use and extremely reliable. Kudos to them!

While the Fedora 13 release has been pushed back a week for an install-over-NFS bug, it takes a certain combination of misfortunes to trigger, and luckily I didn’t hit it. However, when trying the F13 beta install on my laptop, I did hit a couple of Anaconda bugs: one is now resolved for F14 (a crash when upgrading without a bootloader configuration); the other (no UI refresh when switching between virtual consoles until a package finishes installing, something really felt when installing over a slow network link) is a known problem with the design of Anaconda, and hopefully the devs get to it.

Overall, a really nice experience and I can now comfortably say Fedora has really rocketed ahead (all puns intended) since the old times when even installing packages used to be a nightmare. This is good progress indeed, and I’m glad to note that the future of the Linux desktop is in very good hands.

Cheers to the entire team!

Virtualisation (on Fedora)

A few volunteers from India associated with the Fedora Project wrote articles for Linux For You’s March 2010 Virtualisation Special. Those articles, and a few others, have been put up on the Fedora wiki space at Magazine Articles on Virtualization. Thanks to LFY for letting us upload the PDFs!

We’re always looking for more content, in the form of how-tos, articles, experiences, tips, etc., so feel free to upload content to the wiki or blog about it.

We also have contact with some magazine publishers so if you’re interested in writing for online or print magazines, let the marketing folks know!

Debian moving to time-based releases

http://www.debian.org/News/2009/20090729

I have used Debian for several years now and have always been on either the ‘testing’ or the ‘sid’ releases on my desktops and laptops. I never felt the need to switch to ‘stable’, as even sid was stable enough for my regular usage (with a few scripts to keep out buggy new debs).

I’ve seen people move to Ubuntu over time, though. That suggests people really like Debian, but they also want ‘stable’ releases at predictable times. If one stayed on a Debian stable release, ‘bleeding edge’ or new software was never possible: by the time a new Debian release was out, upstreams would have moved one or two major releases ahead.

So Ubuntu captured the desktop share away from Debian, while the server folks had no reason to complain about a lack of new features. So will this really make any difference?

Will the folks who migrated to Ubuntu go back to Debian?

(I’ve since moved the majority of my machines to Fedora, but that’s a different topic.)

We open if we die

I wrote a few comments about introducing “guarantees” in software: how do you assure your customers that they won’t be left in the lurch if you go down? It generated a healthy discussion, and that gave me an opportunity to fine-tune the definition of “insurance” in software. Openness is such an advantage in fostering great discussions and free dialogue.

So reading this piece of news this morning via Phoronix, about a company called Pogoplug, has me really excited. I’d feel vindicated if they could grow their customer base through that announcement. I hope they don’t go down; but I’d also like to see them go open regardless of their financial health. If an idea is out in the market, there will be people copying it and implementing it in different ways anyway. If, instead, they open up their code right away, they can engage a much wider community in enhancing their software and prevent variants from springing up that might even offer competing features.

Re-comparing file systems

The previous attempt at comparing file systems based on the ability to allocate large files and zero them met with some interesting feedback. I was asked why I didn’t add reiserfs to the tests and also if I could test with larger files.

The test itself had a few problems, making the results unfair:

- I had different partitions for different file systems. So the hard drive geometry and seek times would play a part in the test results

- One can never be sure that the data that was requested to be written to the hard disk was actually written unless one unmounts the partition

- Other data that was in the cache before starting the test could be in the process of being written out to the disk and that could also interfere with the results

All these have been addressed in the newer results.

There are a few more goodies too:
- gnuplot script to ease the charting of data
- A script to automate testing on various file systems
- A big bug fixed that affected the results for the chunk-writing cases (4k and 8k): this existed right from the time I first wrote the test and was the result of using the wrong parameter for calculating chunk size. This was spotted by Mike Galbraith on lkml.

Browse the sources here

or git-clone them by

git clone git://git.fedorapeople.org/~amitshah/alloc-perf.git

So in addition to ext3, ext4, xfs and btrfs, I’ve added ext2 and reiserfs, and expanded the ext3 test to cover three journalling modes: ordered, writeback and guarded. guarded is the new mode that’s being proposed (it’s not yet in the Linux kernel); it aims to provide the speed of writeback with the consistency of ordered.

I’ve also run these tests twice. The first run was with a user logged in and a full desktop on; this measures the times a user will see when some app tries allocating files while they are actually working on the system.

The second run was in single-user mode, so that no background services were running and other processes couldn’t affect the results; this mainly matters for the timing. The fragmentation will of course remain more or less the same; that’s not a property of system load.

It’s also important to note that I created this test suite mainly to find out how fragmented files end up when allocated using different methods on different file systems. The performance comparison is a side-effect. This test is also not useful for any kind of stress-testing of file systems; there are other suites that do a good job of that.
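The fragment counts below come from the test program itself; the posts don’t say exactly which interface it uses to count them (the older post below notes that btrfs didn’t implement the relevant ioctl at the time). One common way to count a file’s extents on Linux is the FIEMAP ioctl, which filefrag also uses on newer kernels. A minimal sketch, purely as an illustration and not the test suite’s actual code:

/* Count the extents (fragments) of a file with the FIEMAP ioctl.
 * Sketch only; the alloc-perf suite may well use a different interface
 * to arrive at its fragment counts. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char *argv[])
{
    struct fiemap fm;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(&fm, 0, sizeof(fm));
    fm.fm_start = 0;
    fm.fm_length = FIEMAP_MAX_OFFSET;  /* map the whole file */
    fm.fm_flags = FIEMAP_FLAG_SYNC;    /* flush dirty data first */
    fm.fm_extent_count = 0;            /* 0: only ask for the extent count */

    if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
        perror("FS_IOC_FIEMAP");
        close(fd);
        return 1;
    }

    printf("%s: %u extents\n", argv[1], fm.fm_mapped_extents);
    close(fd);
    return 0;
}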

That said, the results suggest that btrfs, xfs and ext4 are the best at keeping fragmentation to a minimum. Reiserfs really looks bad in these tests. Time-wise, the file systems that support the fallocate() syscall perform the best, using almost no time to allocate files of any size; ext4, xfs and btrfs support this syscall.

On to the tests. I created a 4GiB file for each test. The tests are: posix_fallocate(), mmap+memset, writing 4k-sized chunks and writing 8k-sized chunks. All the tests run inside the same 20GiB partition, and the script reformats the partition for the appropriate fs before each run.
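For reference, here is a minimal sketch of what the mmap+memset method amounts to: extend the file to its final size with ftruncate(), map it shared, and zero every page so that blocks actually get allocated. This assumes a 64-bit build (so a 4GiB mapping fits in the address space) and is not the test suite’s actual code:

/* mmap+memset allocation: ftruncate() the file to its final size, map it
 * shared, and write zeroes to every page so that blocks get allocated. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

static int alloc_mmap(const char *path, off_t size)
{
    void *buf;
    int fd;

    fd = open(path, O_CREAT | O_TRUNC | O_RDWR, 0644);
    if (fd < 0)
        return -1;

    /* The file gets its final length here, but no blocks yet. */
    if (ftruncate(fd, size) < 0) {
        close(fd);
        return -1;
    }

    buf = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) {
        close(fd);
        return -1;
    }

    memset(buf, 0, size);   /* touch every page; this is what allocates blocks */
    munmap(buf, size);
    fsync(fd);              /* flush, so timing and fragment counts are honest */
    close(fd);
    return 0;
}

int main(void)
{
    /* 4GiB test file, as in the runs below */
    return alloc_mmap("testfile", (off_t)4 * 1024 * 1024 * 1024) ? 1 : 0;
}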

The results:

After the file system name, the first four columns show the times (in seconds) and the last four columns show the number of fragments resulting from the corresponding test.

The results, in text form, are:

# 4GiB file
# Desktop on
filesystem posix-fallocate mmap chunk-4096 chunk-8192 posix-fallocate mmap chunk-4096 chunk-8192
ext2 73 96 77 80 34 39 39 36
ext3-writeback 89 104 89 93 34 36 37 37
ext3-ordered 87 98 89 92 34 35 37 36
ext3-guarded 89 102 90 93 34 35 36 36
ext4 0 84 74 79 1 10 9 7
xfs 0 81 75 81 1 2 2 2
reiserfs 85 86 89 93 938 35 953 956
btrfs 0 85 79 82 1 1 1 1

# 4GiB file
# Single
filesystem posix-fallocate mmap chunk-4096 chunk-8192 posix-fallocate mmap chunk-4096 chunk-8192
ext2 71 85 73 77 33 37 35 36
ext3-writeback 84 91 86 90 34 35 37 36
ext3-ordered 85 85 87 91 34 34 37 36
ext3-guarded 84 85 86 90 34 34 38 37
ext4 0 74 72 76 1 10 9 7
xfs 0 72 73 77 1 2 2 2
reiserfs 83 75 86 91 938 35 953 956
btrfs 0 74 76 80 1 1 1 1

[Sorry; couldn't find an option to make this look proper]

Fig. 1: number of fragments. reiserfs performs really badly here.

Fig. 2: the same results, but without reiserfs.

Fig. 3: time results, with the desktop on.

Fig. 4: time results, without the desktop (single-user mode).

So in conclusion, as noted above, btrfs, xfs and ext4 are the best at keeping fragmentation to a minimum, and reiserfs really looks bad in these tests. Time-wise, the file systems that support the fallocate() syscall perform the best, using almost no time to allocate files of any size; ext4, xfs and btrfs support this syscall.

Comparison of File Systems And Speeding Up Applications

Update: I’ve done a newer article on this subject at http://log.amitshah.net/2009/04/re-comparing-file-systems.html that removes some of the deficiencies in the tests mentioned here and has newer, more accurate results along with some new file systems.

How should one allocate disk space for a file for later writing? ftruncate() (or lseek() followed by a write()) creates a sparse file, which is not what is needed. The traditional way is to write zeroes to the file until it reaches the desired size (a minimal sketch of this approach follows the list of drawbacks below). Doing things this way has a few drawbacks:

  • Slow, as small chunks are written one at a time by the write() syscall
  • Lots of fragmentation
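As mentioned above, here is a sketch of that traditional approach: write zero-filled chunks until the file reaches the requested size (4096-byte chunks in this example, one of the chunk sizes compared below). It illustrates the method and is not code taken from any particular application:

/* Traditional preallocation: keep writing zero-filled chunks until the
 * file reaches the requested size. Sketch only, with minimal error handling. */
#include <fcntl.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static int alloc_by_writing(const char *path, off_t size, size_t chunk)
{
    char buf[4096];
    off_t written = 0;
    int fd;

    if (chunk > sizeof(buf))
        return -1;
    memset(buf, 0, chunk);

    fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (fd < 0)
        return -1;

    while (written < size) {
        size_t n = chunk;
        ssize_t ret;

        if ((off_t)n > size - written)
            n = (size_t)(size - written);   /* don't overshoot the last chunk */
        ret = write(fd, buf, n);
        if (ret < 0) {
            close(fd);
            return -1;
        }
        written += ret;
    }

    fsync(fd);    /* make sure the data actually reaches the disk */
    close(fd);
    return 0;
}

int main(void)
{
    /* 1GiB file written in 4096-byte chunks */
    return alloc_by_writing("testfile", (off_t)1024 * 1024 * 1024, 4096) ? 1 : 0;
}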

posix_fallocate() is a library call that handles the chunked writes for you; the application doesn’t have to code its own block-by-block writes. But this still happens in userspace.

Linux 2.6.23 introduced the fallocate() system call. The allocation is then done in kernel space and hence is faster. New file systems that support extents make this call very fast indeed: a single extent just has to be marked as allocated on disk, whereas traditionally individual blocks were marked as ‘used’. Fragmentation is also reduced, as file systems now keep track of extents instead of smaller blocks.

posix_fallocate() will internally use fallocate() if the syscall exists in the running kernel.
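A minimal sketch of preallocating a file with posix_fallocate(); on a file system with fallocate() support this returns almost instantly, and elsewhere glibc falls back to writing the blocks out itself, as described above:

/* Preallocate a file with posix_fallocate(). On file systems with
 * fallocate() support this is near-instant; otherwise glibc emulates it. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    off_t size = (off_t)1024 * 1024 * 1024;   /* 1GiB, as in the tests below */
    int fd, ret;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    fd = open(argv[1], O_CREAT | O_TRUNC | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Note: posix_fallocate() returns the error number directly
     * rather than setting errno. */
    ret = posix_fallocate(fd, 0, size);
    if (ret) {
        fprintf(stderr, "posix_fallocate: %s\n", strerror(ret));
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}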

So I thought it would be a good idea to make libvirt use posix_fallocate(), so that systems with the newer file systems directly benefit when allocating disk space for virtual machines. I wasn’t sure what method libvirt already used to allocate the space; it turned out it wrote the blocks out in 4KiB-sized chunks.

So I sent a patch to the libvir-list to convert to posix_fallocate(), and danpb asked me what the benefits of this approach were, and also about alternative approaches other than writing in 4K chunks. I didn’t have any data to back up my claims of “this approach will be fast and will result in less fragmentation, which is desirable”, so I set out to do some benchmarking. To do that, though, I first had to free up some disk space to create a few file systems of sufficiently large sizes. Hunting for a test machine with spare disk space proved futile, so I went about resizing my ext3 partition and creating about 15 GB of free disk space. I intended to test ext3, ext4, xfs and btrfs. I could have used my existing ext3 partition for the testing, but that would not have given honest results about fragmentation (an existing file system may already be fragmented, so big new files would almost surely end up fragmented, whereas on a fresh fs I wouldn’t run that risk).

Though even creating separate partitions on rotating storage and testing file system performance won’t give perfectly honest results, I figured that if the percentage difference in the results was quite high, it wouldn’t matter. I grabbed the latest Linus tree and the latest dev trees of the userspace utilities for all the file systems, and created roughly 5GB partitions for each fs.

I then wrote a program that created a file, allocated disk space for it, closed it, and calculated the time taken to do so. This was done multiple times for different allocation methods: posix_fallocate(), mmap() + memset(), and writing zeroes in 4096-byte and 8192-byte chunks.

So I had four methods of allocating files and 5GB partitions. I decided to check the performance by creating a 1GiB file with each allocation method.
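The actual program is linked in the next paragraph; the timing side of it could be sketched roughly like this. alloc_sparse() here is only a placeholder I made up so the sketch compiles; the real program plugs in the four allocation methods described above:

/* Sketch of a timing harness: wall-clock one allocation method with
 * clock_gettime(). Link with -lrt on older glibc. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>

/* Placeholder allocator (just creates a sparse file); stand-in for the
 * real methods: posix_fallocate, mmap+memset, chunked writes. */
static int alloc_sparse(const char *path, off_t size)
{
    int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, size) < 0) {
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}

static double time_alloc(int (*alloc_fn)(const char *, off_t),
                         const char *path, off_t size)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    if (alloc_fn(path, size) < 0)
        return -1.0;
    clock_gettime(CLOCK_MONOTONIC, &end);

    return (end.tv_sec - start.tv_sec) +
           (end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void)
{
    double secs = time_alloc(alloc_sparse, "testfile",
                             (off_t)1024 * 1024 * 1024);   /* 1GiB */
    if (secs < 0)
        return 1;
    printf("allocation took %.6f seconds\n", secs);
    return 0;
}

As the newer post above points out, a fair measurement also needs the data flushed (or the partition unmounted) before the clock is stopped.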

The program is here. The results, here. The git tree is here.

I was quite surprised to see poor performance for posix_fallocate() on ext4. On digging a bit, I realised mkfs.ext4 hadn’t created the file system with extents enabled. I reformatted the partition, but that data was valuable to have as well: it shows how much better a file system does with extents support.

Graphically, it looks like this:
Notice that ext4, xfs and btrfs take only a few microseconds to complete posix_fallocate().

The number of fragments created:

btrfs doesn’t yet have the ioctl implemented for calculating fragments.

The results are very impressive, and the final patches to libvirt were finalised pretty quickly. They’re now in libvirt’s development branch. Coming soon to a virtual machine management application near you.

Use of posix_fallocate() will be beneficial to programs that know in advance the size of the file being created, like torrent clients, ftp clients, browsers, download managers, etc. It won’t help in the speed sense, as data is only written as it’s downloaded, but it helps in the as-little-fragmentation-as-possible sense.

Startups in 14 sentences

Paul Graham has an article on the top 13 things to keep in mind for entrepreneurs. I have one to add (for software startups):

- Going open source can help
You might have a brilliant idea and a cool new product. It will most likely be disruptive technology. You might think of changing the world. But people might have to modify the way they do things. What if you run out of funds midway, or some other unforeseen event forces your company to shut shop? Customers will be wary of deploying solutions from startups for fear of them going down. If customers are given access to the source code, they are at least assured that they can keep control over the software if your company is unable to support it. And letting them know this can win some additional customers; who knows!

The Art of Convincing and the Importance of Freedom

A kid walks to her father. She wants a chocolate. She knows the father can’t refuse, but mom has tighter control over whether she can really have it. The kid is smart. She tells her dad “mom says I can have it if you agree”. The father says “OK”. Then she goes to mom and says “dad thinks I can have chocolate. Give me one.”

What’s smart about this is that the kid knows the opposition well. Microsoft seems to know it as well. OOXML, the format they’re proposing to be an ISO format for storing documents, needs support from the industry and countries for it to be a standard. Nothing’s wrong with that. But the problem is they don’t want to reveal all the specifications of storing files in their format. Which basically means they continue to have a monopoly and tight control over your documents.

Let’s say you’ve bought MS Word or MS Office in 1998 and are happy with it. It still works. All your documents are stored on your hard disk. Now you decide to upgrade your computer and with it, all your software. You purchase the newest version of MS Office. You open your old document. It doesn’t open. You try another one. Same result. You think something’s gone wrong with your backup. You blame the computer vendor who gave you the new machine and promised to restore your old data. The problem, however, is not caused by the vendor. It’s caused by Microsoft. Over the years, they decided to change the file formats and not support the documents which were created by older versions of their software. So now you’re left with unusable copies of your documents because there is no support available for you to import the data to the new format.

Why can this happen? Because Microsoft didn’t want to share the details on how they store your information with others. We saw why this is bad. But if they were to share the details, won’t your documents be insecure? Won’t others be able to see what you have? Well, no. As long as there are people who have the same software that you have, they’ll be able to open your documents.

Also consider this: you don’t want to purchase the expensive software from Microsoft to store your documents, so you use the freely available software (free as in freedom, not price) instead. But someone sends you a document in a proprietary format. How do you access the information in it? Since Microsoft doesn’t share the details of how it stores information, you won’t be able to. You don’t want to buy a few thousand Rupees’ worth of software only because some other people use it.

So isn’t Microsoft’s proposal to standardise its file format a positive step? In a way, yes: it proves that opening up that information does not in itself constitute insecurity. If you want your documents to be safe, password-protect them and take precautions not to expose them to suspicious people.

But that’s it. It’s not a positive step for the simple reason that they don’t want to publish the entire file format. What they’re proposing is a mini-skirt. Show a little, hide a lot.

India just voted against making OOXML an ISO standard. This is a very positive move. We’re not encouraging bad practices, and we want interoperable standards. The rival format, ODF (Open Document Format), already has two office suites supporting it and using it as their native file format (OpenOffice.org and KOffice). Everything is open and interoperability is guaranteed. No one has to buy anything from anyone to open a file stored in this format. Just download a copy of either of the office suites and you’re ready to go.

Thanks to the efforts of everyone involved in rejecting the OOXML standard.

Foss.in

Foss.in/2007 is over and I’m back home. The slide deck for my KVM talk is now available.

This was the first time I went to foss.in and I really liked the experience. More than the talks, it’s the corridor discussions and meeting people that’s really the most interesting part. The place was full of people who have contributed immensely to the software I use every day, and I couldn’t let go of such an opportunity to go and thank them personally. I definitely missed thanking everyone, so I think I’ll go there next year to make up for that. Danese Cooper gets my vote for the best talk: Trekking with White Elephants. It’s a great way to learn how to go about contributing to open source, drawing on years of experience in making management knowledgeable about free software. I’ve learnt these lessons myself over the years and I’m sure young people out there will benefit a lot from these tips. (I will update the link once I get access to the final slides.)

My talk on KVM turned into a demo session and a discussion of the merits of the approach compared to Xen, as a few people in the audience had already used Xen and wanted to know why KVM is different or better. Too bad, since I was hoping there would be contributors who would have liked to know how KVM actually works.

I also wasn’t too happy with the scheduling of the talks: there was a gcc talk in parallel with a kernel talk, and a filesystem / distributed computing talk in parallel with another kernel talk. To make matters worse, Thomas Gleixner’s talk on the RT patches was later added in the same time slot in which I was to speak.

Rusty has to be the most entertaining kernel hacker; in his inimitable style, he provided a grand finale to the event meant to encourage contributors to the FOSS community. He got me up on stage along with James Morris to speak about how we got involved with FOSS and the kernel.

The Linux kernel folks at IBM LTC Bangalore swore they wouldn’t let me go away easily and asked me to visit their office, where people would ask me all sorts of questions on KVM. That was a very nice session; they’re mostly interested in power management and migration issues on KVM, and that got me pretty kicked, as I’ve been extremely interested in power management and green computing issues of late. Though I couldn’t answer most of the questions related to power management, I’m sure the kvm-devel list can help.

Moreover, quite a few people came up to me and asked about my work on the kernel and kvm and that was quite encouraging.

I’m sure I also caused some inconvenience to the people at the sponsor stalls by asking them in what ways their companies contributed to FOSS. Most of them were there just to attract talent. I’m hoping the FOSS enthusiasts don’t stop contributing once they’re in those big companies.