For some reason, this one is hard to keep in my head. Sometimes it is necessary to create a “large file” to act as a disk while monkeying with virtualization. If I need, say, 10GB of disk space, do I really want to allocate all 10GB, or just what is needed – up to 10GB? Sparse files let you do this.
$ pwd
/home/training
$ dd if=/dev/zero of=sparse-file bs=1 count=0 seek=10G
$ ls -al sparse-file
-rw-r--r-- 1 training admin 10737418240 Apr 11 08:00 sparse-file
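The trick here is that dd with count=0 writes nothing at all; it just seeks past the end of the file, and the skipped range becomes a hole. The same effect shows up any time you write past the end of a file. A minimal sketch (the /tmp path and 1 MB size are illustrative choices, not from the session above):

```shell
# Writing a single byte at a large offset leaves a hole behind it --
# the filesystem allocates no blocks for the skipped region.
f=/tmp/offset-demo
printf X | dd of="$f" bs=1 seek=1048575 conv=notrunc 2>/dev/null
ls -ls "$f"    # apparent size 1048576 bytes, only a block or so allocated
rm -f "$f"
```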
Whoa, you say. It does take up 10 GB! Appearances can be deceiving, especially in tech work. The size column in ls shows the file's apparent size in bytes; to see how much disk the file actually occupies, we need its size in allocated blocks. ls reports this with the "-s" option.
$ ls -als sparse-file
0 -rw-r--r-- 1 training admin 10737418240 Apr 11 08:00 sparse-file
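If you have GNU coreutils handy, stat(1) and du(1) make the apparent-versus-allocated distinction explicit. A quick sketch; the path and 1 GB size are illustrative, and truncate is just a shorter way to get the same kind of file the dd trick gave us:

```shell
f=/tmp/sparse-demo
truncate -s 1G "$f"                  # same idea as dd with count=0 seek=1G
stat -c 'bytes=%s blocks=%b' "$f"    # %s = apparent size, %b = 512-byte blocks allocated
du -h --apparent-size "$f"           # reports 1.0G
du -h "$f"                           # reports 0
rm -f "$f"
```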
Zero blocks. That’s better. Just to show that this is real, let’s switch to root and format the sparse file with an ext3 filesystem:
$ su -
Password:
# mke2fs -j $PWD/sparse-file
mke2fs 1.39 (29-May-2006)
/home/training/sparse-file is not a block special device.
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1310720 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
# ls -als /home/training/sparse-file
299216 -rw-r--r-- 1 training admin 10737418240 Apr 11 08:06 /home/training/sparse-file
So now we’re up to almost 300MB of usage. Just for fun, let’s mount the file system and add 1 GB of kernel randomness, and check out usage one more time.
# mount -o loop /home/training/sparse-file /mnt
# time dd if=/dev/urandom of=/mnt/random.bin bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 205.06 seconds, 5.2 MB/s

real    3m25.359s
user    0m0.001s
sys     3m5.679s
# umount /mnt
# exit
$ ls -als sparse-file
1349792 -rw-r--r-- 1 training admin 10737418240 Apr 11 08:22 sparse-file
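Incidentally, the reverse trip is possible too: blocks that happen to be full of zeros can be turned back into holes after the fact. This sketch assumes the util-linux fallocate tool with --dig-holes support and a filesystem that permits hole punching; the path and 8 MB size are illustrative:

```shell
f=/tmp/holes-demo
dd if=/dev/zero of="$f" bs=1M count=8 2>/dev/null  # 8 MB of *real* zero-filled blocks
du -k "$f"                                          # ~8192 KB actually allocated
fallocate --dig-holes "$f"                          # punch the all-zero ranges back out
du -k "$f"                                          # back near 0
rm -f "$f"
```

Note that dd copying from /dev/zero really does allocate blocks; the data is zeros, but the filesystem doesn't know that until you ask it to look.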
Well, I am a little embarrassed here: 3 minutes 25 seconds to pull 1 GB of randomness from the kernel? The sys time shows nearly all of it was CPU spent inside /dev/urandom, which says something about the hardware I am using to author this.
Finally, throw the -h flag to ls to see “human readable” output:
$ ls -alsh sparse-file
1.3G -rw-r--r-- 1 training admin 10G Apr 11 08:22 sparse-file
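One caveat worth remembering: a naive copy reads the holes as zeros and writes them out for real, ballooning the file to its full apparent size. GNU cp can preserve sparseness instead. A sketch, with illustrative paths and size:

```shell
src=/tmp/sparse-src; dst=/tmp/sparse-dst
truncate -s 100M "$src"
cp --sparse=always "$src" "$dst"   # detect runs of zeros, keep them as holes
ls -ls "$src" "$dst"               # both show 0 blocks allocated
rm -f "$src" "$dst"
```

rsync (-S) and GNU tar (-S) have similar options, which matters when you start shipping these disk images between machines.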