XBinary: Extended Binary Format Support for Mac OS X

January 20th, 2009

XBinary is software that lets you add kernel-level support for executing arbitrary binary formats on Mac OS X. To read more about it and to download it, visit the XBinary page.

AncientFS on Linux and FreeBSD

December 22nd, 2008

By popular demand, I’ve "ported" AncientFS to Linux and FreeBSD. This was reasonably straightforward: largely by design, AncientFS depends mostly on the cross-platform interfaces of MacFUSE.

Most people don’t realize that MacFUSE is much more than a "Mac OS X implementation of the FUSE API." Of course, the name "MacFUSE" doesn’t help much in that regard.

You should now be able to build the AncientFS source tree on Mac OS X, Linux, and FreeBSD. You must have the FUSE implementation for your platform installed to build and use AncientFS. After that, building should take a single make command.

$ svn co http://macfuse.googlecode.com/svn/trunk/filesystems/unixfs
$ cd unixfs/ancientfs
$ make # GNU make required. Use gmake on FreeBSD.
...

If it doesn’t build out-of-the-box on your system, you might want to tweak platform-specific settings in unixfs/ancientfs/Makefile first. The default settings assume that FUSE is installed under /usr on Linux and under /usr/local on FreeBSD.

Because libfuse wants to use sem_init(), on FreeBSD you will need to ensure that the POSIX semaphore implementation is either statically compiled into the kernel (not the default on recent FreeBSD systems) or that the sem kernel module is loaded. See sem(4) for details.
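For example, you can load the module like so (as root, or via sudo).

$ sudo kldload sem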

I’m traveling and don’t have much access to Linux or FreeBSD machines, so I haven’t been able to test this extensively. I do know that it builds and mounts some things on at least Linux 2.6.18 (Ubuntu) and FreeBSD 7.1-BETA2.

Extended Notes on AncientFS

December 17th, 2008

Here are some extended notes on understanding, compiling, and using AncientFS, the file system I talked about in the previous blog post.

More User-Space File System Goodies

December 16th, 2008

There has been much buzz about new features and functionality in MacFUSE 2.0. Besides the MacFUSE 2.0 release, there are still more new and interesting things to be discussed in the realm of user-space file systems. As I described and briefly demonstrated during my recent talk at the Googleplex, I wrote several new user-space file systems to "celebrate" two years of MacFUSE.

Now that the talk’s video is available, I’ve written some notes on the new file systems themselves. There is a lot of new information and new code for those interested in file systems, Mac OS X, and operating systems in general. Much of it is academically and practically useful, and some could have significant potential down the road. You could also think of it as a new year gift to the Mac OS X and open-source communities.

This is a meta note on how you can go about discovering and understanding the what, how, and why of everything I’m talking about.

1. MacFUSE State of the Union Talk (2008) Video

If you were not at the talk, watching the video would provide useful context and should help greatly in understanding what follows.

2. AncientFS

Next, you could look at AncientFS, which I introduced, among other things, during the talk. AncientFS lets you mount ancient (and in some cases, current-day) "data containers" as regular volumes on Mac OS X. It supports the following formats.

  • v1tap—DECtape tap tape archive; UNIX V1
  • v2tap—DECtape tap tape archive; UNIX V2
  • v3tap—DECtape tap tape archive; UNIX V3
  • ntap—DECtape/magtape tap tape archive; 1970 epoch
  • tp—DECtape/magtape tp tape archive
  • itp—UNIX itp tape archive
  • dtp—UNIX dtp tape archive
  • dump—Incremental file system dump (512-byte blocks, V7/bsd)
  • dump1kI—Incremental file system dump (1024-byte blocks, V7/bsd)
  • dump-vn—Incremental file system dump (512-byte blocks, bsd-vn)
  • dump1k-vn—Incremental file system dump (1024-byte blocks, bsd-vn)
  • v1ar—Very old (0177555) archive (.a) from First Edition UNIX
  • v2ar—Very old (0177555) archive (.a) from Second Edition UNIX
  • v3ar—Very old (0177555) archive (.a) from Third Edition UNIX
  • ar—Current (!<arch>\n), old (0177545), or very old (0177555) archive (.a); use (v1|v2|v3)ar for UNIX V1/V2/V3 archives
  • bcpio—Binary cpio archive (old); may be byte-swapped
  • cpio_odc—ASCII (odc) cpio archive
  • cpio_newc—New ASCII (newc) cpio archive
  • cpio_newcrc—New ASCII (newc) cpio archive with checksum
  • tar—ustar, pre-POSIX ustar, or V7 tar archive
  • v1—First Edition UNIX file system
  • v2—Second Edition UNIX file system
  • v3—Third Edition UNIX file system
  • v4—Fourth Edition UNIX file system
  • v5—Fifth Edition UNIX file system
  • v6—Sixth Edition UNIX file system
  • v7—Seventh Edition UNIX file system
  • v10—Tenth Edition UNIX file system
  • 32v—UNIX/32V file system
  • bsd—BSD file system (V7-style with fixed-length file names; e.g. 2.9BSD or 4.0BSD)
  • bsd-vn—BSD file system (pre fast-file-system “UFS” with variable-length file names; e.g. 2.11BSD for PDP-11)

To learn more about AncientFS and how to use it, please read the AncientFS article. Enjoy seeing ancient data seamlessly in the modern namespaces of Mac OS X!

3. The UnixFS Layer

AncientFS also led to UnixFS, a general-purpose abstraction layer that proved useful in getting several other "alien" file systems up and running on Mac OS X. It is particularly useful for "Unix-style" file systems, where you need concepts such as on-disk and in-memory superblocks and inodes. As I mentioned during the talk, I used UnixFS in conjunction with ad-hoc "Linux emulation" to bring support for the UFS, System V, and Minix file system families to Mac OS X. That amounts to a large number of new file systems, but it was easier than it sounds because the idea was to take existing Linux kernel implementations of these file system families and make them work in user space on Mac OS X!

UnixFS is currently rather "beta" and not a formal API by any means. Since it is a programming interface and doesn’t do anything by itself, it will be of interest only to developers at this point. In the future, it may evolve into a "LinuxFS" layer that could make it even easier and faster to systematically make Linux kernel-based file systems work on other platforms with very few code changes. For now, you can browse some bits of code.

You can, however, use the aforementioned UnixFS-based file system families: UFS, System V, and Minix. To do so, check out the relevant part of the MacFUSE source tree and compile one or more (or all) of the UnixFS-based file systems—it is quite straightforward: a single make in the filesystems/unixfs/ subdirectory in the MacFUSE source tree should build all of them. (You must have MacFUSE installed, of course.)

$ svn co http://macfuse.googlecode.com/svn/trunk/filesystems/unixfs
$ cd unixfs
$ ls -F
Makefile	common/		sysvfs/
ancientfs/	minixfs/	ufs/
$ make
...
$

4. The UFS Family

This is a user-space implementation (read-only) of the UFS file system family. Most of the UFS-specific code comes from the Linux kernel and is largely unchanged. Specific UFS flavors supported are as follows.

  • old—the oldest UFS format
  • sun—used in SunOS/Solaris
  • sunx86—used in the x86 versions of SunOS/Solaris
  • hp—used in HP-UX
  • nextstep—used in NEXTSTEP
  • nextstep-cd—used in NEXTSTEP CDROMs
  • openstep—used in OPENSTEP
  • 44bsd—used in FreeBSD, NetBSD, OpenBSD, and Mac OS X
  • ufs2—used in FreeBSD 5.x

5. The System V Family

This is a user-space implementation (read-only) of the System V file system family. Most of the sysvfs-specific code comes from the Linux kernel and is largely unchanged. Specific sysvfs flavors supported are as follows.

  • svr2—used in SVR2
  • svr4—used in SVR4
  • xenix—used in Xenix
  • coherent—used in Coherent Unix

6. The Minix Family

This is a user-space implementation (read-only) of the Minix file system family. Most of the minixfs-specific code comes from the Linux kernel and is largely unchanged.

The following image summarizes the new file system capabilities Mac OS X gets as a result of these exercises.

[image: a summary of the new file system support added to Mac OS X]

7. The "One More Thing" Thing

Although it isn’t directly file system related, the last thing I demonstrated during the talk was the ability to run ancient Unix (PDP-11) binaries seamlessly on Mac OS X. PDP-11 aside, the techniques used to do so are generally useful for research and experimentation because, unlike Linux, Mac OS X does not allow developers to extend the set of binary formats that the kernel can "natively" execute. The demo shows the Fifth Edition Unix kernel being compiled on Mac OS X using the original C compiler toolchain from a Fifth Edition disk image mounted using AncientFS. Additionally, there’s an "authentic" reproduction of the following ominous error message, produced by running the original mv executable from Sixth Edition Unix.

	values of β will give rise to dom!

For more details, please watch the last section of the talk video. This is very preliminary work for which no source code or binaries are available yet.

A Note on Automounting MacFUSE File Systems

December 11th, 2008

Mac OS X, like many other Unix-like operating systems, includes the “autofs” file system layer that makes automatic on-demand mounting of remote resources possible. See the man page for automount(8) for more details.

Such automatic mounting is orthogonal to and possible with MacFUSE. (NB: You will need MacFUSE 2.0 or above for this to work properly since older versions of MacFUSE filter out the “automounted” mount-time argument.) Consider sshfs, a user-space SFTP file system implementation that works with MacFUSE. The following is a quick-and-dirty example of how you could set up an autofs mount for sshfs. (There are other ways to set up autofs mounts.)

Create an /etc/fstab file (or add to an existing one) with the following entry. We will create what’s called a “static map” in autofs parlance.

$ cat /etc/fstab
dummy:user@host:/remotedir /Network/name sshfs volname=volname,allow_other 0 0

You will have to customize several components of this entry: user, host, and remotedir are the SFTP user name, the SFTP server host name, and the remote directory on the SFTP server, respectively. You can choose some reasonable value for name and volname. The local mount point will be /Network/name.

Next, to keep things simple, configure key-based authentication to the SFTP server so you can log in without having to type your password.

The keyword “sshfs” in the /etc/fstab entry is the type of the file system. Given a file system type foo, the automounter will expect a mount_foo file-system-specific mounting program to exist. In our case, we don’t have a separate mounting program for sshfs. However, because of the format of the entry and how the automounter passes arguments to the mounting program, it will work if you simply copy the command-line sshfs program to /sbin/mount_sshfs. Alternatively, you can create a symbolic link as follows.

$ which sshfs
/usr/local/bin/sshfs
$ sudo ln -s /usr/local/bin/sshfs /sbin/mount_sshfs

That should be it. Run the automount program to update the state of things. The -c argument tells the automount daemon to flush any cached information.

$ sudo automount -c

If everything went well, the new mount should appear in the output of the mount command. In the following example, we used SSH as the name component of the mount point.

$ mount
...
map -static on /Network/SSH (autofs, automounted)

Now, if you simply access /Network/SSH, the SFTP file system should be automatically mounted.

$ ls /Network/SSH
Applications		Volumes			work
Desktop DB		bin			private
...

If there is an error in mounting (say, the remote host is not reachable), you will not be permitted to access the /Network/SSH directory.

$ ls /Network/SSH
ls: SSH: Operation not permitted

You can specify a timeout period after which an automounted file system will be unmounted if it has not been accessed within that period. Either use the -t argument of automount or see the /etc/autofs.conf file.
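For example, the following would flush cached information and set a ten-minute timeout.

$ sudo automount -t 600 -c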

VeryBigFS: All You Can Read

December 10th, 2008

VeryBigFS is a trivial MacFUSE file system—about 60 lines of C code—that creates a huge volume with a huge file in it. “Huge” would be 512TB in this case.

This is useful if you want to see how a program will deal with unusually large files. Since it is extremely unlikely for the majority of us to be able to actually create files that are 512TB in size any time in the near future, faking is the way to go. Assuming you have MacFUSE installed, here is how you can try this file system.

$ ls
Makefile	verybigfs.c
$ make
...
$ mkdir "/Volumes/Very Big HD"
$ ./verybigfs "/Volumes/Very Big HD"
$ ls -lh
...
-r--r--r--  1 singh  wheel   512T Nov 28 05:38 copyme.txt
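
The actual verybigfs.c lives in the MacFUSE source tree. To give you an idea of just how little code such a file system needs, here is a stripped-down sketch in the same spirit (this is not the actual verybigfs.c; it uses the FUSE 2.x API, and the names are illustrative).

/* tinybigfs.c: an illustrative sketch, not the actual verybigfs.c */

#define FUSE_USE_VERSION 26

#include <fuse.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>

#define BIG_PATH "/copyme.txt"
#define BIG_SIZE ((off_t)512 << 40) /* 512TB */

static int
big_getattr(const char *path, struct stat *stbuf)
{
    memset(stbuf, 0, sizeof(*stbuf));
    if (strcmp(path, "/") == 0) {
        stbuf->st_mode = S_IFDIR | 0755;
        stbuf->st_nlink = 2;
        return 0;
    }
    if (strcmp(path, BIG_PATH) == 0) {
        stbuf->st_mode = S_IFREG | 0444;
        stbuf->st_nlink = 1;
        stbuf->st_size = BIG_SIZE; /* the volume's only "content" */
        return 0;
    }
    return -ENOENT;
}

static int
big_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
            off_t offset, struct fuse_file_info *fi)
{
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);
    filler(buf, BIG_PATH + 1, NULL, 0);
    return 0;
}

static int
big_open(const char *path, struct fuse_file_info *fi)
{
    if (strcmp(path, BIG_PATH) != 0)
        return -ENOENT;
    return ((fi->flags & O_ACCMODE) == O_RDONLY) ? 0 : -EACCES;
}

static int
big_read(const char *path, char *buf, size_t size, off_t offset,
         struct fuse_file_info *fi)
{
    if (offset >= BIG_SIZE)
        return 0;
    if ((off_t)size > BIG_SIZE - offset)
        size = (size_t)(BIG_SIZE - offset);
    memset(buf, 0, size); /* zero-filled data conjured out of thin air */
    return (int)size;
}

static struct fuse_operations big_ops = {
    .getattr = big_getattr,
    .readdir = big_readdir,
    .open    = big_open,
    .read    = big_read,
};

int
main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &big_ops, NULL);
}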

As an aside, note that HFS+ does not support sparse files. In fact, if you create a large scratch file on HFS+, it will be zero filled “soon”. This can take a long time depending upon the file size, the hardware, and the resources available. Mac OS X provides a way for a privileged process to set a file’s size without zero filling: it’s the F_SETSIZE command of the fcntl() system call.
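For illustration, here is a minimal sketch of the latter (it must be run as root; the path and size are made up).

/* setsize.c: an illustrative sketch; must be run as root */

#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int
main(void)
{
    int fd = open("/tmp/scratch.dat", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    off_t size = (off_t)1 << 30; /* 1GB, say */

    /* Set the file's size without zero filling the added space. This
       is restricted to privileged processes because it could otherwise
       expose stale on-disk data. */
    if (fcntl(fd, F_SETSIZE, &size) == -1)
        perror("fcntl(F_SETSIZE)");

    close(fd);
    return 0;
}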

Here is what the Finder tells you about the file copyme.txt inside the volume.

[image: the Finder’s information for copyme.txt, showing its 512TB size]

Note that although the tantalizingly named file is fictitious in that it doesn’t occupy any real disk space, you will read 512TB of zero-filled data if you choose to do so.

$ od -Xv copyme.txt
...
1357500          00000000        00000000        00000000        00000000
1357520          00000000        00000000        00000000        00000000
1357540          00000000        00000000        00000000        00000000
1357560          00000000        00000000        00000000        00000000
1357600          00000000        00000000        00000000        00000000
...
# A very, very, very long undetermined length of time...

MacFUSE 2.0 is Here!

December 8th, 2008

It was a little over two years ago that I gave serious thought to making user-space file systems a reality on Mac OS X. The result of that work, MacFUSE, was introduced at the Macworld conference in January 2007. Since then, MacFUSE has come a long way. It’s been used in projects big and small and has made numerous existing (on other platforms) and new file systems possible on Mac OS X.

MacFUSE is a native file system for Mac OS X—”native” means that it lives in the kernel, like HFS+ and AFP. However, MacFUSE doesn’t ultimately provide the file system content itself—it communicates with a standard Mac OS X application to read or write the actual file system content. Thus, MacFUSE is a file system that lets you write file systems. To do so, a developer would use one of the APIs provided by MacFUSE. MacFUSE 2.0 provides multiple APIs including, but not limited to, the FUSE API from Linux.

MacFUSE 2.0 is a major update to MacFUSE. We’ll be discussing what’s new in MacFUSE 2.0 in a talk at Google’s Mountain View headquarters today. For those who can’t be at the talk, there are two versions of the “what’s new” description.

The Apple-style version is easy to state: “improvements and bug-fixes.”

Alternatively, you can read the CHANGELOG in its entirety at the project’s web site.

MacFUSE Talk at Google

December 2nd, 2008

Next Monday (December 8, 2008), there will be an open-to-all talk on MacFUSE at Google’s Mountain View headquarters. Here is a more detailed announcement.

A Note on Pathname Processing in HFSDebug

November 24th, 2008

A couple of weeks ago, I released HFSDebug 4. I’ve updated it to make HFSDebug’s pathname processing a little more sophisticated. Depending on how (and how seriously) you use HFSDebug, knowing the details could be useful.

When you specify a file system object to HFSDebug using a pathname, how HFSDebug will treat the pathname usually depends on other arguments, or the lack of other arguments.

Something to Read and Forget: The “Legacy” Mode

A typical invocation is quite simple: you simply give HFSDebug a path. If the path exists on an HFS+ volume, HFSDebug will use the underlying volume as the one to operate upon.

$ sudo hfsdebug /mach_kernel
...

In this case, HFSDebug will begin by doing a stat(2) call on the path. (As you will see shortly, this and the entire “legacy mode” is optional—you can make HFSDebug do even this part “from scratch” and not use stat(2) or other file system calls on the volume.) Since the goal is to make it possible to examine the file system’s internal structure as opposed to what the user “sees” through layers of interfaces, it matters whether the object in question is a file hard link, directory hard link, symbolic link, etc. Specifically, we need to get at the link reference that the pathname represents—not the target it resolves to. For symbolic links, we can use lstat(2) to give us the node ID of the reference. For file and directory hard links, we have to use something else.
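As a minimal illustration of the reference-versus-target distinction for symbolic links (for hard links, as noted, both calls report the target’s node ID; this is not HFSDebug code):

/* whichnode.c: illustrative only */

#include <stdio.h>
#include <sys/stat.h>

int
main(int argc, char **argv)
{
    struct stat sb;

    if (argc != 2)
        return 1;

    /* lstat(2) reports the node ID of the link reference itself... */
    if (lstat(argv[1], &sb) == 0)
        printf("reference node ID = %llu\n", (unsigned long long)sb.st_ino);

    /* ...while stat(2) reports that of whatever the path resolves to. */
    if (stat(argv[1], &sb) == 0)
        printf("resolved node ID  = %llu\n", (unsigned long long)sb.st_ino);

    return 0;
}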

How do we know if something is a hard link? HFSDebug examines the st_nlink (link count) and st_mode fields of the resultant stat structure.

In the case of regular files, a link count of 2 or higher means the object is a “known” hard link.

In the case of directories, what “link count” means is quite context-sensitive on HFS+. The stat structure’s st_nlink for a directory normally represents the directory’s item count. The directory itself (the "." entry) and its parent (the ".." entry) together add 2 to the count. Thus, if you have a directory with, say, 4 files and 4 subdirectories in it, a stat(2) call would report an st_nlink value of 10. However, if the folder count bit is enabled on the mount, the meaning of st_nlink changes: it then represents a count of only the subdirectories. In the aforementioned example, the link count would be 6 instead of 10. The folder count bit is currently only enabled for case-sensitive (HFSX) volumes. Besides the stat structure’s st_nlink field, HFS+ can separately provide the children count and the real hard link count for directories. The getattrlist(2) call can be used to retrieve these: they are the ATTR_DIR_ENTRYCOUNT and ATTR_DIR_LINKCOUNT directory attributes, respectively. Once we do know a directory’s “real” hard link count, again, a value of 2 or higher means it is a “known” hard link.
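As a rough sketch (this is not HFSDebug’s actual code), retrieving those two directory attributes might look like this:

/* dircounts.c: an illustrative sketch */

#include <stdio.h>
#include <string.h>
#include <sys/attr.h>
#include <sys/types.h>
#include <unistd.h>

/* Attributes are packed into the buffer in bit order, so
   ATTR_DIR_LINKCOUNT precedes ATTR_DIR_ENTRYCOUNT. */
struct dir_attrbuf {
    u_int32_t length;      /* overall buffer length; always first */
    u_int32_t link_count;  /* ATTR_DIR_LINKCOUNT  */
    u_int32_t entry_count; /* ATTR_DIR_ENTRYCOUNT */
} __attribute__((packed));

int
main(int argc, char **argv)
{
    struct attrlist attrs;
    struct dir_attrbuf buf;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <directory>\n", argv[0]);
        return 1;
    }

    memset(&attrs, 0, sizeof(attrs));
    attrs.bitmapcount = ATTR_BIT_MAP_COUNT;
    attrs.dirattr = ATTR_DIR_LINKCOUNT | ATTR_DIR_ENTRYCOUNT;

    if (getattrlist(argv[1], &attrs, &buf, sizeof(buf), 0) != 0) {
        perror("getattrlist");
        return 1;
    }

    printf("real link count = %u, entry count = %u\n",
           buf.link_count, buf.entry_count);

    return 0;
}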

By a “known” file or directory hard link, I mean that we know that it currently is a hard link. That means the object we are looking at is a link reference and the “visible” node ID isn’t that of the reference but is that of its target. (See my previous post for more information.) In this case, HFSDebug will retrieve the object’s parent folder’s node ID and use it in conjunction with the object’s name to do a “from scratch” lookup of the object’s node ID. (This is a traditional { parent_nodeid, name } ==> nodeid lookup.)

However, a link count of 1 does not necessarily simplify things: the object could have had a higher link count in the past, with the other links since deleted. Again, as I described earlier, the on-disk object continues to be a reference to the real content that lives in a special hidden HFS+ folder. If that’s the case though, the hidden folder will have a file or a folder whose name is formed from the object’s visible node ID: iNode%d or dir_%d for files and folders, respectively. If no such file or folder exists, as HFSDebug can look up from scratch, the object is not a current or past hard link.

If this sounds unnecessarily complex, well, some of it is. Until version 4, HFSDebug did not have the ability to process complete pathnames from scratch. With version 4, you can do things like the following on both mounted and unmounted volumes alike.

$ sudo hfsdebug -d /dev/disk0s2 /mach_kernel
...

Simpler, Better

In the new mode, HFSDebug will no longer do a stat(2)/lstat(2) or involve the file system otherwise. Obviously, if it has to support unmounted volumes, that’s how it has to be. It will take the pathname and process it component-by-component, which is easier to conceptualize than the “legacy” mode I described above. (Well, to be fair to the legacy mode, it became uglier with the advent of directory hard links.)

The following are examples of how HFSDebug will handle things based on some of its arguments.

$ sudo hfsdebug /foo/bar/baz
# legacy mode; will use stat(2)/lstat(2) to kick things off
...
$ sudo hfsdebug -d /dev/diskN /foo/bar/baz
# new mode; volume can be mounted or unmounted
...
$ sudo hfsdebug -P /foo/bar/baz
# new mode; uses root volume, which is obviously mounted
...
$ sudo hfsdebug -d /dev/diskN -P /foo/bar/baz
# new mode; volume can be mounted or unmounted

If you are wondering why I haven’t removed the legacy mode altogether, it’s because I want to keep it around for some time so that I can compare things while testing.

Just remember that you can use the -P argument (note that it’s the capital P) to specify a path and HFSDebug will use the new mode on both mounted and unmounted volumes. The path must be absolute.

Some Rules

Now, if HFSDebug is not involving the file system at all, it had better be able to handle arbitrarily convoluted pathnames. Using things like realpath(3) is not an option since realpath(3) would want to call stat(2)/lstat(2). For instance, we could have a path like the following.

/foo////././bar///../baz/../blah/..//////.././dir/are/you/../crazy/

Besides, there could be components in the path that are symbolic or hard links. Symbolic links could point to targets that have equally crazy pathnames. It’s not just a matter of canonicalizing the dots and the slashes. We have several requirements as illustrated by the following examples. (Some of these are simply HFSDebug conventions.)

  • We must ensure that all intermediate components resolve to directories. They can be actual directories, valid symbolic links to directories, or directory hard links. Remember that in the case of symbolic or hard links, the on-disk object will be a “file”—HFSDebug will need to resolve them from scratch too.
  • If there is a “..”, we must not blindly go “up” one level: we must ensure that what we are going back from is a directory. realpath(3) actually doesn’t care about this: it will canonicalize /path/to/file.txt/../file.txt to /path/to/file.txt.
  • Although we do want to resolve intermediate components that are links, we must not resolve the terminal component if it happens to be a hard link or a symbolic link. That’s because our goal is to look at what’s on disk for the given path. Besides, in the case of a link, the details shown by HFSDebug will include the full pathname to the link’s target. If we wanted further details on the target, we could run HFSDebug on it.
  • If the path has a terminal slash, HFSDebug will ensure that the component is a directory. If it happens to be a directory hard link or a symbolic link, HFSDebug will resolve it in this case. Consider an example: suppose there is a symbolic link /tmp/somesymlink. The following is what HFSDebug will do depending on the arguments and what the link points to.

# somesymlink points to a file
#
$ sudo hfsdebug -P /tmp/somesymlink
... # will show details of the link itself
$ sudo hfsdebug -P /tmp/somesymlink/
... # will complain that the link target is not a directory

# somesymlink points to a directory
#
$ sudo hfsdebug -P /tmp/somesymlink
... # will show details of the link itself
$ sudo hfsdebug -P /tmp/somesymlink/
... # will show details of the directory somesymlink points to

# somesymlink points to a non-existent target
#
$ sudo hfsdebug -P /tmp/somesymlink
... # will show details of the link itself
$ sudo hfsdebug -P /tmp/somesymlink/
... # will complain that path /tmp/somesymlink/ was not found on the volume
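
Putting these rules together, the core of the component-wise processing can be sketched as follows. (This is a simplified illustration, not HFSDebug’s actual code; lookup(), is_dir(), is_link(), and resolve_link() are hypothetical helpers that would be implemented from scratch against the on-disk catalog.)

/* pathwalk.c fragment: illustrative only */

#include <string.h>

typedef unsigned int cnid_t; /* a catalog node ID */

/* Hypothetical from-scratch primitives: */
extern int lookup(cnid_t parent, const char *name, cnid_t *out);
extern int is_dir(cnid_t node);
extern int is_link(cnid_t node); /* symbolic or hard link */
extern int resolve_link(cnid_t node, cnid_t *out);

#define MAX_DEPTH 256

/* Process an absolute path component by component. 'want_dir' is set
   if the original path had a terminal slash; the caller determines
   this before the path is tokenized. */
int
process_path(char *path, cnid_t root, int want_dir, cnid_t *out)
{
    cnid_t stack[MAX_DEPTH]; /* lexical ancestors, for ".." */
    int depth = 0;

    stack[0] = root;

    char *comp = strtok(path, "/"); /* also collapses runs of slashes */
    while (comp) {
        char *next = strtok(NULL, "/");
        if (strcmp(comp, ".") == 0) {
            /* nothing to do */
        } else if (strcmp(comp, "..") == 0) {
            /* only go "up" from a directory, and never above the root */
            if (!is_dir(stack[depth]))
                return -1;
            if (depth > 0)
                depth--;
        } else {
            cnid_t node;
            if (depth + 1 >= MAX_DEPTH ||
                lookup(stack[depth], comp, &node) != 0)
                return -1;
            if (next || want_dir) {
                /* a nonterminal component (or a terminal one followed
                   by a slash) must resolve to a directory */
                if (is_link(node) && resolve_link(node, &node) != 0)
                    return -1;
                if (!is_dir(node))
                    return -1;
            }
            /* a terminal component with no trailing slash is left
               unresolved even if it is a link */
            stack[++depth] = node;
        }
        comp = next;
    }

    *out = stack[depth];
    return 0;
}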

HFSDebug 4.0 and New HFS+ Features

November 9th, 2008

I wrote HFSDebug in early 2004. I initially made it available as a software tool to help understand fragmentation in HFS+ volumes, although it could also be used to analyze several implementation details of HFS+. Eventually, I extended HFSDebug to be able to analyze all on-disk aspects of HFS+, along with the ability to compute more types of volume statistics and to even retrieve some in-memory details of mounted HFS+ volumes.

HFSDebug has been an extremely useful tool for me. I’ve used it to help explain the workings of HFS+ in the Mac OS X Internals book, to understand occasional mysterious behavior in HFS+ volumes, to search for file system objects, to generate interesting file system statistics (top N largest files, top N fragmented files, resource forks vs data forks, contiguous free space, and so on), and to create interesting demos. (For example, to show HFS+ mechanisms such as Hot File Clustering and On-the-Fly Defragmentation at work.)

HFS+ is the preferred and default volume format on Mac OS X. Even with exciting new developments such as ZFS support in Mac OS X, I don’t expect HFS+ to become obsolete any time soon. Today’s Macintosh computers, iPods, iPhones, and Apple TVs all use HFS+.

With most major releases of Mac OS X, HFS+ has gained new capabilities. Features such as metadata journaling, on-the-fly file defragmentation, hot file clustering, extended attributes, access control lists, hard link chains (tracking hard links), and directory hard links have come to HFS+ in recent years. There have been “news reports” of compression being an upcoming feature in HFS+.

The most interesting technical things about HFS+ are not its features, but how several of the newer features are implemented. With the goal of retaining backward compatibility, new features have often been retrofitted, or shoehorned, if you will, into HFS+. Knowing such implementation details evokes different reactions in different people, ranging from “That’s a nifty way to implement this!” to “Gross!” This is something you can decide for yourself with the help of HFSDebug, which can show you exactly how the file system works.

New Features in HFSDebug

Now, every time a new feature is added to HFS+, HFSDebug likely (but not always) needs to be updated, say, to recognize and parse a new type of on-disk object such as a directory hard link. I’m releasing a new version of HFSDebug that has the following improvements.

  • Ability to show details of directory hard links.
  • Ability to show details of hard link chains.
  • New built-in filters: atime (find files by access time), dirhardlink (list directory hard links), hardlink (list file hard links), and sxid (list setuid/setgid files).
  • Ability to do component-wise path lookup from scratch, allowing you to analyze individual file system objects by path even on an unmounted HFS+ volume.
  • Support for Snow Leopard.
  • Numerous subtle improvements and some bugfixes.

Still a PowerPC binary!?

It may be surprising (or troubling) to some of you that there is still no x86 version of HFSDebug: it’s available only as a PowerPC executable. Well, there is some logic to this madness. You see, I wrote HFSDebug in the “Panther” (10.3) days. Mac OS X was PowerPC-only then. It was also big endian. That matters because HFS+ uses big endian for its on-disk structures.

HFSDebug is a complex program. It essentially reads raw data from an HFS+ disk (say, a partition on a real disk or a disk image) and recreates a read-only HFS+ file system in memory. To simplify matters, I decided to skip structure-by-structure, field-by-field endianness conversion—after all, I was only targeting the big-endian-only Mac OS X. By contrast, the xnu kernel’s HFS+ implementation does do byte swapping on x86. So does the fsck_hfs program.

As long as Rosetta exists, HFSDebug can get away with being a PowerPC executable, allowing me to defer the grunt work of swapping bytes to a later date.
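(For the record, that grunt work amounts to something like the following for every on-disk field, using the Mac OS X byte-order macros.)

#include <libkern/OSByteOrder.h>
#include <sys/types.h>

/* Illustrative only: converting a big-endian on-disk field to host
   byte order, which is a no-op on PowerPC but a swap on x86. */
static inline u_int32_t
disk_to_host32(u_int32_t big_endian_value)
{
    return OSSwapBigToHostInt32(big_endian_value);
}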

Let us take the new HFSDebug features for a spin.

Hard Link Chains

Although support for file hard links has been in HFS+ since before Leopard, the new chaining feature in Leopard can keep track of hard link chains, which are doubly linked lists of file IDs connecting hard links together. Hard links to a file on HFS+ are conceptually similar to those on Unix systems: They represent multiple directory entries referring to common file content. Implementation-wise, HFS+ hard links use a special hard-link file for each directory entry. The common file content is stored in a special file: the indirect-node file. All indirect-node files are stored in the private metadata folder, a special directory (/␀␀␀␀HFS+ Private Data, whose name begins with four NUL characters) that is normally invisible to the user and has a name that’s “hard” to type. It’s much easier to understand this through HFSDebug.

We begin by creating a file called file1. Before we create a hard link to this file, we examine its details using HFSDebug. That way, we can tell if anything about the file changes after link creation.

$ mkdir /tmp/test/
$ cd /tmp/test
$ echo "This is file1" > file1
$ sudo hfsdebug file1
...
  path                 = Leopard HD:/private/tmp/test/file1
# Catalog File Record
  type                 = file
  file ID              = 1927091
  flags                = 0000000000000010
...
  # BSD Info
  ownerID              = 501 (singh)
  groupID              = 0 (wheel)
  adminFlags           = 00000000
  ownerFlags           = 00000000
  fileMode             = -rw-r--r--
  linkCount            = 1
  textEncoding         = 0
  attrBlocks           = 0
  # Finder Info
  fdType               = 0
  fdCreator            = 0
  fdFlags              = 0000000000000000
...

Let us now make a hard link to this file.

$ ln file1 file2

Let us see if anything has changed about file1 now that we made a hard link to it.

$ sudo hfsdebug file1
...
  path                 = Leopard HD:/private/tmp/test/file1
# Catalog File Record
  type                 = file (hard link)
  indirect node file   = Leopard HD:/%0000%0000%0000%0000HFS+ Private Data/iNode1927091
  file ID              = 1927094
  flags                = 0000000000100010
                       . File has a thread record in the catalog.
                       . File has hardlink chain.
...
  # BSD Info
  ownerID              = 1927095 (previous link ID)
  groupID              = 0 (next link ID)
  adminFlags           = 00000000
  ownerFlags           = 00000010
                       . UF_IMMUTABLE (file may not be changed)
  fileMode             = -r--r--r--
  iNodeNum             = 1927091 (link reference number)
  textEncoding         = 0
  attrBlocks           = 0
  # Finder Info
  fdType               = 0x686c6e6b (hlnk)
  fdCreator            = 0x6866732b (hfs+)
  fdFlags              = 0000000100000000
                       . kHasBeenInited
...
  # Data Fork
  logicalSize          = 0 bytes
  # Resource Fork
  logicalSize          = 0 bytes

We see that a lot has changed! The on-disk nature of file1 has completely transformed. The original content has actually “moved” to an indirect-node file. What was file1 before has been replaced with a new directory entry altogether: one that has a new file ID within the file system. The new directory entry is also a file, but with several special properties. Its “type” and “creator” (as stored in the Finder Info) are hlnk and hfs+, respectively. It has been marked immutable. It has no content in either its data fork or its resource fork. Moreover, the owner and group ID on-disk fields have been repurposed to act as the previous and next links, respectively, in the hard link chain. We see that the previous link ID is 1927095. Let us use HFSDebug to show us information for that ID.

$ sudo hfsdebug -c 1927095
...
  path                 = Leopard HD:/private/tmp/test/file2
# Catalog File Record
  type                 = file (hard link)
  indirect node file   = Leopard HD:/%0000%0000%0000%0000HFS+ Private Data/iNode1927091
  file ID              = 1927095
  flags                = 0000000000100010
                       . File has a thread record in the catalog.
                       . File has hardlink chain.
...
  # BSD Info
  ownerID              = 0 (previous link ID)
  groupID              = 1927094 (next link ID)
  adminFlags           = 00000000
  ownerFlags           = 00000010
                       . UF_IMMUTABLE (file may not be changed)
  fileMode             = -r--r--r--
  iNodeNum             = 1927091 (link reference number)
  textEncoding         = 0
  attrBlocks           = 0
  # Finder Info
  fdType               = 0x686c6e6b (hlnk)
  fdCreator            = 0x6866732b (hfs+)
  fdFlags              = 0000000100000000
                       . kHasBeenInited
...
  # Data Fork
  logicalSize          = 0 bytes
  # Resource Fork
  logicalSize          = 0 bytes

We see that ID 1927095 corresponds to the other reference we just created: file2. The properties of this reference are similar to those of the other reference file1. They do differ in their file IDs. (They are indeed two separate on-disk file system objects.) They also differ in their previous and next links in the hard link chain. (We can confirm that file1 and file2 are connected together.)
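In terms of the on-disk structures (declared in Mac OS X’s hfs/hfs_format.h), the chain pointers live in the repurposed BSD fields of each hard-link file record. Here is a sketch (not HFSDebug’s code; the record is assumed to have been read from the catalog B-Tree and byte-swapped to host order already).

#include <stdio.h>
#include <hfs/hfs_format.h> /* HFSPlusCatalogFile, HFSPlusBSDInfo */

void
print_chain_neighbors(const HFSPlusCatalogFile *rec)
{
    u_int32_t prev = rec->bsdInfo.ownerID; /* previous link's CNID; 0 if first */
    u_int32_t next = rec->bsdInfo.groupID; /* next link's CNID; 0 if last */
    u_int32_t ref  = rec->bsdInfo.special.iNodeNum; /* link reference number */

    printf("prev = %u, next = %u, target = iNode%u\n", prev, next, ref);
}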

The file’s content is in the indirect node file, which is now also the on-disk object with the original file ID (1927091). Let us use HFSDebug to look at that file.

$ sudo hfsdebug -c 1927091
...
  path                 = Leopard HD:/%0000%0000%0000%0000HFS+ Private Data/iNode1927091
# Catalog File Record
  type                 = file
  file ID              = 1927091
  flags                = 0000000000100010
                       . File has a thread record in the catalog.
                       . File has hardlink chain.
  reserved1            = 1927095 (first link ID)
...
  # BSD Info
  ownerID              = 501 (singh)
  groupID              = 0 (wheel)
  adminFlags           = 00000000
  ownerFlags           = 00000000
...
  # Finder Info
  fdType               = 0x686c6e6b (hlnk)
  fdCreator            = 0x6866732b (hfs+)
  fdFlags              = 0000000100000000
                       . kHasBeenInited
...
  # Data Fork
  logicalSize          = 14 bytes
  totalBlocks          = 1
  fork temperature     = no HFC record in B-Tree
  clumpSize            = 0
  extents              =   startBlock   blockCount      % of file
                             0xbb04b7          0x1       100.00 %
                         1 allocation blocks in 1 extents total.
                         1.00 allocation blocks per extent on an average.
  # Resource Fork
  logicalSize          = 0 bytes

As we see, the indirect-node file acts as the container for several of the original file’s properties. In particular, it has the original file’s content, the owner ID, and the group ID. A reserved field (reserved1) even contains the ID of the head of the hard link chain.

Of course, these are implementation details. HFS+ will show you the expected hard link semantics when you look at these files through the usual file system interfaces.

$ ls -las file1 file2
1927091 8 -rw-r--r--  2 singh  wheel  14 Nov  3 21:55 file1
1927091 8 -rw-r--r--  2 singh  wheel  14 Nov  3 21:55 file2
$ cat file1 file2
This is file1
This is file1

We see that both file1 and file2 show up with identical metadata, including the same “inode” number as you would expect. They also “have” the same content.

Note that if you now delete one of the hard links, say, file2, things will not revert to how they were to begin with. You will have file1 as the only hard-link file along with the indirect-node file.

Let us look at directory hard links next.

Directory Hard Links

It’s not straightforward to create a directory hard link on Mac OS X. Well, that shouldn’t be surprising: directory hard links aren’t meant for third party developers, let alone users. They are essentially an implementation detail needed to make the Time Machine feature of Leopard work. Since we are talking about implementation details here, we will have to create a directory hard link or two—for experimentation, of course.

Leopard at the time of this writing requires the following conditions to be met for a directory hard link’s creation to be allowed. In the following list, “source” refers to the existing directory that will be pointed at by the new directory hard link “destination” that’s being created.

  • The file system must be journaled HFS+.
  • The parent directories of the source and destination must be different.
  • The source’s parent must not be the root directory.
  • The destination must not be in the root directory.
  • The destination must not be a descendant of the source.
  • The destination must not have any ancestor that’s a directory hard link.

If you meet all these conditions, you could create a directory hard link on an HFS+ volume under Mac OS X 10.5 and above. It’s then a matter of writing a program that uses the link() system call.

/* dirlink.c: create a (directory) hard link using the link() system call */

#include <stdio.h>
#include <unistd.h>

int
main(int argc, char** argv)
{
    int ret = -1;
    if (argc == 3) {
        ret = link(argv[1], argv[2]);
        if (ret) {
            perror("link");
        }
    }
    return ret;
}

In our /tmp/test/ testing directory, we’ll create a directory dir1 and a subdirectory subdir. It’s in subdir that we’ll create a hard link dir2 to dir1. This is because dir1 and dir2 can’t have the same parent.

$ gcc -Wall -o dirlink dirlink.c
$ mkdir dir1
$ mkdir subdir

Before we create the directory hard link, let us use HFSDebug to peek at the current on-disk details of dir1.

$ sudo hfsdebug dir1
...
  path                 = Leopard HD:/private/tmp/test/dir1
# Catalog Folder Record
  type                 = folder
  folder ID            = 1927398
  flags                = 0000000000000000
  valence              = 0
...
  # BSD Info
  ownerID              = 501 (singh)
  groupID              = 0 (wheel)
  adminFlags           = 00000000
  ownerFlags           = 00000000
  fileMode             = drwxr-xr-x
  linkCount            = 1
  textEncoding         = 0
  attrBlocks           = 0
  # Finder Info
  frRect               = (top = 0, left = 0), (bottom = 0, right = 0)
  frFlags              = 0000000000000000
  frLocation           = (v = 0, h = 0)
  opaque               = 0
  # Opaque Finder Info
  scrollPosition       = (v = 0, h = 0)
  reserved1            = 0
  Opaque Finder Flags  = 0000000000000000
  reserved2            = 0
  putAwayFolderID      = 0

Let us create the link and confirm that our expectations of directory hard link semantics are met.

$ ./dirlink dir1 subdir/dir2
$ ls -lasdi dir1 subdir/dir2
1927398 0 drwxr-xr-x  2 singh  wheel  68 Nov  3 22:59 dir1
1927398 0 drwxr-xr-x  2 singh  wheel  68 Nov  3 22:59 subdir/dir2
$ echo Hello > dir1/file
$ cat subdir/dir2/file
Hello

Everything looks in order. Let us now use HFSDebug to see what actually happened inside the file system. We looked at dir1‘s on-disk details earlier. We can now see what changed after we created a directory hard link to dir1.

$ sudo hfsdebug dir1
...
  path                 = Leopard HD:/private/tmp/test/dir1
# Catalog File Record
  type                 = file (alias, directory hard link)
  indirect folder      = Leopard HD:/.HFS+ Private Directory Data%000d/dir_1927398
  file ID              = 1927407
  flags                = 0000000000100010
                       . File has a thread record in the catalog.
                       . File has hardlink chain.
...
  # BSD Info
  ownerID              = 1927408 (previous link ID)
  groupID              = 0 (next link ID)
  adminFlags           = 00000000
  ownerFlags           = 00000010
                       . UF_IMMUTABLE (file may not be changed)
  fileMode             = -r--r--r--
  iNodeNum             = 1927398 (link reference number)
  textEncoding         = 0
  attrBlocks           = 0
  # Finder Info
  fdType               = 0x66647270 (fdrp)
  fdCreator            = 0x4d414353 (MACS)
  fdFlags              = 1000000000000000
                       . kIsAlias
  fdLocation           = (v = 0, h = 0)
  opaque               = 0
  # Data Fork
  logicalSize          = 0 bytes
  # Resource Fork
  logicalSize          = 464 bytes
  totalBlocks          = 1
  fork temperature     = no HFC record in B-Tree
  clumpSize            = 0
  extents              =   startBlock   blockCount      % of file
                             0xbae746          0x1       100.00 %
                         1 allocation blocks in 1 extents total.
                         1.00 allocation blocks per extent on an average.

  rsrc contents        = (up to 464 bytes)
       00 00 01 00 00 00 01 9e 00 00 00 9e 00 00 00 32 00 00 00 00 00 00 00 00
                                                     2
...
       00 00 00 00 00 00 00 1c 00 32 00 00 61 6c 69 73 00 00 00 0a 00 00 ff ff
                                   2        a  l  i  s
       00 00 00 00 00 00 00 00

We see that dir1’s transformation is more drastic than what we had observed in the case of file hard links. After we created a directory hard link to dir1, it’s no longer a directory inside the file system. In fact, the “real” directory (that is, the link target) has moved to a special folder (/.HFS+ Private Directory Data␍, whose name ends with a carriage return character), just as the link target of a file hard link had moved to a (different) special folder. Its name within the special folder is dir_1927398, where the number represents the original “inode” number of dir1. However, dir1 hasn’t been replaced by another directory that points to the link target—it has been replaced by a file, or specifically, an alias. (Backward compatibility!) The immutable alias file has fdrp and MACS as its type and creator codes, respectively. It also has a resource fork. Moreover, we see that as in the case of file hard links, there exists a hard link chain.

Let us also examine the link target using HFSDebug. The path would be “hard” to type because of the characters in the special folder’s name. We can use the folder ID instead, which would be the original ID of dir1.

$ sudo hfsdebug -c 1927398
...
  path                 = Leopard HD:/.HFS+ Private Directory Data%000d/dir_1927398
# Catalog Folder Record
  type                 = folder
  folder ID            = 1927398
  flags                = 0000000000100100
                       . Folder has extended attributes.
                       . Folder has hardlink chain.
  valence              = 0
...
  # BSD Info
  ownerID              = 501 (singh)
  groupID              = 0 (wheel)
  adminFlags           = 00000000
  ownerFlags           = 00000000
  fileMode             = drwxr-xr-x
  linkCount            = 2
  textEncoding         = 0
  attrBlocks           = 0
  # Finder Info
  frRect               = (top = 0, left = 0), (bottom = 0, right = 0)
  frFlags              = 0000000000000000
  frLocation           = (v = 0, h = 0)
  opaque               = 0
...
# Attributes
...
  # Attribute Key
  keyLength            = 72
  pad                  = 0
  fileID               = 1927398
  startBlock           = 0
  attrNameLen          = 30
  attrName             = com.apple.system.hfs.firstlink
  # Inline Data
  recordType           = 0x10
  reserved[0]          = 0
  reserved[1]          = 0
  attrSize             = 8 bytes
  attrData             = 31 39 32 37 34 30 38 00
                          1  9  2  7  4  0  8     

We see mostly what we would expect given our previous observation of the implementation details of file hard links. There is one more thing in this case though: the folder has an extended attribute whose name is com.apple.system.hfs.firstlink and whose value is an encoding of the “inode” number of the head of the directory hard link chain.

HFSDebug Filters

At this point, you could use the built-in dirhardlink filter in HFSDebug to enumerate all directory hard links on the volume.

$ sudo hfsdebug --filter=builtin:dirhardlink
2 links -> dir_1927398
Leopard HD:/private/tmp/test/dir1 -> dir_1927398
Leopard HD:/private/tmp/test/subdir/dir2 -> dir_1927398

The filter prints both link targets and link references. For a link target, the number of references to it is printed before it. For a link reference, the target that it points to is printed after it.

By the way, filters are a very useful recent addition to HFSDebug. A fundamental capability of HFSDebug is to go over all the entries in the HFS+ catalog file. It uses this capability to generate many types of statistics. The recently added filter support makes it possible for you to write a program that plugs into HFSDebug and receives a callback for each catalog file entry. That way, you can examine each entry, apply arbitrary criteria, and show (or not show) details about that entry. Say you wish to list all setuid/setgid files on an HFS+ volume. Sure, you could run a find command to do that. On one of my HFS+ volumes with about a million files and 200K folders, find takes a while to do this.

$ time sudo find / -xdev -type f \( -perm -4000 -o -perm -2000 \)
...
6.41s user 94.12s system 34% cpu 4:53.35 total
$

You could do this much faster with the sxid built-in HFSDebug filter, whose implementation is a mere ten lines of C code. (Of course, the absolute time taken will also depend on the underlying hardware, but we are only interested in the relative time difference.)

$ time sudo hfsdebug --filter=builtin:sxid
...
2.86s user 9.33s system 17% cpu 1:08.04 total
$

Note that many types of searches on HFS+ can also be done through the searchfs() system call, although it can be quite cumbersome to use. Of course, searchfs() cannot be done on an unmounted volume.

Specifying File System Objects by Path on Unmounted Volumes

As we have seen, a common use for HFSDebug is to have it display implementation details of individual file system objects. You could specify the object of interest in several ways: by providing its catalog node ID (CNID), by providing an “fsspec” style pair consisting of the parent folder’s CNID and the object’s name, or by providing a POSIX-style path to the object. The latter is often the easiest and most convenient to specify. However, until now, HFSDebug did not do component-wise path lookups itself—it used the operating system to convert the path to an inode number. This results in a few caveats. To begin with, it’s against the HFSDebug philosophy of not relying on the operating system for any HFS+-related operations. It also means that if the volume in question is not mounted (say, it’s corrupt and can’t be mounted, or you are investigating something and don’t want to mount it), you can’t use paths to look at individual objects. You will have to dump all objects on the file system and then find the node ID of the object of interest. Moreover, even on a mounted volume, the operating system disallows path-based access to several files. (See Chapter 12 of Mac OS X Internals.) In such cases, again, you will need to know the node ID of the object of interest.

I’ve “fixed this issue” (or “added the feature”, depending on how you look at it) in the new version of HFSDebug. Say you have an unmounted volume on /dev/disk5s1 and you want to examine /tmp/foo/bar on it. Now you can simply do:

$ sudo hfsdebug -d /dev/disk5s1 /tmp/foo/bar
...

The semantics of symbolic link resolution are as follows. If the object (bar in this example) is a symbolic link itself, then HFSDebug will show you properties of bar and not what it points to. This is in line with HFSDebug philosophy and also how things work today on mounted volumes. If, however, a nonterminal component of the path is a symbolic link, HFSDebug will resolve it. Again, this is desirable.

That’s about it.

One More Thing

I can’t talk about HFSDebug’s Snow Leopard-specific features since the latter is under NDA. If you do have access to the latest Snow Leopard seed, try HFSDebug on it. For example, examine some standard Mac OS X files using HFSDebug.

$ sudo hfsdebug /bin/ls
...
$ sudo hfsdebug /etc/asl.conf
...
$ sudo hfsdebug /Applications/Mail.app/Contents/PkgInfo
...

