HFSDebug 4.0 and New HFS+ Features

I wrote HFSDebug in early 2004. I initially made it available as a software tool to help understand fragmentation in HFS+ volumes, although it could also be used to analyze several implementation details of HFS+. Eventually, I extended HFSDebug to be able to analyze all on-disk aspects of HFS+, along with the ability to compute more types of volume statistics and to even retrieve some in-memory details of mounted HFS+ volumes.

HFSDebug has been an extremely useful tool for me. I’ve used it to help explain the workings of HFS+ in the Mac OS X Internals book, to understand occasional mysterious behavior in HFS+ volumes, to search for file system objects, to generate interesting file system statistics (top N largest files, top N fragmented files, resource forks vs data forks, contiguous free space, and so on), and to create interesting demos. (For example, to show HFS+ mechanisms such as Hot File Clustering and On-the-Fly Defragmentation at work.)

HFS+ is the preferred and default volume format on Mac OS X. Even with exciting new developments such as ZFS support in Mac OS X, I don’t expect HFS+ to become obsolete any time soon. Today’s Macintosh computers, iPods, iPhones, and Apple TVs all use HFS+.

With most major releases of Mac OS X, HFS+ has gained new capabilities. Features such as metadata journaling, on-the-fly file defragmentation, hot file clustering, extended attributes, access control lists, hard link chains (tracking hard links), and directory hard links have come to HFS+ in recent years. There have been “news reports” of compression being an upcoming feature in HFS+.

The most interesting technical things about HFS+ are not its features, but how several of the newer features are implemented. With the goal of retaining backward compatibility, new features have often been retrofitted, or shoehorned, if you will, into HFS+. Knowing such implementation details evokes different reactions in different people, ranging from “That’s a nifty way to implement this!” to “Gross!” This is something you can decide for yourself with the help of HFSDebug, which can show you exactly how the file system works.

New Features in HFSDebug

Every time a new feature is added to HFS+, HFSDebug usually needs to be updated as well, say, to recognize and parse a new type of on-disk object such as a directory hard link. I’m releasing a new version of HFSDebug that has the following improvements.

  • Ability to show details of directory hard links.
  • Ability to show details of hard link chains.
  • New built-in filters: atime (find files by access time), dirhardlink (list directory hard links), hardlink (list file hard links), and sxid (list setuid/setgid files).
  • Ability to do component-wise path lookup from scratch, allowing you to analyze individual file system objects by path even on an unmounted HFS+ volume.
  • Support for Snow Leopard.
  • Numerous subtle improvements and some bugfixes.

Still a PowerPC binary!?

It may be surprising (or troubling) to some of you that there is still no x86 version of HFSDebug: it’s available only as a PowerPC executable. Well, there is some logic to this madness. You see, I wrote HFSDebug in the “Panther” (10.3) days. Mac OS X was PowerPC-only then. It was also big endian. That matters because HFS+ uses big endian for its on-disk structures.

HFSDebug is a complex program. It essentially reads raw data from an HFS+ disk (say, a partition on a real disk or a disk image) and recreates a read-only HFS+ file system in memory. To simplify matters, I decided to skip structure-by-structure, field-by-field endianness conversion—after all, I was only targeting the big-endian-only Mac OS X. By contrast, the xnu kernel’s HFS+ implementation does do byte swapping on x86. So does the fsck_hfs program.

As long as Rosetta exists, HFSDebug can get away with being a PowerPC executable, allowing me to defer the grunt work of swapping bytes to a later date.
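To illustrate what that grunt work involves, here is a minimal sketch of the kind of helper an endianness-aware port would need for every multi-byte on-disk field. This is a generic technique, not HFSDebug’s actual code; extracting the bytes individually makes the result independent of the host’s byte order.

```c
#include <stdint.h>

/* Interpret a big-endian 32-bit on-disk field regardless of host
 * endianness. Reading byte by byte sidesteps the host byte order. */
static uint32_t be32_to_host(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
```

The xnu kernel and fsck_hfs achieve the same effect by swapping whole structures in place after reading them from disk.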

Let us take the new HFSDebug features for a spin.

Hard Link Chains

Although support for file hard links has existed in HFS+ since before Leopard, Leopard adds a chaining feature that keeps track of hard link chains: doubly linked lists of file IDs connecting the hard links to a given file together. Hard links to a file on HFS+ are conceptually similar to those on Unix systems: they represent multiple directory entries referring to common file content. Implementation-wise, HFS+ uses a special hard-link file for each directory entry. The common file content is stored in another special file: the indirect-node file. All indirect-node files are stored in the private metadata folder, a special directory (/␀␀␀␀HFS+ Private Data, whose name begins with four null characters, rendered here as ␀) that’s normally invisible to the user and has a name that’s “hard” to type. It’s much easier to understand this through HFSDebug.

We begin by creating a file called file1. Before we create a hard link to this file, we examine its details using HFSDebug. That way, we can tell if anything about the file changes after link creation.

$ mkdir /tmp/test/
$ cd /tmp/test
$ echo "This is file1" > file1
$ sudo hfsdebug file1
...
  path                 = Leopard HD:/private/tmp/test/file1
# Catalog File Record
  type                 = file
  file ID              = 1927091
  flags                = 0000000000000010
...
  # BSD Info
  ownerID              = 501 (singh)
  groupID              = 0 (wheel)
  adminFlags           = 00000000
  ownerFlags           = 00000000
  fileMode             = -rw-r--r--
  linkCount            = 1
  textEncoding         = 0
  attrBlocks           = 0
  # Finder Info
  fdType               = 0
  fdCreator            = 0
  fdFlags              = 0000000000000000
...

Let us now make a hard link to this file.

$ ln file1 file2

Let us see if anything has changed about file1 now that we made a hard link to it.

$ sudo hfsdebug file1
...
  path                 = Leopard HD:/private/tmp/test/file1
# Catalog File Record
  type                 = file (hard link)
  indirect node file   = Leopard HD:/%0000%0000%0000%0000HFS+ Private Data/iNode1927091
  file ID              = 1927094
  flags                = 0000000000100010
                       . File has a thread record in the catalog.
                       . File has hardlink chain.
...
  # BSD Info
  ownerID              = 1927095 (previous link ID)
  groupID              = 0 (next link ID)
  adminFlags           = 00000000
  ownerFlags           = 00000010
                       . UF_IMMUTABLE (file may not be changed)
  fileMode             = -r--r--r--
  iNodeNum             = 1927091 (link reference number)
  textEncoding         = 0
  attrBlocks           = 0
  # Finder Info
  fdType               = 0x686c6e6b (hlnk)
  fdCreator            = 0x6866732b (hfs+)
  fdFlags              = 0000000100000000
                       . kHasBeenInited
...
  # Data Fork
  logicalSize          = 0 bytes
  # Resource Fork
  logicalSize          = 0 bytes

We see that a lot has changed! The on-disk nature of file1 has completely transformed. The original content has actually “moved” to an indirect-node file. What was file1 before has been replaced with a new directory entry altogether: one that has a new file ID within the file system. The new directory entry is also a file, but with several special properties. Its “type” and “creator” (as stored in the Finder Info) are hlnk and hfs+, respectively. It has been marked immutable. It has no content in either its data fork or its resource fork. Moreover, the owner and group ID on-disk fields have been repurposed to act as the previous and next links, respectively, in the hard link chain. We see that the previous link ID is 1927095. Let us use HFSDebug to show us information for that ID.
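To make the chain structure concrete, here is a small C sketch of walking such a chain by following the repurposed next-link field until it reaches 0. The struct and lookup function are hypothetical stand-ins for illustration, not HFSDebug’s (or the kernel’s) actual data structures; the IDs in the demo table come from the transcript above.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical in-memory view of a hard-link file's catalog information.
 * On disk, HFS+ repurposes the BSD ownerID and groupID fields of such a
 * file as the previous and next link IDs; 0 terminates the chain. */
struct link_record {
    uint32_t file_id; /* catalog node ID of this hard-link file  */
    uint32_t prev_id; /* repurposed ownerID: previous link, or 0 */
    uint32_t next_id; /* repurposed groupID: next link, or 0     */
};

/* Lookup function mapping a file ID to its record (NULL if absent). */
typedef const struct link_record *(*lookup_fn)(uint32_t file_id);

/* Walk the chain from its head (a record whose prev_id is 0), following
 * next_id until it reaches 0, and return the number of links visited. */
static unsigned count_chain(uint32_t head_id, lookup_fn lookup)
{
    unsigned n = 0;
    uint32_t id = head_id;
    while (id != 0) {
        const struct link_record *r = lookup(id);
        if (r == NULL)
            break;
        n++;
        id = r->next_id;
    }
    return n;
}

/* A two-link demo chain mirroring the file1/file2 example:
 * 1927095 (file2) is the head, 1927094 (file1) is the tail. */
static const struct link_record demo[] = {
    { 1927095, 0,       1927094 },
    { 1927094, 1927095, 0       },
};

static const struct link_record *demo_lookup(uint32_t file_id)
{
    size_t i;
    for (i = 0; i < sizeof(demo) / sizeof(demo[0]); i++)
        if (demo[i].file_id == file_id)
            return &demo[i];
    return NULL;
}
```

Presumably, the doubly linked structure is what lets the implementation splice out a deleted link without scanning the entire chain.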

$ sudo hfsdebug -c 1927095
...
  path                 = Leopard HD:/private/tmp/test/file2
# Catalog File Record
  type                 = file (hard link)
  indirect node file   = Leopard HD:/%0000%0000%0000%0000HFS+ Private Data/iNode1927091
  file ID              = 1927095
  flags                = 0000000000100010
                       . File has a thread record in the catalog.
                       . File has hardlink chain.
...
  # BSD Info
  ownerID              = 0 (previous link ID)
  groupID              = 1927094 (next link ID)
  adminFlags           = 00000000
  ownerFlags           = 00000010
                       . UF_IMMUTABLE (file may not be changed)
  fileMode             = -r--r--r--
  iNodeNum             = 1927091 (link reference number)
  textEncoding         = 0
  attrBlocks           = 0
  # Finder Info
  fdType               = 0x686c6e6b (hlnk)
  fdCreator            = 0x6866732b (hfs+)
  fdFlags              = 0000000100000000
                       . kHasBeenInited
...
  # Data Fork
  logicalSize          = 0 bytes
  # Resource Fork
  logicalSize          = 0 bytes

We see that ID 1927095 corresponds to the other reference we just created: file2. Its properties are similar to those of file1. The two differ in their file IDs (they are indeed two separate on-disk file system objects) and in their previous and next links in the hard link chain, which confirms that file1 and file2 are connected together.

The file’s content is in the indirect-node file, which is now also the on-disk object with the original file ID (1927091). Let us use HFSDebug to look at that file.

$ sudo hfsdebug -c 1927091
...
  path                 = Leopard HD:/%0000%0000%0000%0000HFS+ Private Data/iNode1927091
# Catalog File Record
  type                 = file
  file ID              = 1927091
  flags                = 0000000000100010
                       . File has a thread record in the catalog.
                       . File has hardlink chain.
  reserved1            = 1927095 (first link ID)
...
  # BSD Info
  ownerID              = 501 (singh)
  groupID              = 0 (wheel)
  adminFlags           = 00000000
  ownerFlags           = 00000000
...
  # Finder Info
  fdType               = 0x686c6e6b (hlnk)
  fdCreator            = 0x6866732b (hfs+)
  fdFlags              = 0000000100000000
                       . kHasBeenInited
...
  # Data Fork
  logicalSize          = 14 bytes
  totalBlocks          = 1
  fork temperature     = no HFC record in B-Tree
  clumpSize            = 0
  extents              =   startBlock   blockCount      % of file
                             0xbb04b7          0x1       100.00 %
                         1 allocation blocks in 1 extents total.
                         1.00 allocation blocks per extent on an average.
  # Resource Fork
  logicalSize          = 0 bytes

As we see, the indirect-node file acts as the container for several of the original file’s properties. In particular, it has the original file’s content, the owner ID, and the group ID. A reserved field (reserved1) even contains the ID of the head of the hard link chain.

Of course, these are implementation details. HFS+ will show you the expected hard link semantics when you look at these files through the usual file system interfaces.

$ ls -las file1 file2
1927091 8 -rw-r--r--  2 singh  wheel  14 Nov  3 21:55 file1
1927091 8 -rw-r--r--  2 singh  wheel  14 Nov  3 21:55 file2
$ cat file1 file2
This is file1
This is file1

We see that both file1 and file2 show up with identical metadata, including the same “inode” number as you would expect. They also “have” the same content.

Note that if you now delete one of the hard links, say, file2, things will not revert to how they were to begin with. You will have file1 as the only hard-link file along with the indirect-node file.

Let us look at directory hard links next.

Directory Hard Links

It’s not straightforward to create a directory hard link on Mac OS X. Well, that shouldn’t be surprising: directory hard links aren’t meant for third-party developers, let alone users. They are essentially an implementation detail needed to make the Time Machine feature of Leopard work. Since we are talking about implementation details here, we will have to create a directory hard link or two, for experimentation, of course.

At the time of this writing, Leopard requires the following conditions to be met before it will allow a directory hard link to be created. In the following list, “source” refers to the existing directory that will be pointed at by the new directory hard link “destination” that’s being created.

  • The file system must be journaled HFS+.
  • The parent directories of the source and destination must be different.
  • The source’s parent must not be the root directory.
  • The destination must not be in the root directory.
  • The destination must not be a descendant of the source.
  • The destination must not have any ancestor that’s a directory hard link.

If you meet all these conditions, you can create a directory hard link on an HFS+ volume under Mac OS X 10.5 and above. It’s then a matter of writing a program that uses the link() system call.

/* dirlink.c -- create a hard link (possibly to a directory) via link() */

#include <stdio.h>
#include <unistd.h>

int
main(int argc, char** argv)
{
    int ret = -1;
    if (argc == 3) {
        ret = link(argv[1], argv[2]);
        if (ret) {
            perror("link");
        }
    } else {
        fprintf(stderr, "usage: %s <source> <destination>\n", argv[0]);
    }
    return ret;
}

In our /tmp/test/ testing directory, we’ll create a directory dir1 and a subdirectory subdir. It’s in subdir that we’ll create a hard link dir2 to dir1. This is because dir1 and dir2 can’t have the same parent.

$ gcc -Wall -o dirlink dirlink.c
$ mkdir dir1
$ mkdir subdir

Before we create the directory hard link, let us use HFSDebug to peek at the current on-disk details of dir1.

$ sudo hfsdebug dir1
...
  path                 = Leopard HD:/private/tmp/test/dir1
# Catalog Folder Record
  type                 = folder
  folder ID            = 1927398
  flags                = 0000000000000000
  valence              = 0
...
  # BSD Info
  ownerID              = 501 (singh)
  groupID              = 0 (wheel)
  adminFlags           = 00000000
  ownerFlags           = 00000000
  fileMode             = drwxr-xr-x
  linkCount            = 1
  textEncoding         = 0
  attrBlocks           = 0
  # Finder Info
  frRect               = (top = 0, left = 0), (bottom = 0, right = 0)
  frFlags              = 0000000000000000
  frLocation           = (v = 0, h = 0)
  opaque               = 0
  # Opaque Finder Info
  scrollPosition       = (v = 0, h = 0)
  reserved1            = 0
  Opaque Finder Flags  = 0000000000000000
  reserved2            = 0
  putAwayFolderID      = 0

Let us create the link and confirm that our expectations of directory hard link semantics are met.

$ ./dirlink dir1 subdir/dir2
$ ls -lasdi dir1 subdir/dir2
1927398 0 drwxr-xr-x  2 singh  wheel  68 Nov  3 22:59 dir1
1927398 0 drwxr-xr-x  2 singh  wheel  68 Nov  3 22:59 subdir/dir2
$ echo Hello > dir1/file
$ cat subdir/dir2/file
Hello

Everything looks in order. Let us now use HFSDebug to see what actually happened inside the file system. We looked at dir1‘s on-disk details earlier. We can now see what changed after we created a directory hard link to dir1.

$ sudo hfsdebug dir1
...
  path                 = Leopard HD:/private/tmp/test/dir1
# Catalog File Record
  type                 = file (alias, directory hard link)
  indirect folder      = Leopard HD:/.HFS+ Private Directory Data%000d/dir_1927398
  file ID              = 1927407
  flags                = 0000000000100010
                       . File has a thread record in the catalog.
                       . File has hardlink chain.
...
  # BSD Info
  ownerID              = 1927408 (previous link ID)
  groupID              = 0 (next link ID)
  adminFlags           = 00000000
  ownerFlags           = 00000010
                       . UF_IMMUTABLE (file may not be changed)
  fileMode             = -r--r--r--
  iNodeNum             = 1927398 (link reference number)
  textEncoding         = 0
  attrBlocks           = 0
  # Finder Info
  fdType               = 0x66647270 (fdrp)
  fdCreator            = 0x4d414353 (MACS)
  fdFlags              = 1000000000000000
                       . kIsAlias
  fdLocation           = (v = 0, h = 0)
  opaque               = 0
  # Data Fork
  logicalSize          = 0 bytes
  # Resource Fork
  logicalSize          = 464 bytes
  totalBlocks          = 1
  fork temperature     = no HFC record in B-Tree
  clumpSize            = 0
  extents              =   startBlock   blockCount      % of file
                             0xbae746          0x1       100.00 %
                         1 allocation blocks in 1 extents total.
                         1.00 allocation blocks per extent on an average.

  rsrc contents        = (up to 464 bytes)
       00 00 01 00 00 00 01 9e 00 00 00 9e 00 00 00 32 00 00 00 00 00 00 00 00
                                                     2
...
       00 00 00 00 00 00 00 1c 00 32 00 00 61 6c 69 73 00 00 00 0a 00 00 ff ff
                                   2        a  l  i  s
       00 00 00 00 00 00 00 00

We see that dir1‘s transformation is more drastic than what we had observed in the case of file hard links. After we created a directory hard link to dir1, it’s no longer a directory inside the file system. In fact, the “real” directory (that is, the link target) has moved to a special folder (/.HFS+ Private Directory Data, whose name ends with a carriage return character), just as the link target of a file hard link had moved to a (different) special folder. Its name within the special folder is dir_1927398, where the number is the original “inode” number of dir1. However, dir1 hasn’t been replaced by another directory that points to the link target; it has been replaced by a file, or specifically, an alias. (Backward compatibility!) The immutable alias file has fdrp and MACS as its type and creator codes, respectively. It also has a resource fork. Moreover, we see that as in the case of file hard links, there exists a hard link chain.

Let us also examine the link target using HFSDebug. The path would be “hard” to type because of the characters in the special folder’s name. We can use the folder ID instead, which would be the original ID of dir1.

$ sudo hfsdebug -c 1927398
...
  path                 = Leopard HD:/.HFS+ Private Directory Data%000d/dir_1927398
# Catalog Folder Record
  type                 = folder
  folder ID            = 1927398
  flags                = 0000000000100100
                       . Folder has extended attributes.
                       . Folder has hardlink chain.
  valence              = 0
...
  # BSD Info
  ownerID              = 501 (singh)
  groupID              = 0 (wheel)
  adminFlags           = 00000000
  ownerFlags           = 00000000
  fileMode             = drwxr-xr-x
  linkCount            = 2
  textEncoding         = 0
  attrBlocks           = 0
  # Finder Info
  frRect               = (top = 0, left = 0), (bottom = 0, right = 0)
  frFlags              = 0000000000000000
  frLocation           = (v = 0, h = 0)
  opaque               = 0
...
# Attributes
...
  # Attribute Key
  keyLength            = 72
  pad                  = 0
  fileID               = 1927398
  startBlock           = 0
  attrNameLen          = 30
  attrName             = com.apple.system.hfs.firstlink
  # Inline Data
  recordType           = 0x10
  reserved[0]          = 0
  reserved[1]          = 0
  attrSize             = 8 bytes
  attrData             = 31 39 32 37 34 30 38 00
                          1  9  2  7  4  0  8     

We see mostly what we would expect given our previous observation of the implementation details of file hard links. There is one more thing in this case though: the folder has an extended attribute whose name is com.apple.system.hfs.firstlink and whose value is an encoding of the “inode” number of the head of the directory hard link chain.
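Note the encoding: judging by the dump above, the attribute value stores the head’s ID as a NUL-terminated ASCII decimal string (“1927408”) rather than as a binary integer. Decoding it is a one-liner; firstlink_from_attr is a hypothetical helper name used only for this sketch.

```c
#include <stdint.h>
#include <stdlib.h>

/* Decode a com.apple.system.hfs.firstlink value: the head CNID is
 * stored as a NUL-terminated ASCII decimal string, e.g. "1927408". */
static uint32_t firstlink_from_attr(const char *attr_data)
{
    return (uint32_t)strtoul(attr_data, NULL, 10);
}
```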

HFSDebug Filters

At this point, you could use the built-in dirhardlink filter in HFSDebug to enumerate all directory hard links on the volume.

$ sudo hfsdebug --filter=builtin:dirhardlink
2 links -> dir_1927398
Leopard HD:/private/tmp/test/dir1 -> dir_1927398
Leopard HD:/private/tmp/test/subdir/dir2 -> dir_1927398

The filter prints both link targets and link references. For a link target, the number of references to it is printed before it. For a link reference, the target that it points to is printed after it.

By the way, filters are a very useful recent addition to HFSDebug. A fundamental capability of HFSDebug is to iterate over all the entries in the HFS+ catalog file, which it uses to generate many types of statistics. The recently added filter support makes it possible for you to write a program that plugs into HFSDebug and receives a callback for each catalog file entry. That way, you can examine each entry, apply arbitrary criteria, and show (or suppress) details about that entry. Say you wish to list all setuid/setgid files on an HFS+ volume. Sure, you could run a find command to do that, but on one of my HFS+ volumes, with about a million files and 200K folders, find takes a while.

$ time sudo find / -xdev -type f \( -perm -4000 -o -perm -2000 \)
...
6.41s user 94.12s system 34% cpu 4:53.35 total
$

You could do this much faster with the sxid built-in HFSDebug filter, whose implementation is a mere ten lines of C code. (Of course, the absolute time taken will also depend on the underlying hardware, but we are only interested in the relative time difference.)

$ time sudo hfsdebug --filter=builtin:sxid
...
2.86s user 9.33s system 17% cpu 1:08.04 total
$

Note that many types of searches on HFS+ can also be performed through the searchfs() system call, although it can be quite cumbersome to use. Of course, searchfs() cannot be used on an unmounted volume.

Specifying File System Objects by Path on Unmounted Volumes

As we have seen, a common use for HFSDebug is to have it display implementation details of individual file system objects. You can specify the object of interest in several ways: by providing its catalog node ID (CNID), by providing an “fsspec”-style pair consisting of the parent folder’s CNID and the object’s name, or by providing a POSIX-style path to the object. The last is often the easiest and most convenient. However, until now, HFSDebug did not do component-wise path lookups itself: it used the operating system to convert the path to a node ID. This approach has a few drawbacks. To begin with, it goes against the HFSDebug philosophy of not relying on the operating system for any HFS+-related operations. It also means that if the volume in question is not mounted (say, it’s corrupt and can’t be mounted, or you are investigating something and don’t want to mount it), you can’t use paths to look at individual objects; you would have to dump all objects on the file system and then find the node ID of the object of interest. Moreover, even on a mounted volume, the operating system disallows path-based access to several files. (See Chapter 12 of Mac OS X Internals.) In such cases, you again need to know the object’s node ID.

I’ve “fixed this issue” (or “added the feature”, depending on how you look at it) in the new version of HFSDebug. Say you have an unmounted volume on /dev/disk5s1 and you want to examine /tmp/foo/bar on it. Now you can simply do:

$ sudo hfsdebug -d /dev/disk5s1 /tmp/foo/bar
...

The semantics of symbolic link resolution are as follows. If the object (bar in this example) is a symbolic link itself, then HFSDebug will show you the properties of bar and not of what it points to. This is in line with the HFSDebug philosophy, and it is also how things work today on mounted volumes. If, however, a nonterminal component of the path is a symbolic link, HFSDebug will resolve it. Again, this is the desirable behavior.
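For the terminal component, this matches the familiar lstat(2) semantics, as opposed to stat(2), which would follow a terminal symbolic link. A quick sketch of the distinction:

```c
#include <stdbool.h>
#include <sys/stat.h>

/* Return true if path itself is a symbolic link. lstat() examines the
 * link rather than following it, which is exactly the terminal-component
 * behavior described above; stat() would follow the link instead. */
static bool is_symlink(const char *path)
{
    struct stat sb;
    if (lstat(path, &sb) != 0)
        return false;
    return S_ISLNK(sb.st_mode);
}
```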

That’s about it.

One More Thing

I can’t talk about HFSDebug’s Snow Leopard-specific features since Snow Leopard is under NDA. If you do have access to the latest Snow Leopard seed, try HFSDebug on it. For example, examine some standard Mac OS X files using HFSDebug.

$ sudo hfsdebug /bin/ls
...
$ sudo hfsdebug /etc/asl.conf
...
$ sudo hfsdebug /Applications/Mail.app/Contents/PkgInfo
...



All contents of this site, unless otherwise noted, are ©1994-2014 Amit Singh. All Rights Reserved.