Why Macs have millions of tiny files


So, you agree with the article, then.

Citation, please.

Those zillions of stray files don’t seem to consume much space, but tools like SuperDuper! and Disk Utility “verify disk” need to process each and every one of them in one way or another.

You are probably archiving to a different filesystem; ._filename holds the resource fork (and other metadata) for filename on a non-HFS volume. I’m not sure whether the resource fork gets pulled out the same way when creating a tarball or compressed archive. I can’t even find anything with a resource fork on this recent-ish clean install.

EDIT: Not to be confused with .DS_Store, which is the Finder’s metadata for a folder’s default view, window position, and so forth.
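If you want to see how many of these have piled up on a given volume, a quick tally is easy. Here’s a minimal Python sketch (point it at whatever share or folder you’re curious about) that counts AppleDouble and .DS_Store files and flags orphans whose data-fork twin is gone:

```python
#!/usr/bin/env python3
"""Tally AppleDouble (._*) and .DS_Store droppings under a directory.

AppleDouble files carry the resource fork / extended attributes that
HFS+ stores natively but foreign filesystems (SMB, FAT, ext4) cannot.
"""
import os
import sys

def tally(root: str) -> None:
    appledouble, ds_store, orphans = 0, 0, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        names = set(filenames)
        for name in filenames:
            if name == ".DS_Store":
                ds_store += 1
            elif name.startswith("._"):
                appledouble += 1
                # An AppleDouble file whose data-fork twin is gone is an orphan.
                if name[2:] not in names:
                    orphans += 1
    print(f"AppleDouble files: {appledouble} ({orphans} orphaned)")
    print(f".DS_Store files:   {ds_store}")

if __name__ == "__main__":
    tally(sys.argv[1] if len(sys.argv) > 1 else ".")
```

As for tarballs: my understanding is that Apple’s bundled bsdtar will stash extended attributes and resource forks as ._ entries inside the archive unless you set COPYFILE_DISABLE=1 in the environment, which would explain archives sprouting them when extracted elsewhere.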

I’ve heard that the computer itself is actually made of trillions of tiny molecules. Is that true? Golly, how am I ever going to keep track of all that? Thanks Apple. (sad trombone)


[image] (couldn’t find an animated one)


You know what would be great? If you Windows Fanbois weren’t so quick to assume that everything Mac was bad. Steve Jobs designed those millions of tiny flies because He knew that millions of tiny flies would offer a better user experience for everyone except Neanderthals who are stuck on M$ Wind0wz. You think you don’t want millions of tiny flies, but the moment Windows 10 comes out with millions of tiny flies you’re going to be loving them. Well, don’t forget to thank Steve Jobs. In heaven.


Each of those flies is a reincarnation of Steve Jobs.

I asked for a citation of where the article author believes the number of files causes slowdown.

But you provided a quote from “Doug”:

Doug Eldred writes in with a concern about a form of file bloat—but not about bloated sizes. Rather, the sheer number of items that seem to appear on his drive.


Doug continues:

Those zillions of stray files don’t seem to consume much space, but tools like SuperDuper! and Disk Utility “verify disk” need to process each and every one of them in one way or another.

So, back in context, it’s a question that is being asked by a reader, not the author.

And, lo! There is an answer from the author that lays out his opinion on files causing slowdowns:

To my recollection and experience, the number of files *shouldn’t* contribute to any system slowdowns, because they’re inert unless needed. [emphasis added]

The article goes on to play devil’s advocate and explore whether those files could actually cause slowdowns. TL;DR: they don’t.

OK, who prefers a monolithic registry file that is impossible to back up, change-control, clean up, or repair after corruption?
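To make the contrast concrete: on a Mac, each app’s settings live in a standalone plist file, so backup, diffing, and damage control are plain file operations. A rough Python sketch using the standard library’s plistlib (the bundle ID com.example.SomeApp is made up; substitute a real one):

```python
import plistlib
from pathlib import Path

# Hypothetical bundle ID; point this at any app's actual preferences file.
prefs = Path.home() / "Library/Preferences/com.example.SomeApp.plist"

with prefs.open("rb") as fp:
    settings = plistlib.load(fp)  # handles both binary and XML plists

print(settings)

# "Backing up" one app's settings is a single-file copy; corruption is
# contained to that one file rather than a monolithic registry hive.
backup = prefs.with_name(prefs.name + ".bak")
backup.write_bytes(prefs.read_bytes())
```

Corruption in one plist takes out one app’s settings, not the whole system’s.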


Meh, it pays the mortgage.
After fiddling with OSes since CP/M, DOS, and the Apple II, I’ve gotten to the point where, you know, they all suck, just not at the same things. Sadly, I am too familiar with the bowels of the Windows registry.


Bundling of an application, including configuration metadata and selected dependencies, to support portability?

It is a NeXT design. Yeah! NeXT!

OS X was, in 1998/99, really in most regards a successor version of NeXTStep, with Mac OS compatibility and UI conventions.

This Mac compatibility was partially achieved by the app container bundle. It also assisted in bridging NeXTStep hardware platforms from the Motorola 680x0 to PowerPC, SPARC, and even Intel. Some of these tricks were repeated in Mac OS, where a single “fat binary” would run on multiple OS frameworks or even processor platforms, from a single compile by the developer.
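Those fat binaries are simple enough that you can poke at one yourself: per Apple’s mach-o/fat.h, the file starts with a big-endian fat_header (magic 0xCAFEBABE) followed by one fat_arch record per embedded architecture. A hedged little Python reader (it ignores the 64-bit fat variant, and /bin/ls being fat is just a guess about your machine):

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # big-endian magic for a fat ("universal") Mach-O
CPU_NAMES = {
    7: "i386",
    7 | 0x01000000: "x86_64",
    12: "arm",
    12 | 0x01000000: "arm64",
    18: "ppc",
    18 | 0x01000000: "ppc64",
}

def list_architectures(path: str) -> list[str]:
    with open(path, "rb") as fp:
        header = fp.read(8)
        if len(header) < 8:
            return []
        magic, nfat_arch = struct.unpack(">II", header)
        if magic != FAT_MAGIC:
            return []  # thin binary (or not Mach-O at all)
        archs = []
        for _ in range(nfat_arch):
            # fat_arch: cputype, cpusubtype, offset, size, align (all uint32, big-endian)
            cputype, _sub, _off, _size, _align = struct.unpack(">IIIII", fp.read(20))
            archs.append(CPU_NAMES.get(cputype, hex(cputype)))
        return archs

# e.g. on a recent Mac this will likely print ["x86_64", "arm64"]
print(list_architectures("/bin/ls"))
```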

NeXT! Yeah!


I really liked the NeXT, though I only got to do some cursory playing around with them in the university lab.

System restore?

This behavior is really endearing when the Macs start doing it all over your non-HFS file server, let me tell you.


Man. I had “unauthorized” access to the Media Lab NeXT Cube for almost two years. The problem? Only a 1 MB disk quota… on 1 GB of optical storage.

This ran TIA (The Internet Adapter) for me, as my first regular Internet pipe at home, via a shell session on the NeXT. 1993.

Agreed. The real question is when to use a database and when to use lots of little files, and without knowing a lot more about their architecture, I’m not going to argue that Spotlight or Time Capsule do things WRONG. But really, HFS+ can hold more than 2 BILLION files in a single directory, so a few million total should be no big deal.

Some trimming of what Spotlight is indexing, and of how far back you keep backups, might go a very long way toward improving performance, regardless of the DB-vs.-filesystem strategy.
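Before blaming the filesystem, it’s worth finding out where the millions actually live. A quick sketch that walks a tree and prints the directories holding the most files (point it at your home folder, or wherever):

```python
import os
import sys
from collections import Counter

def heaviest_directories(root: str, top: int = 20) -> None:
    """Count files per directory under root and print the worst offenders."""
    counts = Counter()
    for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
        counts[dirpath] = len(filenames)
    total = sum(counts.values())
    print(f"{total} files under {root}")
    for path, n in counts.most_common(top):
        print(f"{n:>10}  {path}")

if __name__ == "__main__":
    heaviest_directories(sys.argv[1] if len(sys.argv) > 1 else os.path.expanduser("~"))
```

If the top entries turn out to be index, cache, or backup trees, that’s exactly the stuff you can trim.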

The archives are typically ZIP and RAR archives, FWIW.

And what’s with finding a buttmess of .goutputstream files on my Linux system? Everything I find on the forums says it’s more a bug than anything else, so it’s okay to unload them like I’ve been doing.
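For what it’s worth, those are GLib/GIO temp files: apps doing a “safe save” write to .goutputstream-XXXXXX and then rename it over the original, and a crash mid-save strands the temp file. If you’re going to sweep them, something like this dry-run-by-default sketch (the seven-day age threshold is an arbitrary choice of mine) is safer than a blind delete:

```python
import sys
import time
from pathlib import Path

# Leftover GIO temp files (.goutputstream-XXXXXX) are droppings from
# interrupted "safe save" operations. Only delete ones that have sat
# untouched for a while, in case an app is mid-write right now.
MAX_AGE_DAYS = 7

def sweep(root: Path, dry_run: bool = True) -> None:
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for stray in root.rglob(".goutputstream-*"):
        try:
            if stray.is_file() and stray.stat().st_mtime < cutoff:
                print(("would remove: " if dry_run else "removing: ") + str(stray))
                if not dry_run:
                    stray.unlink()
        except OSError:
            pass  # vanished or unreadable; skip it

if __name__ == "__main__":
    sweep(Path(sys.argv[1]) if len(sys.argv) > 1 else Path.home())
```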

I thought the whole point of the optical drive was to give each user his own space…