#tl;dr THE ARTICLE SAYS THERE IS NO PROBLEM, THE MILLIONS OF TINY FILES ARE ONE OF THE AWESOME BENEFITS OF LINUX/MAC
So, you agree with the article, then.
citation, please.
Those zillions of stray files don't seem to consume much space, but tools like SuperDuper! and Disk Utility "verify disk" need to process each and every one of them in one way or another.
You are probably archiving to a different filesystem; ._filename is the resource fork for filename on a non-HFS volume. I'm not sure if the resource fork gets pulled out the same way upon creating a tarball or compressed archive. I can't even find anything with a resource fork on this recentish clean install.
EDIT: Not to be confused with .DS_Store, which is the Finder's metadata for a folder's default view, window position, and so forth.
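If you're curious how much (or how little) space those stray sidecars actually eat, here's a quick Python sketch that tallies them. The /Volumes/MyShare path is just a placeholder; point it at whatever non-HFS share or backup tree you care about.

```python
#!/usr/bin/env python3
"""Rough tally of AppleDouble sidecars (._*) and .DS_Store files.

The ROOT path is hypothetical; change it to the tree you want scanned.
"""
import os

ROOT = "/Volumes/MyShare"  # placeholder: any non-HFS volume or backup tree

count = 0
total_bytes = 0
for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        if name.startswith("._") or name == ".DS_Store":
            path = os.path.join(dirpath, name)
            try:
                total_bytes += os.path.getsize(path)
            except OSError:
                continue  # file vanished or unreadable; skip it
            count += 1

print(f"{count} stray metadata files, {total_bytes / 1024:.0f} KiB total")
```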
I've heard that the computer itself is actually made of trillions of tiny molecules. Is that true? Golly, how am I ever going to keep track of all that? Thanks, Apple. (sad trombone)
(couldn't find an animated one)
You know what would be great? If you Windows Fanbois weren't so quick to assume that everything Mac was bad. Steve Jobs designed those millions of tiny flies because He knew that millions of tiny flies would offer a better user experience for everyone except Neanderthals who are stuck on M$ Wind0wz. You think you don't want millions of tiny flies, but the moment Windows 10 comes out with millions of tiny flies you're going to be loving them. Well, don't forget to thank Steve Jobs. In heaven.
Each of those flies is a reincarnation of Steve Jobs.
I asked for a citation of where the article author believes the number of files causes slowdown.
But you provided a quote from "Doug":
Doug Eldred writes in with a concern about a form of file bloat, but not about bloated sizes. Rather, the sheer number of items that seem to appear on his drive.
[…]
Doug continues:
Those zillions of stray files don't seem to consume much space, but tools like SuperDuper! and Disk Utility "verify disk" need to process each and every one of them in one way or another.
So, back in context, it's a question that is being asked by a reader, not the author.
And, lo! There is an answer from the author that lays out his opinion on files causing slowdowns:
To my recollection and experience, the number of files shouldn't contribute to any system slowdowns, because they're inert unless needed. [emphasis added]
The article goes on to play devil's advocate and explore whether or not those files could actually cause slowdowns. TL;DR: they don't.
OK, who prefers a monolithic registry file that is impossible to back up, manage under change control, clean up, or repair after corruption?
meh, it pays the mortgage.
After fiddling with OSes since CP/M, DOS, Apple IIs, etc., I have gotten to the point where I know they all suck, just not at the same things. Sadly, I am too familiar with the bowels of the Windows registry.
Bundling of the application, including configuration metadata and selected dependencies, to support portability?
It is a NeXT design. Yeah! NeXT!
OSX was - in 1998/9 - really in most regards a successive version of NeXTStep, with MacOS compatibility and UI conventions.
This Mac compatibility was partially achieved by the App Container bundle. It also assisted in bridging NeXTStep hardware platforms from the Motorola 680x0 to PowerPC, SPARC, and even Intel. Some of these techniques were repeated in MacOS, where a single "Fat Binary" would run on multiple OS frameworks or even processor platforms, from a single compile by the developer.
NeXT! Yeah!
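To the bundling point: the .app bundle is literally a folder that carries its own metadata along with it, which is a big part of what made it portable. A minimal Python sketch (the bundle path is made up; any .app will do) that reads that metadata with nothing but the standard library:

```python
#!/usr/bin/env python3
"""Peek at the self-describing metadata inside an app bundle."""
import plistlib
from pathlib import Path

bundle = Path("/Applications/Example.app")  # hypothetical bundle path
info_plist = bundle / "Contents" / "Info.plist"

with info_plist.open("rb") as fh:
    info = plistlib.load(fh)

# A few of the keys that make the bundle self-describing and portable.
for key in ("CFBundleIdentifier", "CFBundleExecutable", "CFBundleShortVersionString"):
    print(f"{key}: {info.get(key, '(not set)')}")
```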
I really liked the NeXT, though I only got to do cursory playing around with them in the university lab.
System restore?
This behavior is really endearing when the Macs start doing it all over your non-HFS fileserver, let me tell you.
Man. I had "unauthorized" access to the Media Lab NeXT Cube for almost two years. Problem? Only a 1 MB disk quota… on 1 GB optical.
This ran TIA for me - as my first regular Internet pipe at home - via shell to the NeXT. 1993.
Agreed. The correct answer is knowing when to use a database and when to use lots of little files, and without knowing a lot more about their architecture, I'm not going to argue that Spotlight or Time Capsule do things WRONG. But really, HFS+ can hold more than 2 BILLION files in a single directory, so a few million total should be no big deal.
Some trimming of what Spotlight is indexing and how far back you keep backups might go a very long way toward improving performance, regardless of the DB vs. filesystem strategy. See the sketch below for one way to find where the bulk of the files live.
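Before deciding what to exclude from Spotlight or your backups, it helps to see which folders actually hold the millions of files. A rough Python walk-and-count sketch; scanning the home directory is just an assumption, swap in whatever root you like:

```python
#!/usr/bin/env python3
"""Report which top-level folders hold the most files, as a guide for
what to exclude from indexing or backups."""
import os
from collections import Counter

ROOT = os.path.expanduser("~")  # assumption: scan the home directory
counts = Counter()

for dirpath, dirnames, filenames in os.walk(ROOT):
    # Attribute every file to the top-level folder it lives under.
    rel = os.path.relpath(dirpath, ROOT)
    top = rel.split(os.sep)[0] if rel != "." else "(root itself)"
    counts[top] += len(filenames)

for folder, n in counts.most_common(10):
    print(f"{n:>10,}  {folder}")
```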
The archives are typically ZIP and RAR archives, fwiw.
And what's with finding a buttmess of .goutputstream files in my Linux system? Everything I find on the forums says that it's more a bug than anything else, so it's okay to unload them like I've been doing.
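FWIW, here's a tiny Python sketch to round up those .goutputstream-* leftovers before unloading them. It's a dry run by default (it only prints what it finds), and the $HOME search root is just an assumption about where they pile up:

```python
#!/usr/bin/env python3
"""List leftover .goutputstream-* temp files so you can review them.

Dry run: prints candidates rather than deleting them; whether they are
safe to remove is your call.
"""
from pathlib import Path

ROOT = Path.home()  # assumption: they tend to pile up under $HOME

for path in ROOT.rglob(".goutputstream-*"):
    if path.is_file():
        print(path)
        # path.unlink()  # uncomment once you're happy with the list
```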
I thought the whole point of the optical drive was to give each user his own space…