This is the fastest way to alphabetize 1,000+ books (or anything else)

Somewhere in that search was a link to this beautiful object:

7 Likes

Sometimes that’s hard.

Cory’s mentioned the Dragaera series by Steven Brust in the past; most of the ones I own are collected in two- or three-book omnibuses.

The very first, The Book of Jhereg, collects the first three books published, or the fourth, second, and fifth books chronologically. The second omnibus has books one and six. The third has books eight and nine, the fourth has two and ten. Then we go to eleven and seven. The last few haven’t been collected in omnibuses yet, but would be twelve, ???, and fourteen (with thirteen being the next book to be written). I list Tiassa as ??? because it has multiple stories, one of which takes place definitively before book four, and one definitively after book eleven (in a foreword, the author says where each of the stories in that book belongs within the chronology, but his placement doesn’t make sense). To add to the confusion, part of Dragon (book two) takes place after Yendi (book three), and part of Jhereg (book four) takes place before Taltos (book one)…

…I put them in publication order, and probably would do so even if I didn’t have them in omnibus format, because trying to reconcile it otherwise would probably lead to a nervous breakdown.

When it makes sense, though (like the Dune, Prelude to Dune, and Legends of Dune series, or the Star Wars Expanded Universe), I go chronological within the universe.

2 Likes

Is that real, or just an artist’s conception?

I worked in a library shelving books as my after-school job in high school, so I did this literally every day (though not thousands of books at a time). I then worked as a part-time file clerk during college, where I studied computer programming.

Radix Sort was always my preferred method for alphabetizing, but then I’m not a computer.

2 Likes

I might consider this grounds for divorce.

I don’t think I’m even kidding. Or, at least, I would never have considered marrying the sort of person who thinks that the most important purpose of a book is to be decorative. And now I want to apologize to you for saying mean things about your wife, but still. :frowning:

Edit:

Well, that’s something, at least.

…never mind.

1 Like

Less to do with decoration and more to do with her background in print
design.

1 Like

An artist made 3 copies. So an artist’s conception, but real, yet unavailable.

2 Likes

Neat. So this is an artist’s conception of something that exists but is not a commercial product?

I had a pad of scratch paper that followed this same concept, but was not as cool looking. Nothing ever looks as cool as the artist’s conception.

If the goal is a pretty paper sculpture with no practical use, I would like to see this as an HSV cone.

2 Likes

Oh, it’s photos of something that was actually made. Sorry if that was unclear.

Mine didn’t look as cool :frowning:

Another article by the quicksort mafia ;->

Now, as this video is an allegory meant to demonstrate sorting algorithms, I’m not going to worry about the best way to sort books, but just worry about sorting in computing systems.

And, there are exceptions, but in general, the best way to sort given fixed-size keys is radix sort. Which lots of folks here have mentioned. But, given arbitrary-length keys, the best way to sort is usually merge sort. In particular, the natural merge variant:

  • Split your pile into partitions, where each partition is sorted.
  • If there’s only one partition, you are done.
  • Now merge adjacent partitions.
  • Repeat
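In Python, the steps above might be sketched like this (a minimal illustration of natural merge sort, not any poster’s actual code):

```python
def merge(a, b):
    # Standard two-way merge: always take the smaller front element.
    out, ia, ib = [], 0, 0
    while ia < len(a) and ib < len(b):
        if a[ia] <= b[ib]:
            out.append(a[ia]); ia += 1
        else:
            out.append(b[ib]); ib += 1
    out.extend(a[ia:])
    out.extend(b[ib:])
    return out

def natural_merge_sort(items):
    # Phase 1: split the pile into runs (partitions) that are already sorted.
    runs, i = [], 0
    while i < len(items):
        j = i + 1
        while j < len(items) and items[j - 1] <= items[j]:
            j += 1
        runs.append(items[i:j])
        i = j
    # Phase 2: repeatedly merge adjacent runs until only one remains.
    while len(runs) > 1:
        merged = [merge(runs[k], runs[k + 1]) for k in range(0, len(runs) - 1, 2)]
        if len(runs) % 2:
            merged.append(runs[-1])  # odd run out waits for the next pass
        runs = merged
    return runs[0] if runs else []
```

Note that already-sorted input produces a single run, so the whole thing finishes in one linear pass.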

O(N*logN) worst case, and significantly better best case. In particular, with nearly sorted inputs, quicksort with a naive pivot choice is going to be O(N^2). Merge sort O(N).

And, this case happens a lot. Imagine I have one huge pile of books that was previously sorted, call it “the pile of books in the library”. Imagine a second small pile of books that were returned by users today. Quicksort would be a trainwreck. Mergesort handles it effortlessly.
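That library-plus-returns scenario is just a single merge of two already-sorted runs, and Python’s standard library exposes exactly that operation (titles here are invented for illustration):

```python
import heapq

# Two already-sorted piles: the shelved library and today's returns.
library = ["Asimov", "Brust", "Herbert", "Tolkien"]
returns = ["Banks", "Le Guin"]

# heapq.merge does one linear pass over both; no re-sorting of the big pile.
shelved = list(heapq.merge(library, returns))
```

One pass, touching each book once, which is the whole appeal.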

Now, let’s say I have a huge queue of randomly ordered books, in a line. Well, to quicksort it, I have to run all over the place in the queue. I spend a lot of time running from place to place. Quicksort is kind of like that with pages of memory and cache lines. Now, merge sort alternates, taking one pass through the queue splitting into partitions, and then merging. For both phases, I’m checking books in order. So, I can put them onto conveyor belts.

I need three belts. Call them “big queue of books”, “partition 1”, and “partition 2”. For the partition phase, I take books off the big belt and move them to the partition 1 belt while they are in order. Then I move them from the big belt to the partition 2 belt. I now have two partitions. So, I now go through the two partition belts, moving the lexicographically smaller front book to the end of the big belt. As the librarian, I only ever look at three belt positions, in order: two for the incoming partitions, one for the merged result. No running around. Okay, so I lied, this does actually work for books.

More importantly, it works for data processed by computers. I can sort data that is far larger than my computer’s memory. I can do it requiring 3 pages of working set, and 3 active cache lines. I can even parallelize the work among arbitrarily many computers, scaling nearly linearly with the number of nodes.

Hell, it’s even simpler to explain how merge sort works.

Merge sort beats the snot out of quicksort in almost every way. So why do we continue to tell the myth of quicksort …

2 Likes

Fastest way to alphabetize 1,000+ books? Dump the problem on a librarian, done!

3 Likes

Because mergesort requires more memory (linear in the size of the array being sorted) and quicksort has better cache locality. Quicksort’s worst case is indeed n^2, but it only gets close to that when you choose bad pivots. (You can choose pivots randomly to get a good expected running time.) Most people use quicksort because it’s usually faster in practice (especially when you don’t know the composition of the data ahead of time).
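As a quick sketch of the random-pivot idea (an illustration only, not any particular library’s implementation):

```python
import random

def quicksort(items):
    # Quicksort with a random pivot: bad pivots still exist, but picking
    # them repeatedly becomes vanishingly unlikely on any input, so the
    # expected running time is O(n log n) even for sorted data.
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    less    = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

(This version trades the in-place partitioning, and thus the cache-locality advantage, for brevity.)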

Additionally, many standard-library sorts use “introsort”, which guards against quicksort’s worst case by tracking recursion depth and switching to heapsort when it gets too deep.

1 Like

That’s good; weirdly, I only have books about xylem and xylophones.

1 Like

So, going back to the Narnia question: is the last chapter of The Lion, the Witch and the Wardrobe cut off the book so The Horse and His Boy can be inserted between that and the rest of the book?

Sometimes you just have to abandon logic, know your books and where to reach for them.

3 Likes

Well yes, with a computer, 1,000 items take trivial time to sort. But of course many index files are several orders of magnitude bigger. The keyword files created when you index the full text of documents, for instance.

The external mind palace method.

2 Likes

I shall try to remember that.

3 Likes

2 Likes