How search engines make us feel smarter than we really are

Then how do you explain memories?

Since when does information storage require discrete units? Even digital storage systems use fuzzy, noisy analog as a backend, and the “discrete units” are a hack, sometimes a crude one, to convert an imperfect, non-discrete signal into well-defined ones and zeroes.

Everything can store information. Look at forensic analysis. Or at paleontology; you can find out what a given person ate at each stage of life from stable isotope analysis of hair and of the layers of bones and teeth. (And if you were able to control the person’s food intake, you could encode a few bits of data that way. Though there are better ways.)

Why do you think information storage is only digital, only discrete?

Memories are manufactured by the brain itself. They are analogies loosely based upon the brain’s own interpretation of stimuli.

Aren’t you contradicting yourself when you say that non-discrete signals are imperfect? It is merely a different kind of signal domain. I’d say that “since when” is whenever the stored data is going to be processed in binary. At some stage, it goes from one domain to being approximated in another.

I think that’s the difference between data and information. Like “knowledge”, data become information only when interpreted by somebody. Everything is data, but not everything is intentional communication. It is this process of interpretation which distinguishes the information in a signal from the noise.

Because “storage” means that the data exists separately from its substrate, which is true of programmable binary systems. The condition of the system can be abstracted as a state, which can be stored, examined, halted, reloaded, etc. In analog, all you can store is a static snapshot. The substrate is the signal, which does not exist separately, unless we want to get extremely reductive. We can measure the activity of thought and memory, but even recording it does not comprise a functional model of the process.

I don’t think it’s a big deal, really. But many people seem to be deeply invested in the idea of making “easy” binary representations of extremely nonlinear, recursive processes, and they get worked up about it without saying why. I don’t doubt that people can benefit from external memory, but it seems obviously more practical to shape the tools to accommodate cognition, rather than vice versa.

Okay. Where are they stored if brains “are not storage at all”?

No. Analog signals are subject to errors introduced from left and right. Digital signals, in turn, tend to have thresholds: in TTL logic, zero is anything below 0.8 V and one is anything above 2.0 V. The space in between is undefined, a “hazardous state”; manufacturing tolerances of the chips make the actual thresholds a little different for each die, and they also move with temperature, aging, and radiation damage. So you have some pretty good tolerance to analog errors (and let’s not forget, in the actual physical implementation every signal is analog in nature; keep this in mind when chasing weird errors).
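Something like this toy sketch captures the idea (assuming the classic TTL input thresholds mentioned above; real parts vary die to die):

```python
# Minimal sketch: recovering digital levels from noisy analog voltages.
# Thresholds are the classic TTL input spec (V_IL = 0.8 V, V_IH = 2.0 V);
# real chips differ die to die and drift with temperature, aging, radiation.

V_IL = 0.8   # anything at or below this reads as logic 0
V_IH = 2.0   # anything at or above this reads as logic 1

def read_level(volts: float) -> str:
    """Interpret an analog voltage as a digital level."""
    if volts <= V_IL:
        return "0"
    if volts >= V_IH:
        return "1"
    return "undefined"   # the hazardous in-between region

# A noisy 'high' still reads cleanly as 1; a mid-rail glitch does not.
for v in (0.1, 0.65, 1.4, 2.7, 3.3):
    print(f"{v:0.2f} V -> {read_level(v)}")
```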

Digital is a more practical signal domain to work with. You don’t have to bother with a billion things that plague the analog world, at least if you aren’t pushing it too hard.

Since it reaches the boundary of a digital circuit: whether via some analog decoder (for magnetically stored signals), an ADC chip which quantizes it, a Schmitt gate, or a comparator. So yes.

The actual threshold where you go from analog to digital or back depends on what is easiest/cheapest/otherwise best for a given implementation. See software-defined radio: the analog-digital boundary was pushed right to the baseband, and all the further signal processing happens in the digital domain, with its disadvantages (needs fast computation instead of just a few parts for a simple demodulator) and advantages (way more flexible, allows for exotic demodulation schemes that’d be a nightmare to implement in analog).
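A rough sketch of that domain crossing, with an arbitrary 8-bit quantizer and a made-up 1 kHz test tone standing in for whatever the front end hands over:

```python
import math

# Rough sketch of the analog-to-digital boundary: sample a waveform and
# quantize it into 8-bit codes, as an ADC at the edge of the circuit would.
# The tone frequency, sample rate, and bit depth are arbitrary examples.

SAMPLE_RATE = 48_000      # samples per second
BITS = 8
LEVELS = 2 ** BITS        # 256 discrete codes

def adc(sample: float) -> int:
    """Map an analog value in [-1.0, 1.0] onto an n-bit code."""
    code = int((sample + 1.0) / 2.0 * (LEVELS - 1))
    return max(0, min(LEVELS - 1, code))   # clamp at the rails

# One millisecond of a 1 kHz tone, quantized into discrete codes.
codes = [adc(math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE))
         for n in range(SAMPLE_RATE // 1000)]
print(codes)
```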

Handwaving philosophy. The information is stored in place whether somebody looks at it or not. The ice will have its layers there, containing the data about the climate of the past, regardless of whether somebody comes and drills a core sample and analyzes it.

Storage can have multiple forms. Analog storage, for example, in the form of the orientation of magnetic domains on a cassette tape. Or the shape of a groove on an LP record.

In extremo, we can also consider signal paths themselves as a temporary FIFO storage buffer, because light is a lazy bitch. For homework, estimate how much data can fit into a 100-meter gigabit Ethernet cable.
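A back-of-the-envelope pass at that homework, assuming a velocity factor of roughly 0.66 for twisted pair:

```python
# Back-of-the-envelope: how many bits are "in flight" in a 100-meter
# gigabit Ethernet cable at any instant. Assumes a velocity factor of
# about 0.66 for twisted pair, which is a typical figure, not a spec.

C = 299_792_458           # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66    # assumed propagation speed relative to c
BIT_RATE = 1e9            # 1 Gbit/s
LENGTH = 100.0            # meters

propagation_time = LENGTH / (C * VELOCITY_FACTOR)   # roughly half a microsecond
bits_in_flight = BIT_RATE * propagation_time

print(f"{propagation_time * 1e9:.0f} ns of flight time")
print(f"~{bits_in_flight:.0f} bits (~{bits_in_flight / 8:.0f} bytes) in the cable")
```

On those assumptions it comes out to about five hundred bits, so the copper holds just shy of one minimum-size Ethernet frame at any given moment.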

The data are always bound to some substrate. It can be the magnetic needles on a mylar foil, a groove pressed into plastic, distribution patterns of ink on paper, number of electrons injected into a floating gate, state of a flip-flop, or an electromagnetic wave or a bunch of photons in a fiber or even in free space. You can transfer them from one to another quite wildly but you cannot decouple them completely.

In digital, what you get is also a static snapshot. Just better determined, because you have finite quantization of the states.

You get the same in analog, except the states are finer-grained. Enter chaos theory.

…and you can get a pretty good highly deterministic chaos even in the digital domain.

Because digital tech is easier to manufacture. The processes are more tolerant of slight divergences. Making a couple tens of thousands of gates that work with the wide tolerances of the digital domain is easier than making a single highly linear, low-noise amplifier. See the insane prices of even simple op-amps once you get to the higher end of the specs, where the parameters of every single die have to be individually trimmed by lasers.

What about taking the middle ground? Do most of the processing in easy digital, and use the difficult analog just for the interfacing?

After all, neural systems are fairly digital too: the neurons sit there until the sum of the inputs gets above a threshold, simply put, and then fire an impulse (and then sit idle for a while, recharging the membrane potential and unable to react to further stimuli). In a way it is a digital pulse code.
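That “pulse code” reading can be caricatured in a few lines; this is a toy leaky integrate-and-fire model with made-up constants, not biology:

```python
# Toy leaky integrate-and-fire neuron, matching the picture above:
# sum the inputs, fire when a threshold is crossed, then ignore stimuli
# during a refractory period. All constants are illustrative, not biological.

THRESHOLD = 1.0
LEAK = 0.9            # the membrane potential decays a bit each step
REFRACTORY_STEPS = 3  # steps spent "recharging" after a spike

def run(inputs):
    potential, refractory, spikes = 0.0, 0, []
    for x in inputs:
        if refractory > 0:            # still recovering: input is ignored
            refractory -= 1
            spikes.append(0)
            continue
        potential = potential * LEAK + x
        if potential >= THRESHOLD:    # threshold crossed: emit a pulse
            spikes.append(1)
            potential = 0.0
            refractory = REFRACTORY_STEPS
        else:
            spikes.append(0)
    return spikes

print(run([0.3, 0.4, 0.5, 0.6, 0.2, 0.1, 0.9, 0.9, 0.1]))
```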

I’ve considered building an NMR, of all things… I almost feel like it’s easier. There is open-source NMR Fourier-transform software out there. The cryogenics is the hardest and most expensive part, but the rest of the machinery is pretty minimal in terms of moving parts. The big challenge is really getting the magnetic field strength where it needs to be. The radio frequency generator and detector shouldn’t be that hard either. I’m not saying it would be a good, high-resolution machine. Maybe only a 60 MHz machine. I dunno, I’ve only just started my journey into physical chem. I’m probably talking out of my ass.

This is something I’ve finally learned to do. There’s gold in them there hills.

I cannot do this and finish the book. Too distracting. I find myself taking notes and making a list of things to search later. Reading James Mahaffey’s book on nuclear accidents would have been an ordeal otherwise.

True, but (and I’m getting off topic here) I’ve been thinking about desert island knowledge lately. Or prison knowledge. I feel like the future is going to be about information in a way it never has been before, and I wonder if we’re going to create classes of people based on the kinds of information we allow them to access. It may come down to being able to memorize the information while you have limited access to it. I already can’t print out segments of books from Amazon that I paid for unless I unlock them with third party software. The way the dual trends of copyright and regulatory capture have been going, I’m not convinced that this bounty will last.

Was thinking about the same, too.

The cryogenics could be rigged from a Stirling cryocooler. That thing on its own could be a pretty nice open-source project.

What about a Halbach array of neodymium magnets?

Here we can leverage either some software-defined radio board (maybe even the RTL-SDR could work here? Would 2.5 megasamples/second on the baseband be enough?) or a digital oscilloscope that we could trigger on demand and then download its memory for analysis.
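A rough feasibility check, assuming protons and a field around 1.4 T (the ballpark of a classic 60 MHz machine), with the FID mixed down to baseband before it ever reaches the SDR; the field strength and the 15 ppm spectral width are just assumed numbers:

```python
# Rough feasibility check: what frequency the receiver has to handle for a
# given field, and whether a ~2.4 Msps baseband covers the spectral width.
# Assumes protons (gamma/2pi ~ 42.58 MHz/T) and an assumed 1.4 T magnet,
# roughly the field of a classic 60 MHz machine.

GAMMA_1H = 42.58e6        # 1H gyromagnetic ratio, Hz per tesla
FIELD = 1.4               # assumed magnet field, tesla

larmor = GAMMA_1H * FIELD                 # carrier frequency the probe sees
spectral_width_ppm = 15                   # generous proton chemical-shift range
spectral_width_hz = larmor * spectral_width_ppm * 1e-6

print(f"Larmor frequency: {larmor / 1e6:.1f} MHz")
print(f"Spectral width:   {spectral_width_hz / 1e3:.2f} kHz")
print(f"Fits in a 2.4 Msps baseband: {spectral_width_hz < 2.4e6 / 2}")
```

On those assumptions the interesting signal sits in a band under a kilohertz wide around the carrier, so the baseband sample rate looks like the least of the problems.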

That’s one fun book; I managed to score and read it a couple of months ago. And yes, it slows the reading down. But a book that’s worth reading will suck the attention back.

In turn, I was thinking about low-power high-capacity easy-to-duplicate library systems. For desert islands or post-apocalypse settings.

(Side thought. Desert island… isn’t that called “oasis”? :stuck_out_tongue: )

“Information we allow them to access”? That sounds quite dystopian. I’d say such approaches have to be fought in a decentralized way, using simple off-the-shelf technologies and massive civil disobedience.

That’s certainly handy for liberating data from classified sources.

That’s what the third-party software is for. The alternative is scanning the reader screen.

The more restrictions there are, the fewer people will be willing to obey. I grew up in the Age Without the Internet; there were restricted-access data, and the availability of even simple copying machines was grossly limited, but still there was samizdat available, and magnitizdat for audiobook versions, and copies of banned music gramophone LPs made on old x-ray film…

Even if the Net is fully monitored, with today’s miniaturization of electronics the physical flows of data will be VERY difficult to monitor.

The Stasi did not have much success against samizdat. The KGB failed. Other services failed. Today they have better toys, but we have 'em too.

I’ve considered the possibility of suspending fine paramagnetic or ferromagnetic material in the sample (or a solid core, which is probably more realistic) to increase the magnetic permeability. I think that might create a strength-of-materials issue, but I’ve been speculating on it lately. I just haven’t had the time, with all of my studies, to actually sit down and do the math to see if it would give the kind of tesla boost I’m looking for.

This article is telling me that I’m being made less smart because I don’t know things anymore thanks to Google. Let me tell you, I didn’t know things way before Google and I seem to have done just fine with not knowing things. Don’t we nod along with Socrates/Plato when he says that he is wise because he at least knows that he knows nothing?

The only “damning” part is the idea that people think they are still smart when they don’t have access to Google. But maybe they are still smart when they don’t have access to Google. Less space for knowing facts, more space for thinking things through. I’m pretty sure the brain still has the same number of neurons; are all of those neurons that would have been storing the average length and weight of a moose just sitting back and having a beer instead?

Come on McRaney. You long ago proved I am not so smart. Quit rubbing it in.

Trust me on this one: you only think you want to watch it. It’s one of the few movies I’ve walked out of, not because I was offended, but because there had to be a better use of my time. If you take the trouble to own this film, I predict you’ll regret it.

This research seems to parallel that done with rich people, who habitually underestimate how important their families and their connections are to their wealth. They also overestimate their business acumen in the same way.

I’m fascinated to imagine what it would look like if we could outgrow these biases, both with money and with information.

I waited for it on TV and quite loved it. The over-the-top cyberpunkish industrialish design is worth it on its own.

I think there is a parallel there, but in the parallel the wealth is analogous to the intelligence. That is, contrary to the headline, I think we probably actually are smarter because of search engines, just like rich people are actually rich. We’re almost certainly mistaken about why, though. I think we have a bias towards thinking that intelligence is something that happens entirely in our heads, and that world view requires a lot of convolutions that simply accepting that some part of our thinking takes place in the world around us doesn’t.

That’s exactly what I meant; I agree that the headline misses the point. Ultimately, the only real way to gauge intelligence in a system is to look at outcomes. The internet likely makes us smarter in the aggregate. More clever, anyway. It remains to be seen whether it makes us any wiser. I have my doubts. Back when I was taking computer science, much was made of the difference between information and knowledge. I kind of wish that distinction hadn’t gotten lost somewhere.
