How search engines make us feel smarter than we really are

Is it? Really? I own it, somewhere. 4500 miles away. Haven’t read it in forever…

I should go back and reread old Gibson. But I have loads of stuff I haven’t read.

3 Likes

Not just recall, please: the ability to read (okay, download and index) books in a couple of milliseconds per page, with perfect recall and perfect association. Possibly the ability to query other computers without having to bother with a user interface.

Where is the nearest illegal neuroclinic that provides such mods?

With such tech, who would care about philosophy?
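The download-and-index half of that wishlist is already mundane software. A minimal sketch of the idea in Python, an inverted index with exact ("perfect recall") lookup; the sample pages and function names are just illustrative:

```python
from collections import defaultdict

def build_index(pages):
    """Map each word to the set of page numbers it appears on."""
    index = defaultdict(set)
    for page_no, text in enumerate(pages, start=1):
        for word in text.lower().split():
            index[word].add(page_no)
    return index

def query(index, *words):
    """Pages containing every queried word: exact recall, no fuzziness."""
    sets = [index.get(w.lower(), set()) for w in words]
    return sorted(set.intersection(*sets)) if sets else []

# Hypothetical three-page "book"
pages = [
    "the sprawl was a long strange way home",
    "ice and black ice in the matrix",
    "home is where the deck is",
]
idx = build_index(pages)
print(query(idx, "home"))   # pages mentioning "home": [1, 3]
```

The "perfect association" part is the hard bit; this only gives perfect retrieval of what was explicitly indexed.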

3 Likes

Is it? I can’t really remember… ’twas a long time ago. But as crass suggestions go, isn’t imploring someone to ‘read the book’ right at the top of the list?

See @shaddack’s comments below (above?) about hoovering up text. :slight_smile:

1 Like

With such tech, wouldn’t the philosophical discussions concerning its usage be much more interesting?

Even with such tech, your life is of limited length. Do you want to spend it talking, or making stuff?

If you try to think about the “soft” things too much, you end up with a lot of contradicting requirements and a need to assign weight coefficients to them by some criteria. You get a smorgasbord of philosophies signed by big names, each offering a different ready-made framework, and in the end you pick the one you like the most anyway. So just do what you wanted to do at the beginning and save yourself the effort with the big names. A quagmire of lost time lies there. Better to spend it in the lab.

1 Like

Was posting this as an edit above but here it is:

Like, I think most people assume there would end up being some kind of a divide between the wetware (Johnny!) and the hardware. So if the hardware got turned off or damaged or something, you would be left back at square one, but surely the brain matter would be processing all this information anyway. Would not the information percolate through into the brain matter, maybe even more efficiently than regular memory storage?

I guess that would be down to the interlinking technology. Memory storage appears to be distributed throughout the brain. And perhaps even reconstructible after damage given proper therapy. So we are probably a long way off from being able to properly interface the external memory with our actual memory storing functions in the brain. I imagine we’ll get around this by using some kind of natural language interface that interacts with particular regions using impulses that we learn to interpret but it sure would be nice if we could just download the memory in the exact form it would be processed/stored as.


And in answer to your question, I think such discussions, especially those elevated by the advent of such technology, would allow us to encompass the idiomatic penetration of the technology and to dream up even newer ways of interacting with it.

Just because we can’t imagine what such philosophy would look like doesn’t mean that it wouldn’t be that much more efficacious and lead to more advanced ideas and interpenetration of the technology with the biology. Thinking doesn’t have to stop just because you get smarter.

Not necessarily. The dividing line can be pretty blurred. The divide is assumed to be something like electrode arrays, but it could just as well be nanochips (see for example molecular electronics) using LEDs or lasers to interact with neurons gene-modded to be light-sensitive (optogenetics, a nice modern technique that gets rid of messy electrodes and their issues), and the chips could even be hosted within the neurons themselves as artificial organelles. Power can be delivered either by on-chip glucose-burning artificial enzymes (see glucose fuel cells), or by e.g. light, using infrared lasers; these can also double for cross-skull communication, eliminating the need for a dataport and the associated skin penetration. The tiny size of the chips and the lack of microwave-range electromagnetic waves makes them hardened against EMP (and EMI) from the outside.

If sufficiently distributed through the brain, it may take a complex cascade failure to cause significant damage. The systems have to be as redundant as possible.

That’s the point!

Yup.

And if we get familiar enough with the process of the storage, we may even inject specific memories. E.g. a memorized book. There are savant-class people with such abilities. They could be copied.

Depends. Sometimes what looks complex may be pretty simple. (Sometimes not. More research needed.)

Why natural language? A query language with some structure may be more efficient; natural languages are full of ambiguities, and one global language is easier to maintain than several. Maybe for low-level, initial systems I’d offer a small handful of natural languages to choose from, but steer the users quickly toward the one standard. Which may be an English-based dialect, or some constructed language with a logical structure. I have a hunch that, once mastered, it’d be faster for the user to deal with, and put less load on the computer back-end.

Assuming the self-embedded nanochip arrays, we can potentially simulate this. If all the chips can communicate with the Outside (or with an in-skull regular-architecture computer), they can simulate the whole-brain patterns of memories being retrieved and of the associations and inject the ones from said Outside. Same for memory storage, sense the brain-induced patterns and report them to the host computer.

For superficial coffee talk, yes. These toy-thinking sessions can yield some interesting ideas while not eating much time. The return on investment gets progressively lower as you get into Deeper Philosophy.

1 Like

Yeah, and we would have to learn to interpret those impulses into memories, whereas if we understood how memory is processed and stored in the brain, we could bypass the interpretation step and just insert them whole and already perfectly formed. It would be the difference between very quickly listening to a stream of information, like speed reading, and just recalling a memory from an external source.

Unless it was hacked and turned off by… oh, I don’t know, criminals or criminally religious people who believed it was against god’s plan or some such nonsense.

I don’t disagree that the interaction with very fast impulses would be faster than reading or talking or whatever, but see my point about natural memory formation/storage/recall. Surely a technology that leveraged a completely natural method would be even faster.

I’m fairly sure those abilities are down to concentration of resources. Really, if everyone was able to turn off or ignore prevalent functionality in favour of one particular function, I think we all could do these things. I’ve talked at length elsewhere about how I feel like psychedelics do this for my own memory recall. You could be experimenting with this kind of stuff right now… or… y’know, however long it would take you to learn to make LSD or grow some mushrooms.

I’m assuming that people would want to learn things that aren’t always technical in nature. I imagine that there are such things as the poetry of programming etc., but if we are to maintain a cultural, memetic aspect to the enterprise, I think that the vagaries of natural language and the inherent condensing function of poetry and philosophy would maintain that cultural aspect. But I am not against the implementation of code-like language as well. And I imagine that learning such code-logic and extrapolating and developing hybrids of the two spheres would be an interesting avenue of research. I’m a big fan of logic; the ability to analyse natural language using logical tools would probably be the base from which such a technology/process would develop. :smile:

I hope so, but first we need to know what to model. Memory functionality is so poorly understood.

Hey! Toy-models are important! Pretty much every naive optimist the world over walks around with a version of consciousness that embodies such a description.

Perhaps I have a different view of philosophy than you. I always go back to my favourite hook in these matters. Philosophy is not for answering questions, it’s for knowing what questions to ask!

Science answers.

When learning about NMR, I spent a lot of time looking up spectra online and just learning to read those. Unfortunately, while I learned the shapes and patterns, I didn’t spend enough time memorizing ppm values, and that eventually came to bite me in the ass come exam time, when all I had were tables of peak data and no graphs. But I walked into it very confident. I ended up resorting to flashcards to learn to read spectra.

I realize exams aren’t always a reflection of how things are done in life, but the overconfidence was definitely Internet-based, and it definitely hurt.

Yes. Which is why I proposed the download/indexing earlier.

Or a govt that doesn’t want such tech on the streets. Or whoever. (Todo: read Amped.)

An externally induced cascade failure is still a cascade failure. It has to be taken into account during the design phase.

Depends. If it models neurons, maybe. If it relies on them, then hell no - those wet bags of cytoplasm are slower than molasses. A major bottleneck in processing that has to be designed around, and more and more functionality gradually shifted into silicon (or carbon nanotubes or nanoelectronics or whatever).

So provide more resources!

With limited resources, definitely. So a way to add more is desperately needed.

Todo…

I would call it “baggage” instead of “condensing function”; it adds ambiguities, and poetry in particular is a rather wordy and difficult way to get concepts through. Maybe my brain is poetry-incompatible… it can do (and sometimes likes) simple verses, but the “higher art” looks like a word jumble. Maybe a different concentration of resources?

I’d start with natural language for simple queries, constructed language for the advanced ones, and have a gradual way to learn. Low barrier to entry, and quick rewards for learning more.

Which we will see. More research needed. More research in imaging and interfacing tools needed. Focus on the tools today, as it will speed up the work tomorrow.

However, it seems to lack the part about when to stop asking, when it is clear that there will be no clear answer, and just start doing.

Good old NMR! Memories… Was it 1H, 13C, both, others?
NMR is easy-ish; IR is a messy nightmare in comparison.

Ouch! Well, that happens. (Could you draw the peaks from the tables? Make yourself the graph? It shouldn’t be that difficult; instead of those spiky lines you’d get something that looks rather like a mass spectrometry result, but you could draw an outline over it for easier visual recognition.)
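The peaks-from-tables trick is easy to automate. A hedged sketch: turning a peak table of (ppm, relative intensity) pairs into a crude text “stick spectrum”, plotted downfield-first like a conventional NMR trace. The peak values here are made up for illustration:

```python
def stick_spectrum(peaks, width=40):
    """Render (ppm, intensity) pairs as a crude ASCII stick plot,
    downfield (high ppm) first, like a conventional NMR trace."""
    max_i = max(i for _, i in peaks)
    lines = []
    for ppm, inten in sorted(peaks, reverse=True):
        # bar length proportional to relative intensity
        bar = "#" * max(1, round(width * inten / max_i))
        lines.append(f"{ppm:6.1f} ppm |{bar}")
    return "\n".join(lines)

# Hypothetical 13C peak table (ppm, relative intensity)
peaks = [(170.2, 0.3), (128.5, 1.0), (77.0, 0.5), (21.1, 0.8)]
print(stick_spectrum(peaks))
```

A proper plotting library would give a nicer graph, but even this shape-only view is enough for the visual pattern-matching described above.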

The same confidence can be based on paper resources. I have similar experiences, on a smaller scale, from the Age Before the Internet, and it illustrates the need to have more than one book, from more than one author, working on the topic in question in more than one style.

3 Likes

Hehe, I sensed you might be going that way. Obviously proper cyber-brain technology would be the endgame, but hopefully we can start with the external memory whilst we’re still locked into the wet stuff.

Yes!

Weird. I love IR. I can read IR in my sleep and I was one of the few students in my class who learned to read the fingerprint region.

It was DEPT 13C NMR. It didn’t help that we were doing 1H NMR for a while first, and 13C NMR basically puts the same functionality in the same places. It’s downfield and the ppm intervals are larger, but if you overlay 1H spectra on it, the functionality appears in the same relative zones. This was my downfall; I didn’t even really know the scale to draw the graph. 1H NMR had spoiled me there. Plus panic.

The first three results from Google called. They want you to know they’re all anyone uses. :smile:
My latest learning philosophy is to learn to remember details. I’ve discovered that I’m miles ahead of my peers in synthesis and analysis of ideas (and I don’t think I’m that smart, so it’s kind of scary) but that I have a brain like a sieve when it comes to long-term storage of details. (I suck at remembering song lyrics too.) It’s a real weakness when it comes to the issue of fluency in a topic. The Internet is so portable, your fluency becomes deceptive. I think a lot of the insightful discussion on this board would disappear if people couldn’t look things up on the Internet as they start to formulate ideas around a new topic or issue. Again, that ability to fake fluency is novel to the medium. You have your phone on you all the time. You can only have so many books on hand in an ambush.

Make me feel smarter than I actually am? Why that’s like trying to add 1 to infinity :smiley:

Kidding…jk…

1 Like

Hence start with the nanochips as artificial organelles for the neurons. Use that approach first for research, then for the early stages of interfacing.

(Thought… what about using artificial mitochondria, so ATP could be generated not just by burning glucose but also by e.g. light? Could have interesting applications in e.g. short-to-mid term brain tissue preservation, or lowering the requirements for blood flow which could counteract the increased energy demands by forcing the activity higher.)

I wish the components for the spectrometers were cheaper… Recently I stumbled over two-photon polymerization (2PP) of photoresins, which has about 100 nm accuracy. That’s enough for printing lenses and mirrors and diffraction gratings: a quarter of the wavelength of blue (okay, violet) light, which could be enough, and would definitely be enough for the infrared region. Cheap optical components -> much cheaper spectrometers.

Ouch. Though if you are used to the shapes, a scale-free graph could also do the job?

Panic sucks.

I don’t! I routinely go to 50 or sometimes even 100 deep. :stuck_out_tongue: That’s where the interesting crap is!

(…unless I get just a few links, all to paywalled papers. Growl.)

My high-school teachers complained that I was pulling too many overly fine details into the lessons. I complained back that without them it is more difficult to remember, as it then becomes a heap of factoids without some sense holding it together.

The details are what the internet is for.

The internalized knowledge is for being aware of them, and being able to fire the right query to get them.

Not always. It gives you more data for less effort. Think of the Internet as partially a swap partition for the brain.

Without having the structure of the problematics in your head, the understanding, the internet is just a lot of texts. The structure allows you to quickly zero to the part you need, when you need it.

I often read books with a laptop nearby (or on the laptop itself), firing off queries when something tickles my interest. (Or taking notes, often to Wikipedia; just a few hours ago it was the use of the NaK alloy in the SLAM missile as a hydraulic fluid.)
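The swap-partition analogy maps neatly onto a memoized lookup: a small local cache (the internalized part) in front of a slow external store (the Net). A minimal sketch; the store contents and names are hypothetical:

```python
from functools import lru_cache

# Simulated external knowledge store (the "swap partition")
EXTERNAL_STORE = {
    "NaK": "sodium-potassium alloy, liquid at room temperature",
    "SLAM": "Supersonic Low Altitude Missile, a Cold War US project",
}

fetches = {"count": 0}

@lru_cache(maxsize=32)          # the "internalized" part: a small local cache
def recall(term):
    """Serve from local memory if possible; otherwise hit the slow store."""
    fetches["count"] += 1       # counts only cache misses (external fetches)
    return EXTERNAL_STORE.get(term, "no idea; refine the query")

recall("NaK"); recall("NaK"); recall("SLAM")
print(fetches["count"])   # 2: the repeated "NaK" query was served locally
```

Which is the point made above: the internalized structure decides what to query; the external store merely holds the details.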

And it’d be less interesting.

It usually shows up quickly whether there’s real understanding, with just the facts being looked up, or whether the other side is using prefabricated arguments from the Net. At least in sci/tech it is easy-ish if you have at least partial understanding.

A lot of books can fit on a single 3.5-inch 8 TB disk, and solid-state capacities are growing fast and could offer even more spatial density. A decent topic-specific library could even be squeezed onto a larger SDXC card.
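A quick back-of-envelope check of the claim, assuming roughly 1 MB per plain-text book (an assumption; scanned PDFs run 10-100x larger):

```python
# Back-of-envelope: how many ~1 MB plain-text books fit on an 8 TB disk?
disk_tb = 8
bytes_per_book = 1_000_000            # assumed average plain-text book size
disk_bytes = disk_tb * 1_000_000_000_000
books = disk_bytes // bytes_per_book
print(f"{books:,} books")             # 8,000,000 books
```

Even a 1 TB SDXC card would hold about a million such books, so “a decent topic-specific library” is an understatement.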

If you feel smart, you are usually dumb. :stuck_out_tongue:

Usually, the more you know the less certain you are about things in general. (Except some very basics in specific contexts, e.g. the energy conservation in macroscopic space/time scales.)

The kind of idiot who does not know everything about their own mind. Which, thanks to Gödel’s theorem, we know includes all of us.

You’ve missed the point so utterly that I wonder if you listened to the podcast at all. It’s not that people explicitly think they know everything because they can look it up; it’s that they think they know more than they do because when they’ve had a question in the past, they’ve then found an answer, but have forgotten how often that finding the answer was done via the internet. And it’s not a new phenomenon: people in the past also thought they knew more than they did because they could ask someone else or look it up in their own books or at the library.

Why do we seem to insist on artificially limiting “knowing” to only the subset of information stored in the brain? Which is, to add insult to injury, an inherently unreliable storage platform.

These days you have an information overload; lots and lots of data at your fingertips. What you “know” should be counted from not just what you remember but from the sum of what is in your head and what you can reasonably quickly (and reliably) “swap in” from the external knowledge stores.

Because of the difficulty of measuring the use of external memory in a way that distinguishes one individual from another and can be reduced to a single number, I’d guess.

People are 99.999% the same person. They only assert that they are distinct because they are programmed to be egotistical, not because it reflects reality.

1 Like

Brains are not unreliable storage; they are not storage at all. I find that people who use digital tech a lot often try to apply its models to brains with a disregard for how brains actually work, which seems weird. There are no discrete units to read or write in brains, so there is no “storage” of any arbitrary state. But there is plenty of tech which works like that for those who like it. Still, doing so does nothing to change their own cognition.