Master the AI technology behind Siri and Alexa for just $39

The idea of using cloud voice recognition still seems odd … like building a national laundromat and everyone FedExing their dirty washing to it.

Voice recognition on PCs seemed to stall in the late 90s. I suspected a small cartel of patent-holding companies was behind the slow pace while they sold enterprise-level solutions. It never made sense that it didn’t keep pace with processor improvements, from 200 MHz single-core Pentiums with 128 MB of RAM to GHz multi-core CPUs with 16 GB+. Using the cloud to give a uniform experience on small devices sounds good at first, but each one of those “small devices” is pretty powerful by 90s standards. (Okay, the Echo is dumber than I thought, but it’s not like it’s doing anything else, and the next gen will be GHz quad-core because that’ll be the SoC jellybean to use.)
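To put that mismatch into rough numbers, here’s a minimal back-of-envelope sketch using only the figures mentioned above. The clock and core counts are illustrative, and the compute ratio is deliberately pessimistic since it ignores IPC, SIMD, and cache improvements:

```python
# Back-of-envelope comparison of a late-90s dictation PC vs. a modern
# "small device" SoC. All figures are rough illustrative numbers from
# the discussion above, not benchmarks.

pentium_mhz = 200          # late-90s single-core Pentium
pentium_cores = 1
pentium_ram_mb = 128       # 128 MB of RAM

soc_mhz = 1000             # a ~1 GHz core
soc_cores = 4              # hypothetical quad-core SoC
soc_ram_mb = 16 * 1024     # 16 GB

# Naive clock-times-cores ratio; the real gap is far larger.
compute_ratio = (soc_mhz * soc_cores) / (pentium_mhz * pentium_cores)
ram_ratio = soc_ram_mb / pentium_ram_mb

print(f"compute: ~{compute_ratio:.0f}x, RAM: ~{ram_ratio:.0f}x")
# → compute: ~20x, RAM: ~128x
```

Even this crude estimate puts one such SoC at roughly 20x the compute and 128x the memory of a machine that was already running 90s dictation software locally.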

Perhaps the course will answer this simple question: How big should my house cloud be to do all my voice recognition locally?

I mean, there’s nothing magical about cloud resources; it’s not a Giant Brain like a positronic Multivac. It’s a chunk of processing and memory resources down a long pipe. So a matrix of local SoCs should do it easily, right?