Master the AI technology behind Siri and Alexa for just $39

Originally published at:

I like the idea of an Alexa or a Cortana just hanging around listening when I call out… but I don’t like the fact that a device I don’t truly control is just sitting around listening to me.

I’ve seen some homebrew attempts at this, but nothing that I really think meets my needs.

If I had time and thought this was worth the $39 I might pony up for this.


A free software personal assistant is crucial to preserving users’ control over their technology and data while still giving them the benefits such software offers. This may be a particularly challenging project for free software developers, as most personal assistant software demands access to users’ calendars, location data, and more, and it often incorporates image, speech, and natural language processing services – but many free software users identified this project as one that is important to them.


“Master the AI technology behind Siri and Alexa for just $39” = “Master the theory behind quantum mechanics for just $39”. I mean, c’mon.


Exactly! It’s an amazing deal!



I like the way you think. We have a lot in common…


Master Keynesian economics for just $39 (while supply lasts)

Mastering pyramid schemes, $99 or just $39 if you sign up 10 friends


Hey Siri, tell Alexa to order a dollhouse


The idea of using cloud voice recognition still seems odd … like building a national laundromat and everyone Fedexing their dirty washing to it.

Voice recognition on PCs seemed to stall in the late 90s. I suspected that a small cartel of companies with patents was behind the slow pace while they were selling company-level solutions. It never made sense that it didn’t keep pace with processor improvements from 200 MHz single-core Pentiums with 128 MB of RAM to GHz multi-core CPUs with 16 GB+. Using the cloud to give a uniform experience on small devices sounds good at first, but each one of those “small devices” is pretty powerful by 90s standards. (Okay, the Echo is dumber than I thought, but it’s not like it’s doing anything else, and the next gen will be GHz quad-core because that’ll be the SoC jellybean to use.)

Perhaps the course will answer this simple question: How big should my house cloud be to do all my voice recognition locally?

I mean, there’s nothing magical about cloud resources; it’s not a Giant Brain like a positronic Multivac. It’s a chunk of processing and memory resources down a long pipe. So a matrix of local SoCs should do it easily, right?


The query only takes a moment, but it probably needs a ton of memory and CPU for that moment. The Alexa device spends 99.999% of its time idle, so it is enormously more efficient to route the queries to a timeshared central compute facility.
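The efficiency argument above is easy to sanity-check with back-of-envelope arithmetic. A quick sketch, using the 99.999%-idle figure from the post plus a purely illustrative per-query time (the 0.5 s figure is an assumption, not from the thread):

```python
# Duty-cycle sketch: how many always-listening devices could one
# central server timeshare, if each device is idle 99.999% of the time?
# The per-query duration below is a made-up illustrative number.

idle_fraction = 0.99999            # figure quoted in the post above
duty_cycle = 1 - idle_fraction     # fraction of time a device needs compute
devices_per_server = 1 / duty_cycle

print(f"One server could, in theory, timeshare ~{devices_per_server:,.0f} devices")
```

In other words, hardware sized for one continuous workload can, on paper, serve on the order of 100,000 mostly-idle devices, which is why a local SoC per device sits unused almost all the time while the cloud box stays busy.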

This topic was automatically closed after 5 days. New replies are no longer allowed.