Software for using your head movements to control your cursor


As a software developer, I appreciate what he’s doing. But I’m wondering how long it will take the MIDI association to contact him about the name, now that he’s got this level of exposure.


I’m sure chiropractors and massage therapists would be very happy to see this gain in popularity. Like voice navigation, it just tends to shift the strain to another part of the body. I also wonder about undo functions, to avoid actions happening when the user’s facial expression is just a response to something on the screen. :nerd_face:


Oh wow this is soooooo cool, thanks for writing about this!

I started working on this in 2018, while homeless, to help a resident at the shelter who was recovering from a stroke use the web handsfree (his family lived in Texas, and since he couldn’t type or speak clearly, he had no real way of communicating with them without a translator). I’ve had a pretty intense journey since then, including a visit by Google PAIR at the shelter and a two-week residency at Carnegie Mellon (I didn’t even have a coat when I was flown in during the frosty winter there haha)

My goal right now is to discover a sort of framework or “common language” for interacting with devices handsfree, particularly for creative and accessibility purposes. A lot of the ideas are borrowed from VR/WebXR, but using a computer screen or projector instead of a headset. I’m currently focused on the desktop/browser, but I’ve also done experiments with robots and drones!

Anyways as a developer it’s a super cool feeling to see someone write about your work, and I really appreciate it!

@johnawerner oh this is a huge bummer but I’m grateful you brought this up :pray: I had not realized that the acronym for the MIDI protocol was trademarked. Fortunately it’s early enough that I can still pivot away from the name. The domain itself costs me $85/mo since I’m leasing it, so it would be good to save money anyways haha

@PsiPhiGrrrl These are great points! Most actions are registered by averaging things out over a few frames to account for errors, but I use a smile gesture to click and sometimes it’s so fun to use that it’s hard not to smile :slight_smile:
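For fellow devs who are curious, the frame-averaging idea above can be sketched roughly like this. This is a minimal illustration only, with a made-up window size and threshold, not the project’s actual code:

```python
from collections import deque

class GestureSmoother:
    """Registers a gesture only when its confidence, averaged over
    the last few video frames, stays above a threshold. This filters
    out one-frame blips like an involuntary smile."""

    def __init__(self, window=5, threshold=0.8):
        self.window = window
        self.threshold = threshold
        self.scores = deque(maxlen=window)  # only keeps the last `window` scores

    def update(self, confidence):
        """Feed one per-frame confidence score (0.0-1.0).
        Returns True once the windowed average crosses the threshold."""
        self.scores.append(confidence)
        if len(self.scores) < self.window:
            return False  # not enough history yet
        return sum(self.scores) / len(self.scores) >= self.threshold

smoother = GestureSmoother()
# A single noisy low frame (0.2) doesn't prevent the eventual click:
clicks = [smoother.update(c) for c in [0.9, 0.2, 0.9, 0.95, 0.9, 0.92, 0.9]]
```

A real tracker would feed this from a per-frame face/gesture model; the deque just makes the sliding window trivial to maintain.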


Have you considered adding sounds as input? Not speech recognition, but simple sounds like clicks and pops?

I recall reading a crappy book about building a robot; this was the '70s, so all the processor could handle was 4 different sounds; if memory serves, you could get 15 different commands by stringing together these 4 sounds.
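The encoding trick described above (a handful of primitive sounds strung into longer codes) is easy to sketch. The sound names and command mapping below are purely illustrative, since the original book’s scheme is long gone:

```python
# Hypothetical: map short sequences of 4 primitive mouth sounds to
# commands. Even with sequences of at most two sounds, the 4 singles
# plus 4*4 ordered pairs give 20 distinct codes -- plenty for a
# small robot command set.
COMMANDS = {
    ("click",):       "stop",
    ("pop",):         "forward",
    ("click", "pop"): "turn_left",
    ("pop", "click"): "turn_right",
    ("hiss", "hiss"): "reverse",
}

def decode(sounds):
    """Look up a recognized sequence of sound tokens."""
    return COMMANDS.get(tuple(sounds), "unknown")
```

The hard part on ’70s hardware would have been classifying the 4 raw sounds at all; once that works, the sequence lookup is trivial.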

Looking forward to your progress!


Oh that’s a cool idea! I hadn’t thought of that yet, but it’s important because not everyone has speech that a computer can recognize. I made a note of this, thanks!

Also I had no idea that there were robots in the ’70s, much less books on how to build them haha that’s really fascinating!


Pretty sure the book was from a “book club”; I was a sucker for book clubs when I was a teen. Either a TAB Book or McGraw Hill. Pretty sure the former. Sadly, it’s long gone.

Given the available technology, using the simple mouth sounds was probably the only possibility. It seemed brilliant to me. I’ve tried to suggest a similar idea for my Echo Dots, but no one from Amazon has acknowledged me yet. :frowning:

You’re making me feel old!


This topic was automatically closed after 5 days. New replies are no longer allowed.