Let's start working on the assistive-tech applications then. Even without the glasses themselves we can write use cases, functional specs, and design proposals. And by the time we're done (this stuff takes time), AR glasses of various kinds will be almost dirt-cheap.
For example, I can see an application using speech recognition for hearing-impaired users, rendering dialogue as live text. Add machine translation and the same app serves hearing users too, possibly with a text-recognition-and-translation overlay for signs and printed matter. The non-asshole theaters could then even stream subtitles in a bazillion languages as a wireless service. I would also love annotation of the actors/characters so you don't lose track of who is who; suboptimal face-recognition capability sucks spheres both in social settings and when watching movies.
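The captioning idea above can be sketched as a small pipeline. This is a minimal sketch, not a real implementation: the `recognize` and `translate` hooks are hypothetical stand-ins for an actual ASR engine and MT service, and the `Caption` record is just one plausible shape for what the glasses would overlay.

```python
import time
from dataclasses import dataclass
from typing import Callable, Iterator, Tuple

@dataclass
class Caption:
    speaker: str      # speaker/character annotation for the overlay
    text: str         # recognized (and optionally translated) dialogue
    timestamp: float  # when the caption was produced

def caption_stream(
    audio_chunks: Iterator[bytes],
    recognize: Callable[[bytes], Tuple[str, str]],   # hypothetical ASR: audio -> (speaker, text)
    translate: Callable[[str], str] = lambda t: t,   # identity by default = no translation
) -> Iterator[Caption]:
    """Turn a stream of audio chunks into caption records for an AR overlay."""
    for chunk in audio_chunks:
        speaker, text = recognize(chunk)
        yield Caption(speaker, translate(text), time.time())

# Stub recognizer standing in for a real speech-recognition engine.
def fake_recognize(chunk: bytes) -> Tuple[str, str]:
    return ("Alice", chunk.decode())

captions = list(caption_stream([b"hello", b"world"], fake_recognize))
```

The point of keeping `recognize` and `translate` as injected callables is that the same pipeline serves both use cases from the post: plug in an identity `translate` for hearing-impaired users, or a real MT backend for cross-language subtitles.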
The future is here. Trying to stop it is futile. Resources are better spent making it better.