A machine-learning wishlist for hardware designers

Originally published at: https://boingboing.net/2019/04/16/8-bits-are-enough.html

Very interesting.

I know nothing about modern machine learning, so I just assumed that the learning step (model creation) was the compute-intensive part, and that running data through a trained model on the consumer end was just a bunch of quick linear algebra that could be farmed out to SIMD co-processors or whatever.
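For what it's worth, here's a rough NumPy sketch of that "quick linear algebra" picture: a fully-connected layer is just a matrix-vector multiply, and (going by the article's "8 bits are enough" angle) the weights can often be squeezed down to 8-bit integers plus a scale factor. The layer sizes and the quantization scheme below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully-connected layer: 256 inputs -> 128 outputs (sizes are arbitrary).
weights = rng.standard_normal((128, 256)).astype(np.float32)
bias = rng.standard_normal(128).astype(np.float32)
x = rng.standard_normal(256).astype(np.float32)

# Float inference: one matrix-vector multiply plus a cheap nonlinearity (ReLU).
y_float = np.maximum(weights @ x + bias, 0.0)

# Crude 8-bit weight quantization (symmetric, one scale for the whole tensor).
# The weights shrink to a quarter of the memory, and on real hardware the
# multiply-accumulates can run on integer units; here we just simulate it by
# dequantizing back to float before the matmul.
scale = np.abs(weights).max() / 127.0
w_int8 = np.round(weights / scale).astype(np.int8)
y_quant = np.maximum((w_int8.astype(np.float32) * scale) @ x + bias, 0.0)

print("max abs error from 8-bit weights:", float(np.abs(y_float - y_quant).max()))
```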

I’m just happy that smarter people than I are working on the solution.


Alternatively, the team may direct users towards using ‘blessed’ models that will run entirely on the accelerator, avoiding any of the tricky custom operations.
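Translated out of wizard-speak: a "blessed" model is one built only from operations the accelerator actually implements, so no layer has to fall back to the CPU. A toy sketch of what that check looks like, with an invented op list and an invented supported-op set (real toolchains do something similar when you restrict a converter to a fixed set of built-in ops):

```python
# Hypothetical accelerator: it implements only a small, fixed set of ops.
ACCELERATOR_OPS = {"conv2d", "depthwise_conv2d", "relu", "add",
                   "avg_pool", "fully_connected", "softmax"}

def partition_ops(model_ops):
    """Split a model's ops into accelerator-supported and CPU-fallback groups."""
    supported = [op for op in model_ops if op in ACCELERATOR_OPS]
    fallback = [op for op in model_ops if op not in ACCELERATOR_OPS]
    return supported, fallback

# A made-up model: mostly standard layers plus one custom op.
model_ops = ["conv2d", "relu", "conv2d", "relu",
             "my_custom_attention", "avg_pool", "fully_connected", "softmax"]

supported, fallback = partition_ops(model_ops)
if fallback:
    print("Falls back to the CPU for:", sorted(set(fallback)))
else:
    print("Fully 'blessed': the whole graph runs on the accelerator.")
```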

That’s all well and good once you get to the midgame, when you can make an arbitrarily large holy water stack by diluting potions in the fountains you’ll have found by then (e.g., in the Oracle room). But I’m not sure how low-level users are going to come up with a blessed model except by blind luck or savescumming.

look, I do the best I can with the material you give me

I suspect that these carry significant limitations (quite likely beyond not being reconfigurable as presently implemented), but there is something about a neural network implemented as a static stack of optical components that I find compellingly elegant.

If it proves useful for some real-world work, I imagine it would also miniaturize quite well if one moved from 3D-printed plastic to wafer witchcraft or precision film deposition.
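To make the "static stack" idea concrete, here's a cartoon of it in NumPy, under the (big) simplification that each fixed layer acts as a complex matrix on the optical field and the only nonlinearity is the intensity measurement at the detector. None of the numbers correspond to real optics.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64  # number of "pixels" in the optical field (arbitrary)

def random_layer(n):
    """A stand-in for one static optical element: a fixed unitary matrix."""
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, _ = np.linalg.qr(m)  # orthonormalize so the layer roughly conserves energy
    return q

layers = [random_layer(N) for _ in range(5)]

def forward(field):
    """Pass an input field through the fixed stack, then detect intensity."""
    for layer in layers:
        field = layer @ field
    return np.abs(field) ** 2  # photodetectors measure intensity, not phase

x = rng.standard_normal(N) + 0j
y = forward(x)
print("detector readout shape:", y.shape, " total power:", float(y.sum()))

# The catch: a cascade of purely linear layers collapses to one big matrix,
# so all the "depth" only matters up to the nonlinear intensity measurement.
combined = np.linalg.multi_dot(layers[::-1])
assert np.allclose(np.abs(combined @ x) ** 2, y)
```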


This topic was automatically closed after 5 days. New replies are no longer allowed.