Originally published at: This incredibly tiny camera is the size of a grain of salt | Boing Boing
…
Colonoscopy here we come. My deepest apologies…
“What is this? A camera for ants?”
Hang on - is this a (complete) camera or a lens (that can be used to make smaller yet better cameras)?
It’s nice to know we’re within spitting distance of invisible front-facing cameras on smartphones, but then again, haven’t we always been?
I presume they’ll round off those corners first
Cameras so tiny as to be invisible?
Yay?
Does moving the concerns of large numbers of clinically paranoid people from ‘delusional’ to ‘well, there’s a minimum order quantity, but it’s definitely available’ count as treating disease?
Because these will probably do a decent bit of that.
Just because one’s paranoid doesn’t mean there aren’t cameras everywhere recording the totality of our lives.
Time to invest in tin foil stocks.
From the phys.org article: “Although the approach to optical design is not new, this is the first system that uses a surface optical technology in the front end and neural-based processing in the back,”
Kind of sounds like the lens and the processing are integrated…?
I’ve read the article twice, and the only thing I’m clear on is how the surface forms the lens; I still don’t see how the picture it takes eventually winds up being recorded or displayed.
“Although the approach to optical design is not new, this is the first system that uses a surface optical technology in the front end and neural-based processing in the back.” So the imagery is created with a convolutional neural network (CNN); something like the 3D CNNs used for medical imagery.
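Not the actual pipeline from the paper, to be clear; just the mental model I’ve settled on. Roughly: the tiny optic smears the scene in some known-but-weird way, and the processing step undoes the smearing. Here’s a toy Python sketch of that idea, where a plain Wiener filter stands in for the learned part and the PSF is completely made up:

```python
# Rough mental model only (not the paper's actual pipeline): the metasurface
# smears the scene with some point-spread function (PSF), and the "neural
# back end" learns to undo that smearing. Here a classical Wiener filter
# stands in for the learned reconstruction, just to show the shape of the problem.
import numpy as np

rng = np.random.default_rng(0)

def blur_with_psf(scene, psf):
    """Simulate the measurement: scene circularly convolved with the PSF via FFT."""
    H = np.fft.fft2(psf, s=scene.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * H))

def wiener_reconstruct(measurement, psf, noise_level=1e-2):
    """Stand-in for the CNN: invert the known PSF in the Fourier domain."""
    H = np.fft.fft2(psf, s=measurement.shape)
    filt = np.conj(H) / (np.abs(H) ** 2 + noise_level)
    return np.real(np.fft.ifft2(np.fft.fft2(measurement) * filt))

# Toy scene and a made-up, messy PSF (a real metasurface PSF is far stranger).
scene = np.zeros((64, 64))
scene[20:44, 20:44] = 1.0
psf = rng.random((9, 9))
psf /= psf.sum()

measurement = blur_with_psf(scene, psf) + 0.01 * rng.standard_normal(scene.shape)
estimate = wiener_reconstruct(measurement, psf)
print("reconstruction error:", np.mean((estimate - scene) ** 2))
```

The point being that the “camera” is only half hardware; the other half is software that knows what the hardware does to light.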
From a piece on Deep Learning Microscopy:
“…a deep neural network can significantly improve optical microscopy, enhancing its spatial resolution over a large field of view and depth of field. After its training, the only input to this network is an image acquired using a regular optical microscope, without any changes to its design.”
A “regular optical microscope” is now a grain of salt. IDK exactly how the image capture works, but this feels close to how it might work.
If “surfaces as sensors” is where they’re going, it makes sense that a CNN-based network might pull it all together.
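If that’s the shape of it, the “neural-based processing in the back” would look something like a small image-to-image CNN trained on pairs of what the tiny optic records versus what a conventional camera sees. A toy PyTorch sketch (invented layer sizes, random tensors in place of real training pairs, nothing from the actual paper):

```python
# Very rough sketch of the "neural-based processing in the back" idea:
# a small CNN that takes the raw (blurry) measurement and is trained
# against clean reference images. Everything here (layer sizes, loss, data)
# is invented for illustration; the real networks are much more elaborate.
import torch
import torch.nn as nn

class TinyReconstructionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, measurement):
        # Predict a correction and add it to the input (residual learning).
        return measurement + self.net(measurement)

model = TinyReconstructionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in training pair: in reality you'd use (measurement, ground-truth) image pairs.
measurement = torch.rand(8, 1, 64, 64)   # what the tiny optic actually records
ground_truth = torch.rand(8, 1, 64, 64)  # what a bulky, conventional camera would see

for step in range(5):  # real training runs for many thousands of steps
    optimizer.zero_grad()
    loss = loss_fn(model(measurement), ground_truth)
    loss.backward()
    optimizer.step()
    print(step, loss.item())
```

Which would fit the quote: one “system” with an optical front end and a neural back end, rather than a lens plus an app.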
I’m not sure they mean the actual back surface, but maybe? The latest image sensors from companies like Sony do a certain amount of work on the sensor itself, though I think it has more to do with focusing. Phase-based focusing can determine from the current image whether the lens needs to move closer or farther to reach focus. Contrast-based systems would normally need either some sort of distance-measuring device to move toward focus, or they would start moving in one direction and, if contrast decreased, move in the other direction.
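For what it’s worth, that contrast-detection behavior is basically a hill climb: nudge the lens, check sharpness, and reverse with smaller steps when it gets worse. Toy Python version (the contrast_at function is hypothetical; a real camera would capture a frame and compute a sharpness score, e.g. variance of a Laplacian):

```python
# Toy illustration of the contrast-detection behavior described above:
# nudge the lens one way, and if the image gets less sharp, reverse direction
# and take smaller steps until the step size is negligible.
def contrast_at(lens_position):
    # Hypothetical stand-in for "capture a frame and measure sharpness".
    best_focus = 42.0  # pretend the scene is sharpest at this lens position
    return 1.0 / (1.0 + (lens_position - best_focus) ** 2)

def contrast_autofocus(position, step=5.0, min_step=0.1):
    direction = +1
    current = contrast_at(position)
    while step > min_step:
        candidate = position + direction * step
        new = contrast_at(candidate)
        if new > current:
            position, current = candidate, new  # sharper: keep going this way
        else:
            direction = -direction              # blurrier: reverse...
            step /= 2                           # ...and search more finely
    return position

print(contrast_autofocus(position=10.0))  # converges near 42.0
```

Which is also why contrast-detect systems visibly hunt back and forth before locking on, while phase-detect ones already know which way to move.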
The word ‘camera’ means ‘chamber’. Photographically, that chamber is the volume between a lens (focusing an image) and the film or sensor (capturing the image). Where is the chamber in this device?
Sounds like it’s the space between the tips of the pylons and the base: many parallel, unequally sized chambers.
Is that in an optical path? My bodily cells possess many gaps, too, but they’re not rendering images AFAIK – that wasn’t covered in my A&P (anatomy and physiology) classes.
Tell that to your eyes, I guess.
I’m not a scientist on the project, of course, so take my interpretation with a grain of salt (which could be a very small camera, I guess… hmm…).