I was fascinated by the previous gen of Sigma dp cameras... Note that there's also a dp1 and dp3 coming as well, with 19mm and 50mm fixed lenses, respectively, and the same sensor. It's a neat idea - not something I'd personally be interested in buying, but I like that they're stepping outside of the box a little bit.
Also, Foveon technology completely baffles me. I've tried reading up on it a couple of times and every time my eyes just glaze over.
F2.8? That's as fast as they can make the lens? In 2001, the Olympus C2040 had an f1.8 lens (f2.6 at max telephoto), and 3:1 zoom to boot. It cost about $700 when it first came out. I had one, and made some rather nice pictures with it.
I think I'll pass on this one, thanks.
I'm amused by these cameras that surround Kodak Brownie size lenses with huge black plastic frames, so they look like the real thing - as long as you don't know any better.
I'd love something digital that I could lock my vintage Nikon F series lenses onto.
I used to own the first-generation DP2, and although it was slow as shit and the display was all but useless, I loved the pictures from that Foveon sensor. Colours (especially skin) were amazing, and sharpness was much better than from a Bayer sensor, even though that first-gen Foveon only had about 4MP of effective resolution.
The lens is described as
Super Bright f/1.8-2.6 3x Zoom 7.1-21.3mm aspherical glass lens (equivalent to a 40-120mm lens on a 35mm camera).
A lens designer doesn't need an inordinately large amount of glass to make such a short focal length fast.
Sigma has a wide zoom (18-35mm) with a constant f/1.8 aperture that came out this year.
It's a lot easier to make a fast lens for a tiny sensor. There's a reason you don't see f/1.4 Medium Format lenses, let alone Large Format lenses.
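To put rough numbers on that: the physical aperture a lens needs is its real focal length divided by the f-number, so holding f/1.8 constant while the format grows means the glass has to grow right along with it. A quick sketch (the focal lengths below are just illustrative "normal"-lens values for each format, not any specific product):

```python
# Entrance-pupil diameter = focal length / f-number. The same f-number
# demands far more glass as the sensor (and real focal length) grows.

def pupil_mm(focal_length_mm, f_number):
    """Physical entrance-pupil diameter in millimetres."""
    return focal_length_mm / f_number

# Illustrative real focal lengths for a "normal" field of view per format.
formats = {
    '1/2-inch compact': 7.1,
    'APS-C': 35.0,
    'Full frame': 50.0,
    'Medium format': 80.0,
}

for name, focal in formats.items():
    print(f"{name}: f/1.8 needs a {pupil_mm(focal, 1.8):.1f} mm pupil")
```

The tiny-sensor compact gets away with a pupil under 4mm across, while a medium-format normal lens would need over 44mm of clear aperture at f/1.8, which is why you never see one.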
You mean like any of the pro-level Nikon DSLRs?
I think I actually had a C2040. And let me tell you, f/1.8 meant very little on that camera. Its low-light performance was HORRENDOUS. As others have mentioned, it's super easy to get f/1.8 when you've got a teeny tiny sensor (the C2040's is a 1/2" type). It's all about physics. To get an f/1.8 lens onto a sensor the size of the DP2's, even a fixed lens, you need much bigger glass. So Sigma is sacrificing 1 1/3 stops to keep the lens small. For comparison, have a look at this chart:
The C2040 (1/2" sensor) fits right between the smallest green box at the bottom and the second-smallest orange box. The last generation (Merrill) of Foveon sensor is actually bigger than Nikon's APS-C on that diagram (24x16mm vs. Nikon's 23.5x15.6mm).
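That "1 1/3 stops" figure is easy to check: light gathered scales with the inverse square of the f-number, so the gap in stops between two apertures is twice the base-2 log of their ratio.

```python
import math

def stops_between(f_fast, f_slow):
    # Light per unit area scales as (1 / N)^2, so the difference in
    # stops between two f-numbers is log2((f_slow / f_fast)^2).
    return 2 * math.log2(f_slow / f_fast)

print(round(stops_between(1.8, 2.8), 2))  # 1.27, i.e. about 1 1/3 stops
```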
How vintage? Officially, the only Nikon camera supporting pre-AI lenses is the Nikon Df. However, the "entry level" Nikons will as well. You can also put an F-mount lens on most other cameras with the appropriate adapter ring.
The concept behind the "Foveon" is rather simple. Regular image sensors capture light pretty much indiscriminately, no matter what color it is. To tell colors apart, manufacturers add color filters on top of each pixel: a "red" pixel grabs red light by throwing away any blue or green light that hits it. Effectively, each photosite discards roughly two-thirds of the light reaching it.
Foveon sensors, on the other hand, work quite a bit differently. They take advantage of the fact that different colors of light penetrate silicon to different depths. Red light goes through silicon best, so it tends to go deepest; blue pretty much stops at the surface. So each pixel captures ALL of the light that hits it, and color is determined by how deep the photons penetrate.
This means that, all things being equal, a Foveon sensor can do the same job with a fraction of the light. Also, without the filters, you get far fewer of the moiré patterns that regular sensors give you, and the apparent resolution can approach that of a conventional sensor with roughly three times the pixel count.
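In practice the three layers don't separate colors cleanly (each layer responds to a mix of wavelengths), so the raw layer readings get unmixed with a calibrated matrix. Here's a toy sketch of that idea; the mixing coefficients are invented purely for illustration and are not Sigma's actual calibration:

```python
import numpy as np

# Toy model: each silicon layer records a weighted mix of blue/green/red.
# These coefficients are made up for illustration only.
M = np.array([
    [0.7, 0.2, 0.1],   # top layer: responds mostly to blue
    [0.2, 0.6, 0.2],   # middle layer: mostly green
    [0.1, 0.2, 0.7],   # bottom layer: mostly red
])

scene_bgr = np.array([0.9, 0.4, 0.1])   # "true" color at one photosite
layer_readings = M @ scene_bgr          # what the stacked layers measure

# Recover the color by inverting the (calibrated) mixing matrix.
recovered = np.linalg.solve(M, layer_readings)
print(np.allclose(recovered, scene_bgr))  # True
```

The heavy crosstalk between layers is one reason Foveon files need aggressive color processing, which in turn costs noise performance.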
Really, I do not know why this type of sensor did not become more popular. The only downside I can think of is that the sensor is probably significantly more expensive to make, given the 3D nature of embedding regions at different depths in the silicon. That, and patents...
Just to clarify, by this he means most other non-Nikon cameras (Leica R being an exception, but you're not buying an R body to use Nikkors anyway).
For general compatibility between Nikon bodies and lenses: http://www.kenrockwell.com/nikon/compatibility-lens.htm. Basically, almost any Nikon lens can be mounted on any digital body (and will meter on pro bodies) so long as that lens is AI or has been converted to AI. Only pre-AI lenses, made before 1977, will need to be converted.
I kinda understood that part, the three separate layers capturing different colours of light... What confuses me is how it's all combined. Like, the new one has a 20mp layer that captures blue... Or is it 4 separate 5 megapixel layers combined into one layer? And the red and green layers are only 5 megapixel? But it generates a 20 megapixel image? Oww... My brain.
As with other sensors they count the colors separately for the advertised pixel count.
That means that compared to a Bayer filter sensor with the same advertised megapixel count you have a lower number of "photosites" and nominal resolution, but you actually get a full pixel's worth of color information out of every photosite.
If you have a 12-megapixel Foveon sensor, then you have 12 million red sensors, 12 million blue sensors, and 12 million green sensors. So, in effect, you have KIND OF a 36 MP sensor. Still, technically it's only 12 MP, so how you describe it is more of a marketing decision. Any way you slice it, a 12 MP Foveon will give a much better picture than a regular 12 MP CCD or CMOS sensor.
Hrmm, is that true, though? The Canon 60D is an 18 megapixel camera. It creates images with a max resolution of 5,184 x 3,456, which is 17,915,904 pixels. I would presume that means it actually has that many each of red, blue, and green photosites, right? Or am I wrong in my thinking about how Bayer filter sensors work, and it actually only has 6 million (roughly) of each colour's photosites?
About 4.5 million (18 million/4) each for red and blue and 9 million for green.
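Those numbers come straight from the 2x2 RGGB tile that standard Bayer mosaics repeat: one red, two greens, one blue per tile.

```python
# A Bayer mosaic repeats a 2x2 RGGB tile: 1 red, 2 green, 1 blue.
def bayer_counts(total_photosites):
    red = total_photosites // 4
    green = total_photosites // 2   # green gets double weight, roughly
    blue = total_photosites // 4    # matching the eye's green sensitivity
    return red, green, blue

print(bayer_counts(18_000_000))  # (4500000, 9000000, 4500000)
```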
So then... how does it create individual colour information for 18 million pixels? Isn't each pixel of the photo built from one red, one green, and one blue photosite? Sorry for all the questions, clearly I is dum when it comes to this stuff.
I suppose I could just Google it, but obviously my understanding of how Bayer sensors work was WAY off.
Each three-channel pixel in the finished image is interpolated from several single-channel photosites. The exact methods vary. And yes, that means the final image contains substantially less information than one might expect (advertised pixel count times three colour channels each).
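For the curious, here is roughly what the simplest such interpolation (plain bilinear averaging over an RGGB mosaic) looks like; real raw converters use far smarter edge-aware methods, so treat this as a sketch of the idea only:

```python
import numpy as np

def convolve_same(a, k):
    """Tiny same-size 2D convolution with zero padding (no SciPy needed)."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    pad = np.pad(a, ((ph, ph), (pw, pw)))
    h, w = a.shape
    out = np.zeros((h, w))
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * pad[i:i + h, j:j + w]
    return out

def demosaic_bilinear(mosaic):
    """Naive bilinear demosaic of an RGGB mosaic into an RGB image."""
    h, w = mosaic.shape
    # Mark which photosite holds which channel in the RGGB layout.
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red sites
    masks[0::2, 1::2, 1] = True   # green sites on red rows
    masks[1::2, 0::2, 1] = True   # green sites on blue rows
    masks[1::2, 1::2, 2] = True   # blue sites
    out = np.zeros((h, w, 3))
    kernel = np.ones((3, 3))
    for c in range(3):
        known = np.where(masks[..., c], mosaic, 0.0)
        weight = masks[..., c].astype(float)
        # Each missing sample becomes the average of nearby known samples.
        out[..., c] = convolve_same(known, kernel) / np.maximum(
            convolve_same(weight, kernel), 1e-9)
    return out
```

Feed it a flat grey mosaic and every channel comes back flat grey; feed it sharp edges and you start to see where demosaicing artifacts (and the need for anti-aliasing filters) come from.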
They simply advertise the highest figure they can get away with, just like the Powerball people.
Thanks! Today I learned something new!
Woah. You mean my new camera is only capable of 25 bits of color instead of 42 bits I was promised?