Whenever I help friends and family buy a new camera, they almost always treat megapixels as the deciding factor. The reality is, that’s probably not the most appropriate measure of “bestness,” and here’s why:
The metric most often used by camera manufacturers and marketers to tout their products has been pixel count. That’s a shame, but it was probably inevitable — it’s easy to measure, and consumers are used to the idea that more is better. However, the number of pixels is a measure of quantity, not quality.
This is a great article explaining in a mostly non-technical way why pixels aren’t all they’re cracked up to be.
Case in point: I can print (and have printed) a 30″ × 20″ from my eight-year-old 6.1 MP Nikon D70 that looks great, because it has a 23.7 mm × 15.6 mm sensor. If I were to print a picture at the same size from my year-old iPhone 4S with its 8 MP 4.54 mm × 3.42 mm sensor, it would look very noisy.
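A quick back-of-the-envelope sketch makes the difference concrete. The pixel counts and sensor dimensions are the ones quoted above; the per-pixel area arithmetic is mine, and it's a rough average (it ignores the gaps between photosites), but it shows the scale of the gap:

```python
# Rough comparison: average sensor area per pixel for the two cameras above.
# Bigger photosites collect more light per pixel, which generally means less noise.

def area_per_pixel_um2(width_mm, height_mm, megapixels):
    """Average sensor area per pixel, in square micrometres."""
    sensor_area_um2 = (width_mm * 1000) * (height_mm * 1000)
    return sensor_area_um2 / (megapixels * 1e6)

d70 = area_per_pixel_um2(23.7, 15.6, 6.1)     # Nikon D70: 23.7 x 15.6 mm, 6.1 MP
iphone = area_per_pixel_um2(4.54, 3.42, 8.0)  # iPhone 4S: 4.54 x 3.42 mm, 8 MP

print(f"D70:       {d70:.1f} um^2 per pixel")
print(f"iPhone 4S: {iphone:.1f} um^2 per pixel")
print(f"Ratio:     {d70 / iphone:.0f}x")
```

By this rough measure each D70 photosite has about thirty times the area of an iPhone 4S photosite, which is why the big print from fewer pixels still looks clean.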
What the heck is going on here? I took a picture in burst mode (three consecutive pictures taken at 3 fps). The first picture has some weird green blob in it (click on either image to embiggen):
…and the very next shot, the green blob is gone…
…thoughts? I was shooting with my 4-year-old Nikon D70 with an 18–70mm Nikkor lens, without a hood. Could it be an artifact from the sun oversaturating the sensor?
The concert at EDay’s provided some great lighting (I’ve been reading Strobist lately, a blog about off-camera flash lighting), but I had some issues when the red lights came on.
The red light tended to really wash out the image, almost like a bright white light. I did some research, and apparently the CCD sensor is particularly sensitive to red light. There is supposed to be a “built-in, low-pass, long-wavelength cutoff filter in front of the sensor” (Source), but it doesn’t appear to work very well. Not that I’m complaining; it’s just something interesting I’m noting.