Image resolution

The resolution of a digital camera is often limited by the image sensor (usually a charge-coupled device, or CCD chip) that turns light into discrete signals, performing the job that film does in traditional photography. The sensor is made up of millions of "buckets" that accumulate charge in response to light. Because a color filter covers each bucket, each one responds to only a narrow range of light wavelengths. Each of these buckets corresponds to a pixel in the captured image, and a demosaicing (interpolation) algorithm is needed to turn this raw image, in which each pixel samples only one wavelength range, into an RGB image in which each pixel carries three numbers representing a complete color.
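The interpolation step described above can be sketched in code. The following is a minimal, illustrative bilinear demosaic for a hypothetical RGGB Bayer mosaic: each missing color at a pixel is estimated as the mean of the nearest photosites that actually sampled that color. Real camera pipelines use far more sophisticated algorithms.

```python
# Hypothetical RGGB Bayer layout (one measured value per photosite):
#   R G R G
#   G B G B
#   R G R G
#   G B G B

def bayer_channel(y, x):
    """Return which color filter covers photosite (y, x) in an RGGB mosaic."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def demosaic(mosaic):
    """Bilinear demosaic: each color at a pixel is the mean of the samples
    of that color found in the surrounding 3x3 neighborhood."""
    h, w = len(mosaic), len(mosaic[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            samples = {"R": [], "G": [], "B": []}
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        samples[bayer_channel(ny, nx)].append(mosaic[ny][nx])
            out[y][x] = tuple(sum(v) / len(v)
                              for v in (samples["R"], samples["G"], samples["B"]))
    return out

# A uniform gray scene: every photosite reads 100, so every interpolated
# pixel should come out as (100.0, 100.0, 100.0).
mosaic = [[100] * 4 for _ in range(4)]
rgb = demosaic(mosaic)
print(rgb[1][1])  # (100.0, 100.0, 100.0)
```

On a uniform input the interpolation is exact; on real images, edges and fine detail are where demosaicing algorithms differ in quality.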

The attribute most commonly compared between cameras is the pixel count. Because sensor pixel counts now run into the millions, they are quoted in megapixels, using the SI prefix mega- (one million). For example, an 8.0 megapixel camera has 8.0 million pixels.
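The arithmetic behind the marketing number is simple: multiply the sensor's pixel dimensions and divide by one million. The dimensions below are illustrative, chosen to land near the 8.0 megapixel figure mentioned above.

```python
# Illustrative sensor dimensions (photosites per row and column).
width, height = 3264, 2448

pixels = width * height          # total photosites on the sensor
megapixels = pixels / 1_000_000  # SI mega- = 10^6

print(pixels)                    # 7990272
print(f"{megapixels:.1f} MP")    # 8.0 MP (rounded, as marketed)
```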

The pixel count alone is commonly presumed to indicate the resolution of a camera, but this is a misconception. Several other factors affect a sensor's effective resolution, including sensor size, lens quality, and the organization of the pixels (for example, a monochrome camera without a Bayer filter mosaic resolves more detail than a color camera with the same pixel count). Many digital compact cameras are criticized for having excessive pixel counts: their sensors can be so small that the sensor's resolution exceeds what the lens can possibly deliver.

As the technology has improved, costs have decreased dramatically. Taking "pixels per dollar" as a basic measure of value for a digital camera, the number of pixels each dollar buys in a new camera has increased continuously and steadily, consistent with the principles of Moore's Law. This predictability of camera prices was first presented in 1998 at the Australian PMA DIMA conference by Barry Hendy and has since been referred to as "Hendy's Law".
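The metric itself is a straightforward ratio. The sketch below computes pixels per dollar for a few entirely hypothetical camera price points (the years, pixel counts, and prices are made up for illustration, not historical data).

```python
# Hypothetical (year, pixel count, price in USD) data points
# to illustrate the pixels-per-dollar metric; not real figures.
cameras = [
    (1998, 1_000_000, 700.0),
    (2004, 5_000_000, 350.0),
    (2010, 12_000_000, 150.0),
]

pixels_per_dollar = {year: pixels / price for year, pixels, price in cameras}
for year, value in pixels_per_dollar.items():
    print(year, round(value))
```

A steadily rising value of this ratio over successive camera generations is the trend that Hendy's Law describes.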