ECE516 Lab 2 ("lab2"), 2023 "What does a camera measure?"

Lab 1 asked "What is a camera?" whereas Lab 2 asks "What does a camera measure?"

That fundamental question about what a camera measures is what led S. Mann to invent HDR (High Dynamic Range) imaging in the 1970s/80s, and develop it further at MIT in the early 1990s.

Most research on image processing fails to fundamentally address the natural philosophy (physics) of the quantity that a camera measures, e.g. at the pixel level. In some sense the research gets lost in a "forest of pixels" and can't see the "trees" (individual pixels) for the "forest" (array of pixels).

You can't do a good job of studying forestry if you only look at forests and never at individual trees. If we wish to understand forests deeply, we might begin by really understanding what a tree is.

Likewise we'll never really understand what an image or picture is if we can't first understand what a pixel (picture element) is.

Mann's philosophy that led to the invention of HDR was to regard a pixel as a light meter, much like a photographic light meter. Under this philosophy, a camera is regarded as an array of light meters, each one measuring light coming from a different specific direction. With that philosophy in mind, differently exposed pictures of the same subject matter are just different measurements of the same reality, and these different measurements can be combined to achieve a better and better estimate of the true physical quantity.

This quantity is neither radiance nor irradiance nor the like (i.e. it does not have a flat spectral response), nor is it luminance or illuminance or the like (i.e. it does not match the photometric response of the human eye). It is something new, and specific to each kind of sensor. It is called the "photoquantity" or the "photoquantigraphic unit", and is usually denoted by the letter q.

Often we'll regard the camera as an array of light meters sampled on a 2-dimensional plane, giving us an array of photoquantities q(x,y) over continuous variables x and y. These may be sampled over a discrete array of samples, which we might write as q[x,y], using square brackets to denote the digital domain even as the range remains analog, or at least "undigital" (Jenny Hall's take on "undigital"). A typical camera reports some function of q, e.g. f(q(x,y)) or f(q[x,y]).
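The idea of combining differently exposed measurements of the same q can be sketched in a few lines. This is only an illustrative toy, not Mann's actual method: it assumes a simple power-law response f(q) and a crude mid-scale certainty weighting (Mann's comparametric certainty functions are more principled), with all names and constants chosen here for illustration.

```python
import numpy as np

GAMMA = 2.2  # assumed camera response f(q) = q**(1/GAMMA); real sensors differ


def f(q):
    """Assumed camera response: compressive power law, clipped to [0, 1]."""
    return np.clip(q, 0.0, 1.0) ** (1.0 / GAMMA)


def f_inverse(pix):
    """Invert the assumed response to recover (relative) photoquantity."""
    return np.asarray(pix, dtype=float) ** GAMMA


def estimate_q(pixels, exposures):
    """Combine differently exposed readings f(k_i * q) into one estimate of q.

    Each reading is inverted and divided by its exposure k_i; readings near
    the extremes (under- or over-exposed) get low weight.
    """
    pixels = np.asarray(pixels, dtype=float)
    exposures = np.asarray(exposures, dtype=float)
    q_hat = f_inverse(pixels) / exposures       # per-exposure estimates of q
    weight = pixels * (1.0 - pixels) + 1e-9     # peak certainty at mid-scale
    return float(np.sum(weight * q_hat) / np.sum(weight))


# Simulated truth: q = 0.04, "photographed" at exposures k = 1, 4, 16.
q_true = 0.04
exposures = [1.0, 4.0, 16.0]
readings = [float(f(k * q_true)) for k in exposures]
print(estimate_q(readings, exposures))  # recovers approximately 0.04
```

The point is that each exposure is just a different measurement of the same q; once f is known (or estimated comparametrically), the measurements fuse into a single, better estimate.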

In order to really fundamentally understand what a camera measures, we're going to build a really simple 1-pixel camera. Since the camera has only 1 pixel, we won't be distracted by spatially varying quantities, i.e. by the domain (x,y) or [x,y]. This forces us to focus on the range, q, and this range is in fact the domain of f. Thus our emphasis here is on dynamic range!
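A 1-pixel camera and its limited dynamic range can be simulated in a few lines. This is a minimal sketch under assumed parameters (a square-root response, 8-bit quantization, saturation at both ends); the function name and response are illustrative, not a real sensor model.

```python
def one_pixel_camera(q, exposure=1.0, bits=8):
    """Return the digital reading f(k*q) for photoquantity q at exposure k.

    Assumed response: compressive square root, saturating at both ends,
    quantized to the given bit depth.
    """
    levels = (1 << bits) - 1                 # 255 for 8 bits
    response = (exposure * q) ** 0.5         # assumed compressive response
    response = min(max(response, 0.0), 1.0)  # sensor saturates
    return round(response * levels)          # quantization


# The same scene measured at three exposures: dim detail needs a long
# exposure, bright detail needs a short one -- no single exposure sees both.
for k in (0.1, 1.0, 10.0):
    print(k, one_pixel_camera(0.001, k), one_pixel_camera(5.0, k))
```

Running this shows the dim reading buried near 0 at short exposures while the bright reading pins at 255 at long ones: the motivation for combining multiple exposures into one estimate of q.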

There are two parts to Lab 2:

There are 3 ways to make your light meter, using any suitable microcontroller... As a case study we'll use the ESP32 (WROOM), such as the ESP-WROOM-32 or ESP32-S with WiFi and Bluetooth, which will make it easy to connect directly to the metaverse and extended metaverse.


Post your results to this Instructable (One-Pixel Camera...) by Wednesday Feb. 15th at 3pm and double-check that the post is present in the "I made it" list. Then present it on Thursday Feb. 16th 9am lab.


Optional fun: you can compare with other data gathered in a previous year's lecture (link) and with the data gathered from the Blue Sky solar cell (link).

See also the Photocell Experiment and the Instructable entitled Phenomenological Augmented Reality:

• Prof. Wang's reference document;
• Kineveillance: look at Figures 4, 5, and 6, and Equations 1 to 10;
• The concept of veillance flux (link);
• optional reading: Minsky, Kurzweil, and Mann, 2013;
• optional reading: Metavision;
• optional reading: Humanistic Intelligence (see around Figure 3 of this paper);
• optional reading: if you like mathematics and physics, check out the concepts of veillance flux density, etc.; see Part 3 of this set of papers: Veillance;
• optional reading: 3-page excerpt from the comparametrics HDR book chapter;
• optional reading: Adnan's notes for the invited guest lecture 2023feb09.