ECE516 Lab 2 ("lab2"), 2023 "What does a camera measure?"
Lab 1 asked "What is a camera?" whereas Lab 2 asks
"What does a camera measure?"
That fundamental question about what a camera measures is what led
S. Mann to
invent HDR (High Dynamic Range) imaging
in the 1970s/80s, and develop
it further at MIT in the early 1990s.
Most research on image processing fails to fundamentally address the
natural philosophy (physics) of the quantity that a camera measures,
e.g. at the pixel level.
In some sense the research gets lost in a "forest of pixels" and can't
see the "trees" (individual pixels) for the "forest" (array of pixels).
You can't do a good job of studying forestry if you only look at forests and never
at individual trees.
If we want to understand forests deeply, we might wish to begin
by really understanding what a tree is.
Likewise we'll never really understand what an image or picture is if we can't
first understand what a pixel (picture element) is.
Mann's philosophy that led to the invention of HDR was to regard a pixel as
a light meter, much like a photographic light meter.
Under this philosophy, a camera is regarded as an array of light meters,
each one measuring light coming from a different specific direction.
With that philosophy in mind, differently exposed pictures of the same subject
matter are just different measurements of the same reality.
These different measurements of the same reality can be combined to achieve
a better and better estimate of the true physical quantity.
This quantity is neither radiance, irradiance, nor the like
(i.e. it does not have a flat spectral response), nor is it luminance or
illuminance or the like (i.e. it does not match the photometric response of
the human eye).
It is something new, and specific to each kind of sensor.
It is called the "photoquantity" or the "photoquantigraphic unit", and
is usually denoted by the letter q.
Often we'll regard the camera as an array of light meters, perhaps sampled on a
2-dimensional plane, thus giving us an array of photoquantities q(x,y) over
continuous variables x and y.
These may in turn be sampled over a discrete array, which we might write as
q[x,y], using square brackets to denote the digital domain even as the range
might remain analog, or at least "undigital" (Jenny Hall's take on "undigital").
A typical camera reports some function of q,
e.g. f(q(x,y)) or f(q[x,y]).
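To make the notation concrete, here is a brief sketch (in LaTeX notation) of the
relationships this lab explores; the exposure ratio k between two measurements,
and the certainty weights w_i, are assumptions of this particular illustration:

    \[
      f_1 = f(q), \qquad f_2 = g = f(kq), \qquad g = f\!\big(k\, f^{-1}(f_1)\big),
    \]
    \[
      \hat{q} = \frac{\sum_i w_i\, f^{-1}(f_i)/k_i}{\sum_i w_i}.
    \]

The third equation is the comparametric equation: its plot, g versus f, is the
comparagraph asked for in the marking scheme below. The last line is one common way
of combining several differently exposed measurements f_i (with exposure ratios k_i
and certainty weights w_i) into a single estimate of the photoquantity q.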
In order to really fundamentally understand what a camera measures,
we're going to build a really simple 1-pixel camera.
Since the camera has only 1 pixel, we won't be distracted by spatially varying
quantities, i.e. by the
domain (x,y) or [x,y].
This will force us to focus on the range, q,
and this range is in fact the domain of f.
Thus our emphasis here is on dynamic range!
There are two parts to Lab 2:
- 2a. Build a light meter (quantimetric/quantigraphic sensing);
this will be the basis of your 1-pixel camera.
- 2b. Use your light meter to make a 1-pixel camera and capture its
metaveillance (metavision).
There are 3 ways to make your light meter:
- 2a.i Beginner: photocell plus multimeter;
- 2a.ii Intermediate: photocell plus microcontroller;
- 2a.iii Advanced: photocell pair in a Wheatstone bridge plus microcontroller with differential analog input.
Any suitable microcontroller will do; as a case study we'll use the ESP32 (WROOM),
such as the ESP-WROOM-32 or ESP32-S with WiFi and Bluetooth,
which makes it easy to connect directly to the metaverse and
extended metaverse.
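For the intermediate build (2a.ii), a minimal read-out loop might look like the sketch
below. It assumes the photocell forms a voltage divider with a fixed 10 kohm resistor
feeding ADC pin GPIO 34, and that the ESP32 is running MicroPython; an Arduino (C/C++)
sketch is an equally valid choice. The pin, resistor value, and wiring are illustrative
assumptions, not requirements.

    # Minimal ESP32 light-meter loop (MicroPython); a sketch, not the required implementation.
    # Assumed wiring: photocell from 3.3 V to GPIO 34, fixed 10 kohm resistor from GPIO 34 to GND.
    from machine import ADC, Pin
    import time

    R_FIXED = 10000            # ohms; the fixed half of the voltage divider (assumed value)
    VCC = 3.3                  # supply voltage across the divider

    adc = ADC(Pin(34))         # GPIO 34 is input-only on the ESP32, convenient for analog sensing
    adc.atten(ADC.ATTN_11DB)   # full-scale range of roughly 0..3.3 V
    adc.width(ADC.WIDTH_12BIT) # readings from 0 to 4095

    while True:
        f = adc.read()                            # raw reading: this is your f, some function of q
        v = VCC * f / 4095                        # approximate voltage at the divider tap
        if f > 0:
            r_photo = R_FIXED * (4095 - f) / f    # photocell resistance for the assumed wiring
        else:
            r_photo = float('inf')                # no light at all (or a wiring problem)
        print(f, v, r_photo)                      # log over USB serial for the analysis below
        time.sleep_ms(200)                        # about 5 readings per second

Logging these readings over the USB serial port gives the raw f values (and the
corresponding photocell resistances) that feed the analysis items in the marking
scheme below.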
Marking:
Post your results to this Instructable
(One-Pixel Camera...) by Wednesday Feb. 15th at 3pm
and double-check that the post is present in the "I made it" list.
Then present it at the Thursday Feb. 16th 9am lab.
- 1/10 Electrical build, e.g. reading out the quantigraphic quantity, q.
- 1/10 Collect quantigraphic data from your light meter:
obtain ordered pairs (f,g).
- 1/10 Plot g as a function of f (comparagraph).
- 1/10 Plot f as a function of q (response function).
- 1/10 Determine the mathematical relationship between f and q, based on
the data for your camera, and compare with the same photocell as measured in
class. In your case you can also measure the dark resistance (the
resistance in total darkness) and do constrained regression (forced
through that data point). A sketch of these analysis steps appears after
the marking section.
- 1/10 Mechanical+optical build of the 1-pixel camera (nice box with
pinhole or lens).
- 2/10 While observing the numbers coming out of your camera (or a plot),
move a point light source back and forth to determine your camera's
field of view
at various distances from the Aperture (pinhole or lens).
A small Lamp (shown in red in the diagram) works best, e.g.
perhaps the LED that was given out in class.
Record the position of each of the points where the reading falls to
about half of its value inside the camera's field of view.
These are the boundary points.
For simplicity, confine the analysis to a plane (2D slice)
passing through the optical axis (O.A.).
The O.A. is a line drawn through the center of the Aperture,
perpendicular to the image plane where the Photocell is located.
It intersects the image plane at the Principal Point (P.P.).
See the diagram below.
- 2/10 Using Octave and OpenBrush as a 3D plotter,
plot a box (rectangle) that represents the camera,
together with the camera's metaveillance flux as concentric arcs within the
plane of your measurements, and capture the result as a screenshot
of your OpenBrush screen. Draw all the arcs with the same center,
which should be the center of the Aperture.
You can use "monoscopic" mode for this.
A sketch for generating the arc geometry appears after the marking section.
The result might look something like this, but you don't need that many
arcs; keep it simple if you prefer:
- Optional (can earn more than 10/10): Make some measurements in 3D (three dimensions),
e.g. perhaps in two intersecting planes, or around
circles or rectangles, and
plot the metaveillance flux in 3D, together with
a photograph of your camera in the extended metaverse
(rather than just drawing a rectangular box to
represent the camera).
It might look something like this (below is
the rectangular field of view of the infrared depth camera of
a Delta automatic flushometer in
the 3rd-floor men's room of the Engineering Annex):
or it might look something like this (just showing the outlines):
Post your results to
this Instructable (One-Pixel Camera...) on or before Wed. Feb. 15th at 3pm,
and then bring your work to the Thu. Feb. 16th 9am class for presentation.
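For the data-analysis items above (the ordered pairs, the comparagraph, the response
function, and the relationship between f and q), here is a minimal sketch of one
possible workflow. The marking scheme suggests Octave; an equivalent Python/NumPy
version is shown here. The file names, the two-column layout, and the power-law model
f ~ a * q^gamma are illustrative assumptions only; fit whatever model your own data supports.

    # Sketch of the Lab 2 analysis steps in Python/NumPy (Octave works just as well).
    # Assumed inputs (hypothetical file names and layout):
    #   pairs_fg.csv : two columns, f and g; the same subject measured at two exposures
    #   pairs_qf.csv : two columns, relative photoquantity q and the camera output f
    import numpy as np
    import matplotlib.pyplot as plt

    f, g = np.loadtxt('pairs_fg.csv', delimiter=',', unpack=True)
    q, fq = np.loadtxt('pairs_qf.csv', delimiter=',', unpack=True)

    # Comparagraph: g as a function of f
    plt.figure()
    plt.plot(f, g, 'o')
    plt.xlabel('f'); plt.ylabel('g'); plt.title('Comparagraph')

    # Response function: f as a function of q
    plt.figure()
    plt.plot(q, fq, 'o')
    plt.xlabel('q (relative photoquantity)'); plt.ylabel('f'); plt.title('Response function')

    # Example fit, assuming a power-law response f = a * q**gamma (common for CdS photocells):
    # a straight-line fit in log-log coordinates gives gamma (slope) and a (intercept).
    # The dark-resistance measurement can be brought in as an extra constraint on the fit.
    gamma, log_a = np.polyfit(np.log(q), np.log(fq), 1)
    a = np.exp(log_a)
    print('fitted model: f ~ %.3g * q^%.3g' % (a, gamma))

    plt.show()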
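For the metaveillance-flux drawing, the sketch below shows one way to turn your
boundary-point measurements into arc geometry: a camera box plus concentric arcs
spanning the measured field of view, all centred on the Aperture. Python/matplotlib
is used here only as a stand-in for the Octave + OpenBrush workflow the marking scheme
asks for, and the distances are made-up example values.

    # Sketch: concentric metaveillance-flux arcs in the plane of measurement,
    # centred on the camera's Aperture.  Python/matplotlib stand-in for Octave + OpenBrush.
    import numpy as np
    import matplotlib.pyplot as plt

    # Assumed example measurements (replace with your own boundary-point data):
    d = 0.50                                # distance from the Aperture where boundary points were found (m)
    half_width = 0.12                       # half the separation of the two boundary points at that distance (m)
    half_angle = np.arctan2(half_width, d)  # half of the field of view, in radians

    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')

    # A simple box (rectangle) representing the camera, with the Aperture at the origin,
    # the optical axis along +x, and the measurement plane at z = 0.
    box_x = [0, -0.05, -0.05, 0, 0]
    box_y = [0.02, 0.02, -0.02, -0.02, 0.02]
    ax.plot(box_x, box_y, np.zeros(5), 'k-')

    # Concentric arcs spanning the measured field of view, all centred on the Aperture.
    theta = np.linspace(-half_angle, half_angle, 100)
    for r in np.linspace(0.1, 1.0, 8):      # arc radii in metres; use fewer arcs if you prefer
        ax.plot(r * np.cos(theta), r * np.sin(theta), np.zeros_like(theta), 'b-')

    ax.set_xlabel('x (m)'); ax.set_ylabel('y (m)'); ax.set_zlabel('z (m)')
    ax.set_title('Metaveillance flux arcs in the measurement plane')
    plt.show()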
Optional fun: you can compare with other
data gathered in a previous year's lecture
(link) and with the
data gathered from the Blue Sky solar cell (link).
See also the
Photocell Experiment
and the Instructable entitled Phenomenological Augmented Reality.
References:
• Prof. Wang's reference document
• Kineveillance: look at Figures 4, 5, and 6,
and Equations 1 to 10.
• The concept of
veillance flux (link).
• (optional reading: Minsky, Kurzweil, and Mann, 2013)
• (optional reading:
Metavision)
• (optional reading: Humanistic Intelligence; see around Figure 3 of
this paper)
• (optional reading: if you like mathematics and physics,
check out the concepts of veillance flux density, etc.,
here; see Part 3 of this set of papers: Veillance)
• (optional reading: 3-page excerpt from the comparametrics HDR book chapter,
http://wearcam.org/ece516/comparametrics_scalingop_3page_book_excerpt.pdf)
• (optional reading: Adnan's notes for the invited guest lecture of
2023feb09,
http://wearcam.org/ece516/comparam_lecture_adnan_2023feb09)