Instructor: Professor Steve Mann, PhD (MIT '97), P. Eng. (Ontario), Chief Scientist of the Rotman School of Management's CDL, and widely known as "the father of wearable computing" and inventor of HDR (High Dynamic Range) imaging (bio).
Multidisciplinary course offering:
Course text: "Intelligent Image Processing", by S. Mann, Wiley-Interscience
Modules
Course website:
http://wearcam.org/ece516/
Our Spaceglasses product won "Best Heads-Up Display" at CES2014.
Our InteraXon product was said to be
the most important wearable of 2014.
Assignment 2 for 2015:
Make a double-exposure picture.
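One possible digital approach, sketched below in Python, is to combine two frames (a sketch under assumptions: numpy and PIL are available, and the file names frame1.jpg and frame2.jpg are placeholders, not course-supplied files):

    # Minimal sketch of a digital double exposure; file names are placeholders.
    import numpy as np
    from PIL import Image

    a = np.asarray(Image.open("frame1.jpg"), dtype=np.float64)
    b = np.asarray(Image.open("frame2.jpg"), dtype=np.float64)

    # Averaging pixel values only approximates a film double exposure;
    # summing linearized quantities of light would be closer in spirit.
    out = np.clip((a + b) / 2.0, 0, 255).astype(np.uint8)
    Image.fromarray(out).save("double_exposure.jpg")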
Assignment 3 for 2015:
Make a 1-pixel camera, as described in more detail in the lecture of Monday
2015feb02.
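As a conceptual illustration only (not build instructions), the Python sketch below simulates what makes a 1-pixel camera a camera: one photosensor, read many times while scanned across a scene, builds up an image one sample at a time. The scene array and read_sensor helper are hypothetical stand-ins for the real hardware.

    # Simulate a 1-pixel camera: raster-scan a single-pixel sensor.
    import numpy as np

    scene = np.random.rand(32, 32)        # stand-in for the real world

    def read_sensor(scene, row, col):
        """One simulated photocell reading, aimed at (row, col)."""
        return scene[row, col]

    image = np.zeros_like(scene)
    for r in range(scene.shape[0]):       # one reading per pixel
        for c in range(scene.shape[1]):
            image[r, c] = read_sensor(scene, r, c)

    assert np.allclose(image, scene)      # the scan reconstructs the scene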
Assignment 4 for 2015:
Calibrate your 1-pixel camera as described in lecture Monday 2015feb09.
See "Photocell experiment" below.
In particular, create a comparagraph of f(kq) vs. f(q),
with well-labelled axes, data points, and rigorously defined variables.
Take one data set while trying to exactly double or halve the quantity of light,
and graph it. Take and graph another data set while changing the quantity of
light by steps in a different ratio. How would you fit a function to this
relationship? Is it possible to figure out the original function f(q) vs. q?
What does this represent?
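One illustrative way to approach the fitting question, as a sketch only (it assumes a hypothetical power-law response f(q) = q**gamma, which may not match your photocell):

    # For f(q) = q**gamma, f(kq) = k**gamma * f(q), so the comparagraph
    # f(kq) vs. f(q) is a line through the origin with slope k**gamma;
    # fitting the slope recovers gamma.  All values here are synthetic.
    import numpy as np

    gamma, k = 0.45, 2.0                  # hypothetical exponent, exposure ratio
    q = np.linspace(0.05, 0.5, 20)        # quantities of light
    f_q, f_kq = q**gamma, (k*q)**gamma    # simulated photocell readings

    slope = np.dot(f_q, f_kq) / np.dot(f_q, f_q)   # least-squares line through origin
    gamma_est = np.log(slope) / np.log(k)
    print(gamma_est)                      # recovers gamma = 0.45

In general the comparagraph constrains f(q) but does not uniquely determine it; recovering f(q) vs. q requires further assumptions such as smoothness or a parametric form.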
Bonus marks for doing this with AC (alternating current) signals and
quadrature detection (e.g. building an oscillator and detector circuit).
See University of Colorado, Physics 3340 for example.
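The core of quadrature detection can be sketched in a few lines of Python (a simulation only; the frequency, amplitude, and noise level are arbitrary assumptions, not the parameters of the SR510 or the Colorado lab):

    # Quadrature (lock-in) detection: recover the amplitude and phase of
    # a weak AC signal buried in noise, using two reference waveforms.
    import numpy as np

    fs, f0 = 10_000.0, 137.0              # sample rate and reference frequency (Hz)
    t = np.arange(0, 1.0, 1.0/fs)
    x = 0.01*np.sin(2*np.pi*f0*t + 0.3) + np.random.normal(0, 0.1, t.size)

    # Multiply by in-phase and quadrature references, then low-pass (mean).
    I = 2.0 * np.mean(x * np.sin(2*np.pi*f0*t))
    Q = 2.0 * np.mean(x * np.cos(2*np.pi*f0*t))

    print(np.hypot(I, Q), np.arctan2(Q, I))   # roughly 0.01 and 0.3 rad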
Bonus marks still available for feedbackography, but this time, let's
"raise the bar" a bit (in fairness to those who got this working last time)
and get the feedback extending over a further range (e.g. greater distance
from the camera with a good visible image).
See an example here; there is also some info on making
animations
in .gif images.
2015 exam schedule:
for the latest, check http://www.apsc.utoronto.ca/timetable/fes.aspx
The final exam schedule is usually announced on that website
toward the end of the term.
Labs were organized according to these six units (the first unit on
KEYER, etc., includes more than one lab, because there is some intro
material, background, getting started, etc.).
Organization of the course usually follows the six chapters in the
course TEXTBOOK, but if you are interested in
other material please bring this to the attention of the course instructor
or TA and we'll try to incorporate your interests into the course design.
The course focuses on the future of computing and
what will become the most important
aspects of truly personal computation and communication.
We are rapidly witnessing a merging of communications devices
(such as portable telephones) with computational devices (personal
organizers, personal computers, etc.).
The focus of this course is on the specific and fundamental aspects of
visual interfaces that will have the greatest relevance and impact,
namely the notion of a computationally mediated reality,
as well as related topics such as Digital Eye Glass,
brain-computer interfaces (BCI), etc.,
as explored in collaboration with some of our startups, such as
Meta, and
InteraXon,
a spinoff company started by former
students from this course.
A computationally mediated reality is a natural extension
of next-generation computing.
In particular, we have witnessed a pivotal shift from mainframe computers
to the personal/personalizable computers owned and operated by individual
end users. We have also witnessed a fundamental change in the nature of
computing from large mathematical calculations, to the use of computers
primarily as a communications medium. The explosive growth of the
Internet, and more recently, the World Wide Web, is a harbinger
of what will evolve into a completely computer-mediated world in
which all aspects of life, not just cyberspace, will be online and
connected by visually based content and visual reality user interfaces.
This transformation in the way we think and communicate will not
be the result of so-called ubiquitous computing
(microprocessors in everything around us).
Instead of the current vision of
"smart floors", "smart lightswitches", and "smart toilets"
in "smart buildings"
that watch us and respond to our actions,
what we will witness is the emergence of "smart people":
intelligence attached to people, not just to buildings.
This gives rise to what
Ray Kurzweil (Director of Engineering at Google),
Marvin Minsky (Father of Artificial Intelligence), and I
refer to as the "Sensory Singularity".
And this will be done,
not by implanting devices into the brain, but, rather,
simply by non-invasively "tapping" the highest-bandwidth "pipe"
into the brain, namely the eye. This so-called "eye tap" forms the
basis for devices that are currently built into eyeglasses
(prototypes are also being built into contact lenses) to tap into the
mind's eye.
EyeTap technology causes inanimate objects to suddenly come to life
as nodes on a virtual computer network. For example, while
walking past an old building, the building may come to life with
hyperlinks on its surface, even though the building is not wired
for network connections in any way. These hyperlinks are merely a
shared imagined reality that wearers of the EyeTap technology
simultaneously experience.
When entering a grocery store, a milk carton may come to life,
with a unique message from a spouse, reminding the wearer of the
EyeTap technology to pick up some milk on the way home from work.
EyeTap technology is not merely about a computer screen inside
eyeglasses, but, rather, it's about enabling what is, in effect,
a shared telepathic experience connecting multiple individuals together
in a collective consciousness.
EyeTap technology will have many commercial applications, and emerge
as one of the most industrially relevant forms of communications
technology.
The WearTel (TM) phone, for example, uses EyeTap technology to allow
individuals to see each other's point of view.
Traditional videoconferencing
merely provides a picture of the other person.
But most of the time we call
people we already know, so it is far more useful for us to exchange
points of view. Therefore, the miniature laser light source inside the
WearTel eyeglass-based phone scans across the retinas of both parties
and swaps the image information, so that each person sees what the other
person is looking at. The WearTel phone, in effect, lets someone
"be you"
rather than just "see you". By letting others put themselves
in your shoes and see the world from your point of view, a very powerful
communications medium results.
The course includes iPhone and Android phone technologies, and
eyeglass-based "eyePhone" hybrids.
Each year we modify the schedule to keep current with the latest research
as well as with the interests of the participants in the course. If you have
anything you find particularly interesting, let us know and we'll consider
working it into the schedule...
Another point of view is
here.
Choose one of:
Lab 1 results:
This display device could function with a wide range of different kinds of
wearable computing devices, such as your portable music player.
Today there were two really cool projects that deserve mention in the
ECE516 Hall of Fame:
No written or otherwise recorded report is required.
However, if you choose to write or record some form of report or
other support material, it need not be of a formal nature, but
you must, of course, abide by good standards of academic conduct,
e.g. any published or submitted material must:
It is expected that all students will have read and agreed to the terms
of proper academic conduct. This usually happens and is introduced
in first year, but for anyone who happens to have missed it in earlier years,
it's here:
How Not to Plagiarize.
It's written mainly to apply to writing, but the ethical concept is
equally applicable to presentations, ideas, and any other representation
of work, research, or the like.
There are six modules (corresponding to Chapters 1 to 6 of the textbook),
plus a brief introduction, Module 0.
Humanistic Intelligence, Mann 1998.
Wearable Computing and IoT (Internet of Things);
The scalespace theory;
sur/sousveillance;
integrity; VeillanceContract;
Humanistic Intelligence;
MedialityAxis.
Overview of Mobile and Wearable Computing, Augmented Reality, and
Internet of Things.
Mann's IEEE T&S 2013 SMARTWORLD (Wearables + IoT + AR alliance).
The fundamental axes of the Wearables + IoT + AR space, Mann 2001.
Wearcam; SmartWatch; 6ense; ARhistory; MedialityAxis;
EyeBorg implant;
SWIM for phenomenaugmented reality.
Module 2 Personal Imaging: Historical overview:
1943, McCollum: Cathode ray tubes in an eyeglass frame
1961, Philco: HMD for remote video surveillance
1968, Ivan Sutherland: Computer graphics and HMD
1970s, M. Krueger: video projection system (non-portable, fixed to walls, etc.)
1970s and 1980s: Mann: Mobile, Wearable Wireless Free-roaming AR:
Wearable Computing, Wireless, Sensing, and Metasensing with light bulbs
Phenomenal Augmented Reality: Real world physical phenomena as the fundamental
basis of mobile and wearable AR.
The Silicon Switch: http://www.google.ee/patents/US3452215
The PHENOMENAmplifier and the S.W.I.M. (Mann 1974)
Lutron Capri, Joel S Spira US3032688 Publication 1 May 1962 Filing 15 Jul 1959
triac dimmer: 1966, Eugene Alessio, US3452215
Philco HMD "Headsight": Comeau and Bryan, employees of Philco Corporation, constructed the first actual head-mounted display in 1961 (the theory of an HMD dates back to 1956).
Sensorama;
mobile and wearable computing;
smartwatches;
contact lens displays;
history of the triac light dimmer.
The Mediality Axis.
Theory of Glass.
Fundamentals of sensing for Wearables + IoT + AR:
The 1-pixel camera: making a 1-pixel camera;
Phenomenaugmented Reality with the 1-pixel camera.
Metasensing as HI: a case study of Confidence Maps in gesture-based wearable computing.
Comparametric Equations and Quantum Field Theory, physics (quantum and classical).
Lightfields, Lytro, dual spaces, nLux, etc.
Orbits, SLAM, etc.
See a review of our product by Laptop Magazine:
http://blog.laptopmag.com/meta-pro-glasses-hands-on
Each year we create totally new material to keep the course up-to-date,
and we also adapt the course to student interests each year. If there's
something you want covered in this course, be sure to let us know!
Assignments from previous years
(each one was due at the beginning of lab period):
Assignment 1 for 2015:
Fizzveillance Challenge.
We also have some wave analyzers as well as the
SR510 lock-in amplifier available.
Inventrepreneurship:
S. Mann's role as the Chief Scientist at
Rotman School of Management's Creative Destruction Lab brings us
a series
of inventions we can learn from and work with; ask Prof. Mann for the URL.
(More examples of the instructor's contributions)
ECE516 (formerly known as ECE1766), since 1998 (2015 is its 18th year)
Teaching assistants:
Schedule for January 2015:
Hour-long lectures starting Mon 5pm, Wed. 5pm, and Thursday 3pm in
WB119 (Wallberg Building, Room 119)
Lab: Fri. 12noon - 3pm, in BA3135 or EA302 or an alternate location, depending on the subject of the lab
Important dates:
Each year this course is taught, times can be verified from the official schedule at: APSC 2010 Winter Undergraduate Timetable (this URL seems to have remained constant for a number of years now).
2014's exam was:
ECE516H1 Intelligent Image Processing
Type: X; Date: Apr 29, 2014;
Time: 09:30 AM;
Room: BA-2175
Course structure from previous years; it will be customized to meet the interests of those enrolled each year. For 2015:
Each year I restructure the course in order to match the interests of the
students enrolled, as well as to capture opportunities of new developments.
Links to some useful materials
"Manoel lives in California with his wife and children.
He admires Dr. Steve Mann,
who is considered the real father of wearable computers,
and David Rolfe, a notable Assembler programmer who created classic
arcade games in the 1980s.", Page xix
Course "roadmap" by lab units and corresponding book chapters:
PDF;
PostScript (idraw)
Location of the course textbook in the University of Toronto Bookstore:
Kevin reported as follows:
I just stopped by the UofT Bookstore, and to help the rest of the
students, I thought you could announce that the book is located in the
engineering aisle, and exactly to the left of the bookstore computer
terminal behind some Investment Science books.
Course summary:
The course provides the student with the fundamental knowledge needed
in the rapidly growing field of Personal Cybernetics
("minds and machines", e.g. mind-machine interfaces, etc.)
and Personal Intelligent Image Processing. These topics are
often referred
to colloquially as "Wearable Computing", "Personal Technologies",
"Mobile Multimedia", etc.
Text:
Organization of the textbook
The course will follow the textbook very closely; the textbook is organized into
these six chapters:
Collinearity criterion:
Other supplemental material
Lecture, lab, and tutorial schedule from previous years
Here is an example schedule from a previous year the course was taught.
Humanistic User Interfaces, e.g. "LiqUIface" and other novel
inputs that have the human being in the feedback loop of a
computational process.
Course Evaluation (grading):
This course was originally offered as ECE1766; you can see the previous
version (origins of the course) at
http://wearcam.org/ece1766.htm
for info from previous years.
Resources and info:
Supplemental material:
Above: one of our neckworn sensory cameras, designed and built
at University of Toronto, 1998, which later formed the basis
for Microsoft's SenseCam.
CyborGLOG of Lectures from previous year
(...on sabbatical 2009, so last year the course was not offered. Therefore,
the most up-to-date previous course CyborGLOG was from 2008.)
My eyeglasses were recently damaged when I fell into a live
three-phase power distribution station that had, for some strange
reason, been set up on a public sidewalk by a film production company.
As a result, my eyeglasses are not working too well. Here is
a poor quality but still somewhat useful (understandable) capture of the
lecture
as transmitted live (archive) (please forgive the poor eyesight
resulting from temporary replacement eyewear).
CyborGLOG of Labs
Other background readings:
Christina Mann's fun guide: How to fix things,
drill holes, install binding posts, and solder wires to terminals
Material from year 2007:
Lab 2007-0: Demonstration of an analog keyboard
Example of an analog keyboard; a continuous, fluidly varying input space:
Lab 2007-1, Chapter 1 of textbook: Humanistic Intelligence
In lab 1 you will demonstrate your understanding of Humanistic Intelligence,
either by making a keyer, or by programming an existing keyer,
so that you can learn the overall concept.
Ideally we would have at least one person doing each part of this project
so that we can put a group together for the entire result (keyer).
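To make the chording concept concrete, here is a minimal Python sketch (the chord table is hypothetical, not the mapping used in the course or by any particular keyer):

    # A chording keyer maps combinations of simultaneously pressed
    # switches to symbols; five switches give 2**5 - 1 = 31 chords.
    CHORDS = {
        0b00001: 'a',
        0b00010: 'e',
        0b00011: 't',   # two switches pressed together yield a new symbol
        0b00100: 'o',
        0b00101: 'n',
    }

    def decode(switch_states):
        """switch_states: iterable of 0/1, one per switch, most significant first."""
        chord = 0
        for bit in switch_states:
            chord = (chord << 1) | bit
        return CHORDS.get(chord, '?')

    print(decode([0, 0, 0, 1, 1]))  # -> 't'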
The OKI Melody 2870A spec sheet
is here.
Lab 2007-2, Chapter 2 of textbook: Eyeglass-based display device
In this lab we will build a simple eyeglass-based display device,
having a limited number of pixels, in order to understand the concept
of eyeglass-based displays and viewfinders.
Lab 2007-3, Chapter 3 of textbook: EyeTap (will also give intro to photocell)
Presentation by James Fung:
Lab 2007-4, Chapter 4 of textbook: Photocell experiment
The photocell experiment, and a recent publication describing it.
David's comparametric analysis and CEMENTing of telescope images:
Lab 2007-5, Chapter 5 of textbook: Lightvectors
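The central idea of this lab can be sketched as follows (a sketch that assumes images already linearized to photoquantities and saved as hypothetical lightvector*.npy files): each lightvector is the image of the scene due to a single light source, and in linear light space lightvectors superpose, so relighting is a weighted sum.

    # Lightvector superposition: linearized single-source images add
    # linearly, so a weighted sum relights the scene.
    import numpy as np

    lightvectors = [np.load("lightvector%d.npy" % i) for i in range(3)]
    weights = [0.5, 1.0, 2.0]             # chosen relighting weights

    relit = sum(w * lv for w, lv in zip(weights, lightvectors))
    np.save("relit.npy", relit)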
Lab 2007-6 and 7
Final projects:
something of your choosing, to show what you've learned so far.
If you choose not to provide a written report, but only to demonstrate
(verbal report, etc.), in the lab, you still need to state your
source and collaboration material.
Year 2006 info:
Keyer evaluation is posted:
Lab 2
EyeTap lab: Explanation of how EyeTap works;
demonstration of EyeTap; demonstration of OpenVIDIA.
C.E.M.E.N.T. lab
Comparametrics lab: Undo the damage done by the Ellipses of Evil
on the Axes of Good: