ECE516: Intelligent Image Processing

Schedule Labs Philosophy Opportunities 20-Year ECE516 History

20 Years of ECE516 History

Historical throwback to ECE516 in the year 2018:

Every year we totally redesign the course to keep up with current trends and current events as well as to keep up with student interests and pivot toward what each year's students are most passionate about. Therefore previous years' course material should not be seen as limiting what we will learn this year! You can see some material from previous years' ECE516 below, e.g. from a couple of years prior to the pandemic:

Wearable Computing, AI/HI (Humanistic Intelligence), HuMachine Learning, IoTTT (Internet of Things That Think), AR/VR (Augmediated/Virtual Reality), and Extended Intelligence

The Birth of Wearable Computing: 41 years of Augmented Reality. (Left-to-right): Steve Mann, age 12, Sequential Wave Imprinting Machine (SWIM) in 1974; Stephanie Mann, Age 9, Robot for visualizing ElectroMagnetic Wave Propagation; Jayse Hansen, Hollywood's #1 UI designer (now a Meta employee) with Meta Glass; Metasensing (visualizing vision and sensing sensors and their capacity to sense).

The intro video below, at the beginning of this TED talk, is by a student who delivered something creative as a first lab. It utilized his expertise in film to contribute to the world of Intelligent Image Processing, Augmented Reality, and Humanistic Intelligence. Likewise, you may explore the intersection of what you are good at and the world of VR/AR, IoT, Wearables, and HuMachine Learning (AI/HI).

Lab 1 (link to info) is due 2018 January 12th, at 9am, or by special arrangement (at other time) for those with conflict.

Remember the scheduled lab times in BA3155 and BA3165 are grading sessions; work should be completed prior to arrival. Also remember that you're free to propose your own labs in place of any of the standard assigned labs; just make sure to get the project approved.

Labs 1, 2, and 3 form the Introductory Trilogy (link).

Lab 4 Instructable

Lab 5

2018, Lecture 2, showing some concepts to help you understand the material in Chapter 1...

Pictures from previous year's students with Lab 2:

(including picture from Helton Chen)

See the Lab 3 Instructable (link):

Pictures from previous year's students with lab3:

2016 Lab4, opportunity to build your own EyeTap, or to explore course material on HI (HuMachine Learning):

Pictures from lab4:

Pictures from lab5 (just back from Reading Week):

Also, feel free to start right now on your final project if you like, as you can build up to it each week....
A good final project would be on Integral Kinematics and Integral Kinesiology: Absement (Absition)

Select from this list of 23 projects, or propose your own to the professor for approval.

Intro example of Augmented Reality for lecture 1: visualizing magnetic waves with a loop antenna; notice how the radio wave "sits" still ("sitting wave" rather than standing wave):

Pictures from first lecture: (link to additional pictures)
(Ceiling mount motion sensor)

Work with us over the summer: NSERC USRA (click on the NSERC USRA link):

Starts Monday Jan. 4, 2016, available to 3rd and 4th yr. undergrad or grad. students in any discipline (EngSci, ECE, MIE, Computer Science, etc.)

Slide deck for Lecture 1 course intro is here.

Instructor, S. Mann (bio)

Multidisciplinary course with opportunities beyond the classroom:

About the course: A motto we live by is IEEE's "Advancing Technology for Humanity" (IEEE is the world's largest technical society), which was also the topic of our IEEE ISTAS conference:

Course text: "Intelligent Image Processing", by S. Mann, Wiley-Interscience

Course website:

Course instructor, Prof. Steve Mann

Our Spaceglasses product won "Best Heads-Up Display" at CES2014.
See a review of our product by Laptop Magazine:

Our InteraXon product was said to be the most important wearable of 2014.

Assignments (each one is due at the beginning of lab period):

Assignment 1 for 2015: Fizzveillance Challenge.

Assignment 2 for 2015: Make a double-exposure picture.

Assignment 3 for 2015: Make a 1-pixel camera. To be described in more detail in lecture of Monday 2015feb2.

Assignment 4 for 2015: Calibrate your 1-pixel camera as described in lecture Monday 2015feb09. See "Photocell experiment" below. In particular, create a comparagraph of f(kq) vs. f(q), with well-labelled axes, data points, and rigorously defined variables. Take one data set while trying to exactly double/halve the quantity of light, and graph it. Graph another data set while changing the quantity of light by steps in a different ratio. How would you fit a function to this relationship? Is it possible to figure out the original function f(q) vs. q? What does this represent?
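As a sanity check before taking real data, the comparagraph relationship can be simulated for a hypothetical power-law ("gamma") sensor. The response f(q) = q**GAMMA and all values below are assumptions for illustration, not the actual photocell's behaviour:

```python
import numpy as np

# Hypothetical sensor response: a power-law ("gamma") camera, f(q) = q**GAMMA.
# In the lab the true f is unknown; we only observe readings at two exposures.
GAMMA = 0.45          # assumed ground truth for this simulation only
K = 2.0               # exposure ratio (doubling the quantity of light)

q = np.linspace(0.05, 1.0, 50)       # quantities of light
f_q = q**GAMMA                       # readings at exposure q
f_kq = (K * q)**GAMMA                # readings at exposure k*q

# Comparagraph: f(kq) plotted against f(q).  For a pure power law,
# f(kq) = k**gamma * f(q), a straight line through the origin.
slope = np.sum(f_kq * f_q) / np.sum(f_q * f_q)   # least squares through origin

# Recover the exponent from the fitted slope: gamma = log(slope) / log(k)
gamma_est = np.log(slope) / np.log(K)
print(f"fitted slope = {slope:.4f}, recovered gamma = {gamma_est:.4f}")
```

For a pure power-law sensor the comparagraph is a line through the origin with slope k**gamma, so the exponent follows from the fitted slope; f(q) itself, however, is only determined up to an unknown scaling of q, which is one way to think about the last question above.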

Bonus marks for doing this with AC (alternating current) signals and quadrature detection (e.g. building an oscillator and detector circuit). See University of Colorado, Physics 3340 for example.
We also have some wave analyzers as well as the SR510 lock-in amplifier available.

Bonus marks still available for feedbackography, but this time, let's "raise the bar" a bit (in fairness to those who got this working last time) and get the feedback extending over a further range (e.g. greater distance from the camera with a good visible image). See an example here, and also here's some info on animations in .gif images.


S. Mann's role as the Chief Scientist at Rotman School of Management's Creative Destruction Lab brings us a series of inventions we can learn from and work with; ask Prof. Mann for the web link URL.

(Click for more examples of instructor's contributions)

ECE516 (formerly known as ECE1766), since 1998 (2018 is its 21st year)

Teaching assistants:

Schedule for January 2018:

One-hour lectures on Monday, Tuesday, and Thursday:
Lec 0101 MONDAY   16:00-17:00 GB 120 
Lec 0101 TUESDAY  12:00-13:00 GB 120 
Lec 0101 THURSDAY 16:00-17:00 GB 120 
Pra 0101 FRIDAY   09:00-12:00 BA 3165 and BA 3155 
Three office hours/week: the hour immediately following each lecture.
Office hours are Mon. 5pm, Tuesday 1pm, and Thursday 5pm, in EA302, or in the classroom if available; privately scheduled meetings are in Prof. Mann's office, SF2001.

Lab: Mon. 3pm to 6pm, BA3155 and BA3165, or EA302, or an alternate location depending on the subject of the lab. Important dates for 2015 (to be updated for 2016): each year this course is taught, times can be verified from the official schedule at the APSC 2010 Winter Undergraduate Timetable (this URL seems to have remained constant for a number of years now).

Exam schedule subject to change; for the latest, check
As an example of a typical exam time and type, in a previous year, the exam was:

ECE516H1 Intelligent Image Processing 
Type:X, Date: Apr 29, 2014 
Time:09:30 AM

Course structure of previous years; will be customized to meet the interests of those enrolled this year, 2015:

Each year I restructure the course in order to match the interests of the students enrolled, as well as to capture opportunities of new developments.

Final exam schedule is usually announced on this website: here, toward the end of the term.

Links to some useful materials

An example of a previous year's course "roadmap" by lab units and corresponding book chapters:

PDF; PostScript (idraw)

Labs were organized according to these six units (the first unit on KEYER, etc., includes more than one lab, because there is some intro material, background, getting started, etc.).

Organization of the course usually follows the six chapters in the course TEXTBOOK, but if you are interested in other material please bring this to the attention of the course instructor or TA and we'll try and incorporate your interests into the course design.

Location of the course textbook in the University of Toronto Bookstore:

Kevin reported as follows: "I just stopped by the UofT Bookstore, and to help the rest of the students, I thought you could announce that the book is located in the engineering aisle, and exactly to the left of the bookstore computer terminal behind some Investment Science books."

Course summary:

ECE516 is aimed primarily at third- and fourth-year undergraduates and first-year graduate students. Fourth-year undergraduates often take this course as their "other technical elective" (fourth-year elective). The course consists of lectures and labs (labs have both a tutorial component and a grading component, etc.) starting in January, along with a final exam in April.

The course provides the student with the fundamental knowledge needed in the rapidly growing field of Personal Cybernetics ("minds and machines", e.g. mind-machine interfaces, etc.) and Personal Intelligent Image Processing. These topics are often referred to colloquially as "Wearable Computing", "Personal Technologies", "Mobile Multimedia", etc.

The course focuses on the future of computing and what will become the most important aspects of truly personal computation and communication. Very quickly we are witnessing a merging of communications devices (such as portable telephones) with computational devices (personal organizers, personal computers, etc.).

The focus of this course is on the specific and fundamental aspects of visual interfaces that will have the greatest relevance and impact, namely the notion of a computationally mediated reality, as well as related topics such as Digital Eye Glass and brain-computer interfaces (BCI), as explored in collaboration with some of our startups, such as Meta, and InteraXon, a spinoff company started by former students from this course.

A computationally mediated reality is a natural extension of next-generation computing. In particular, we have witnessed a pivotal shift from mainframe computers to the personal/personalizable computers owned and operated by individual end users. We have also witnessed a fundamental change in the nature of computing, from large mathematical calculations to the use of computers primarily as a communications medium. The explosive growth of the Internet, and more recently the World Wide Web, is a harbinger of what will evolve into a completely computer-mediated world in which all aspects of life, not just cyberspace, will be online and connected by visually based content and visual reality user interfaces.

This transformation in the way we think and communicate will not be the result of so-called ubiquitous computing (microprocessors in everything around us). Instead of the current vision of "smart floors", "smart lightswitches", and "smart toilets" in "smart buildings" that watch us and respond to our actions, what we will witness is the emergence of "smart people": intelligence attached to people, not just to buildings.

This gives rise to what Ray Kurzweil (Director of Engineering at Google), Marvin Minsky (father of Artificial Intelligence), and I refer to as the "Sensory Singularity".

And this will be done not by implanting devices into the brain, but rather simply by non-invasively "tapping" the highest-bandwidth "pipe" into the brain, namely the eye. This so-called "eye tap" forms the basis for devices that are currently built into eyeglasses (prototypes are also being built into contact lenses) to tap into the mind's eye.

EyeTap technology causes inanimate objects to suddenly come to life as nodes on a virtual computer network. For example, while walking past an old building, the building may come to life with hyperlinks on its surface, even though the building is not wired for network connections in any way. These hyperlinks are merely a shared imagined reality that wearers of the EyeTap technology simultaneously experience. When entering a grocery store, a milk carton may come to life with a unique message from a spouse, reminding the wearer of the EyeTap technology to pick up some milk on the way home from work.

EyeTap technology is not merely about a computer screen inside eyeglasses, but, rather, it's about enabling what is, in effect, a shared telepathic experience connecting multiple individuals together in a collective consciousness.

EyeTap technology will have many commercial applications and will emerge as one of the most industrially relevant forms of communications technology. The WearTel (TM) phone, for example, uses EyeTap technology to allow individuals to see each other's point of view. Traditional videoconferencing merely provides a picture of the other person. But most of the time we call people we already know, so it is far more useful for us to exchange points of view. Therefore, the miniature laser light source inside the WearTel eyeglass-based phone scans across the retinas of both parties and swaps the image information, so that each person sees what the other person is looking at. The WearTel phone, in effect, lets someone "be you", rather than just "see you". By letting others put themselves in your shoes and see the world from your point of view, a very powerful communications medium results.

The course includes iPhone and Android phone technologies, and eyeglass-based "eyePhone" hybrids.


Organization of the textbook

The course will follow the textbook very closely; it is organized into these six chapters:
  1. Personal Cybernetics: The first chapter introduces the general ideas of "Wearable Computing", personal technologies, etc.
  2. Personal Imaging: cameras getting smaller and easier to carry; wearing the camera (the instructor's fully functioning XF86 GNUX wristwatch videoconferencing system); wearing the camera in an "always ready" state.
  3. Mediated Reality and the EyeTap Principle.
      Collinearity criterion:
    • The laser EyeTap camera: tapping the mind's eye: infinite depth of focus
    • Contact lens displays, blurry information displays, and vitrionic displays
  4. Comparametric Equations, Photoquantigraphic Imaging, and comparagraphics.
  5. Lightspace:
    • lightvector spaces and anti-homomorphic imaging
    • application of personal imaging to the visual arts
  6. VideoOrbits and algebraic projective geometry; Computer-Mediated Reality in the real world; Reality Window Manager (RWM).

Other supplemental material

  1. Chording keyer (input device) for wearable/portable computing or personal multimedia environment
  2. fluid user interfaces
  3. previously published paper on fluid user interfaces
  4. University of Ottawa: Cyborg Law course. See also the University of Ottawa site, and an article on legal and philosophical aspects of Intelligent Image Processing
  5. photocell experiment
  6. "Recording 'Lightspace' so shadows and highlights vary with varying viewing illumination", Optics Letters, Vol. 20, Iss. 4, 1995 ("margoloh")
  7. Example from previous year's work: data from final lab, year 2005: lightvectors and lightspace (See readme.txt file).
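As item 1 in the list above suggests, a chording keyer decodes combinations of simultaneously pressed keys into characters. A minimal Python sketch of the idea follows; the chord-to-character table is hypothetical, not the actual ECE516 keyer layout:

```python
# Minimal sketch of a chording keyer decoder.  The chord table below is
# hypothetical, for illustration only -- not the real ECE516 keyer layout.
CHORDS = {
    frozenset({1}): "e",
    frozenset({2}): "t",
    frozenset({1, 2}): "a",
    frozenset({1, 3}): "o",
    frozenset({2, 3}): "n",
    frozenset({1, 2, 3}): " ",
}

def decode(pressed):
    """Map the set of simultaneously pressed keys to a character."""
    return CHORDS.get(frozenset(pressed), "?")   # "?" for unknown chords

# A sequence of chords spells out text one chord per character.
print("".join(decode(c) for c in [{2}, {1}, {2, 3}]))  # prints "ten"
```

The design point is that the full set of keys is pressed and released as one gesture, so a small one-handed device can cover a whole alphabet.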

Lecture, lab, and tutorial schedule from previous years

Here is an example schedule from a previous year the course was taught.

Each year we modify the schedule to keep current with the latest research as well as with the interests of the participants in the course. If you have anything you find particularly interesting, let us know and we'll consider working it into the schedule...

  1. Week1 (Tue. Jan. 4 and Wed. Jan. 5th): Humanistic Intelligence for Intelligent Image Processing
    Humanistic User Interfaces, e.g. "LiqUIface" and other novel inputs that have the human being in the feedback loop of a computational process.
  2. Week2: Personal Imaging; concomitant cover activity and VideoClips; Wristwatch videophone; Telepointer, metaphor-free computing, and Direct User Interfaces.
  3. Week3: Atmel AVR, handout of circuitboards for keyers, etc.; wiring instructions are now on this www site at
  4. Week4: EyeTap part1; technology that causes the eye itself to function as if it were both a camera and display; collinearity criterion; Calibration of EyeTap systems; Human factors and user studies.
  5. Week5: Eyetap part2; Blurry information displays; Laser Eyetap; Vitrionics (electronics in glass); Vitrionic contact lenses.
  6. Week6: Comparametric Equations part1.
  7. Week8: Comparametric Equations part2.
  8. Week9: Comparametric Equations part3.
  9. Week10: Lightspace and anti-homomorphic vectorspaces.
  10. Week11: VideoOrbits, part1; background.
  11. PDC intensive course may also be offered around this time.
  12. Week12: VideoOrbits, part2; Reality Window Manager (RWM); Mediated Reality; Augmented Reality in industrial applications; Visual Filters; topics for further research (graduate studies and industrial opportunities).
  13. Week13: review for final exam.
  14. Final Exam: usually scheduled sometime between mid-April and the end of April.

Course Evaluation (grading):

This course was originally offered as ECE1766; you can see the previous version (origins of the course) for info from previous years.

To see the course outline of other previous years, visit, for example,

Resources and info:

Supplemental material:

CyborGLOG of Lectures from previous year

(Instructor on sabbatical in 2009, so the course was not offered last year; therefore the most up-to-date previous course CyborGLOG is from 2008.)

CyborGLOG of Labs

Other background readings:

Christina Mann's fun guide: How to fix things, drill holes, install binding posts, and solder wires to terminals

Material from year 2007:

Lab 2007-0: Demonstration of an analog keyboard

Example of analog keyboard; continuous fluidly varying input space:

Lab 2007-1, Chapter 1 of textbook: Humanistic Intelligence

In lab 1 you will demonstrate your understanding of Humanistic Intelligence, either by making a keyer, or by programming an existing keyer, so that you can learn the overall concept.

Choose one of:

Ideally we would have at least one person doing each part of this project so that we can put a group together for the entire result (keyer).

Lab 1 results:

OKI Melody 2870A spec sheet

The OKI Melody 2870A spec sheet is here.

Lab 2007-2, Chapter 2 of textbook: Eyeglass-based display device

In this lab we will build a simple eyeglass-based display device, having a limited number of pixels, in order to understand the concept of eyeglass-based displays and viewfinders.

This display device could function with a wide range of different kinds of wearable computing devices, such as your portable music player.

Link to ECE516 Lab 2, 2007

Lab 2007-3, Chapter 3 of textbook: EyeTap (will also give intro to photocell)

Presentation by James Fung:

Lab 2007-4, Chapter 4 of textbook: Photocell experiment

photocell experiment and a recent publication describing it.

Example of linear regression
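The linear-regression step referenced above amounts to ordinary least squares. A minimal Python sketch, using made-up photocell readings (the data and variable names are illustrative only, not measured values):

```python
import numpy as np

# Illustrative photocell calibration: fit y = a*x + b to measured points.
# The data values here are invented for demonstration purposes.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # e.g. quantity of light (arb. units)
y = np.array([0.1, 1.9, 4.1, 6.0, 8.1])   # e.g. photocell reading

# Closed-form ordinary least squares for slope a and intercept b.
n = len(x)
a = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
b = (np.sum(y) - a * np.sum(x)) / n
print(f"slope a = {a:.3f}, intercept b = {b:.3f}")
```

The same fit can be obtained with `np.polyfit(x, y, 1)`; the closed form is written out here to show where the numbers come from.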

Today there were two really cool projects that deserve mention in the ECE516 Hall of Fame:
David's comparametric analysis and CEMENTing of telescope images:

Peng's tone generator:

Lab 2007-5, Chapter 5 of textbook: Lightvectors

Lab 2007-6 and 7

Final projects: something of your choosing, to show what you've learned so far.

No written or otherwise recorded report is required.

However, if you choose to write or record some form of report or other support material, it need not be of a formal nature, but you must, of course, abide by good standards of academic conduct, e.g. any published or submitted material must:

If you choose not to provide a written report, but only to demonstrate (verbal report, etc.), in the lab, you still need to state your source and collaboration material.

It is expected that all students will have read and agreed to the terms of proper academic conduct. This is usually introduced in first year, but for anyone who happens to have missed it in earlier years, it's here: How Not to Plagiarize. It's written mainly to apply to writing, but the ethical concept is equally applicable to presentations, ideas, and any other representation of work, research, or the like.

Year 2006 info:

Keyer evaluation is posted:

Lab 2
EyeTap lab: Explanation of how eyetap works; demonstration of eyetap; demonstration of OPENvidia.
C.E.M.E.N.T. lab
Comparametrics lab: Recover the damage done by the Ellipses of Evil, on the Axes of Good:

Course poster