CALL FOR PAPERS


***NOTE THE EXTENDED DEADLINE AT END***

TWO-VOLUME SPECIAL ISSUE ON MEDIATED REALITY

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION

Co-editors: Steve Mann (University of Toronto) and Woodrow Barfield (Virginia Tech)

Personal Cybernetics and Humanistic Intelligence are new and rapidly growing fields of research in the area of human-computer interaction. These fields involve personal wearable imaging devices whose intelligence arises from the presence of a human user in the feedback loop of a computational process, in which the human user and the computational process are inextricably intertwined. Unlike Artificial Intelligence (AI), whose typical goal is to emulate human intelligence with computers, Humanistic Intelligence (HI) creates a close synergy in which Intelligent Signal Processing is used to harness the processing power of the human brain. HI gives rise to a symbiosis between human and computer in which each uses the other within a closely coupled signal-processing feedback loop: the computer performs basic low-level signal processing functions, using data obtained from a first-person perspective (wearable camera, microphones, miniature wearable radar, biosensors, etc.), while the human performs the high-level cognitive tasks.

Personal Cybernetics and Humanistic Intelligence form a basis for augmenting, deliberately diminishing, or otherwise altering the visual perception of reality. Although the visual modality is the one most often used in mediated reality systems built on current technology, other modalities such as touch, taste, and olfaction may be mediated as well. In the visual domain, a system that can augment, diminish, or otherwise alter the visual perception of reality is called a "Reality Mediator". Reality Mediators are useful, for example, in applications involving the visually challenged, where they simplify the visual information presented to the wearer. Mediated Reality may also serve as a framework for filtering out real-world spam (advertising billboards, etc.) and for allowing individuals to communicate with one another by altering each other's perception of reality. For example, a wearer of the apparatus may be shopping at the grocery store while a remote spouse or friend views the transmitted video in a stabilized coordinate system and draws directly on the retina of the wearer of the glasses using a "directed laser beam". In this way a remote individual can collaborate with the wearer of the apparatus in everyday experiences such as shopping for a new car or sightseeing.

In addition to the current directions of research in HI (e.g. Personal Imaging, and the field of Personal Technologies in general), the Two-Volume Special Issue will also include papers in the field of rehabilitation medicine. In particular, prosthetic devices that improve the quality of everyday life of the visually challenged, whether at work, at play, or just walking down the street, will be an important part of this new research direction.
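To make the division of labor concrete, such a closed loop might be sketched as follows (a minimal illustrative sketch in Python; the camera, detector, and display functions are hypothetical stubs, not real device APIs):

    import numpy as np

    def capture_frame():
        """Hypothetical stub for a first-person wearable camera."""
        return np.zeros((480, 640, 3), dtype=np.uint8)

    def find_regions_to_diminish(frame):
        """Hypothetical stub for a low-level detector that flags
        real-world spam such as advertising billboards."""
        return []  # list of (x, y, width, height) tuples

    def mediate(frame, regions, overlay=None):
        """Low-level mediation: diminish flagged regions, then composite
        any annotations supplied by a remote collaborator."""
        out = frame.copy()
        for (x, y, w, h) in regions:
            out[y:y + h, x:x + w] = 0       # diminish: blank the region
        if overlay is not None:
            out = np.maximum(out, overlay)  # augment: remote annotations
        return out

    def display_to_wearer(frame):
        """Hypothetical stub for the wearable display."""

    def run_loop(num_frames=1):
        # The computer performs only the low-level signal processing; the
        # high-level cognition is supplied by the wearer, whose actions in
        # the world change what the camera captures on the next iteration.
        for _ in range(num_frames):
            frame = capture_frame()
            display_to_wearer(mediate(frame, find_regions_to_diminish(frame)))

    run_loop()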

QUALITY OF LIFE FOCUS:

A particular goal of the Two-Volume Special Issue will be to focus on personal devices for ordinary people to use in their everyday lives. We encourage the submission of papers that look beyond increasing the productivity and obedience of employees in the workplace, and instead consider improving the quality of all aspects of life, not just work.

SUMMARY OF TWO-VOLUME SPECIAL ISSUE ON MEDIATED REALITY

Mediated Reality is at the intersection of four related fields:
(1) Telephony, wireless communications, videoconferencing, etc.;
(2) Photography/Videography, electronic newsgathering (ENG), etc.;
(3) Visual Science, e.g. Optometry, visual aids, night vision systems; and
(4) Human-Computer Interaction (HCI).
The proposed Two-Volume Special Issue will comprise papers documenting high-quality research on the following topics of interest:
1. Image processing for Personal Imaging systems
2. Signal processing for Wearable Cybernetics
3. Wearable visual information processing
4. Wearable applications of image processing
5. Video-based personal safety devices for use by ordinary citizens to help
   them participate in crime reduction
6. Fusion of wearable video and other sensing modalities
7. Visual and other modality prostheses
8. Videographic/photographic memory prostheses
9. Visualization and data dissemination from personal imaging systems
10. Innovative vision-based devices and systems
11. Innovative eyewear
12. Vision aids for the blind or partially sighted
13. Night Vision Goggles (NVG) and low-light visual aids
14. Vision aids for those with visual memory or visual processing disabilities
15. Innovative wearable video display or processing technologies
16. Visual pattern recognition systems suitable for use in personal imaging
17. Computer-supported collaborative living
18. VideoOrbits image processing and algebraic projective geometry (see the
    note following this list)
19. Collaborative cybernetic photography/videography and shared visual space
20. New paradigms in photography, videography, photojournalism, and wearable
    electronic news gathering
21. Signal processing of EyeTap video signals and systems
22. Issues in user-interface design
23. Empirical studies
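A note on topic 18, for orientation: VideoOrbits performs featureless image registration, exploiting the fact that images of a static scene taken by a camera rotating about its center of projection are related by a projective coordinate transformation. Summarizing the standard planar formulation (a background sketch, not a new result):

    \[
      \mathbf{x}' = \frac{A\,\mathbf{x} + \mathbf{b}}{\mathbf{c}^{\top}\mathbf{x} + 1},
      \qquad A \in \mathbb{R}^{2 \times 2}, \quad \mathbf{b}, \mathbf{c} \in \mathbb{R}^{2},
    \]

where x and x' are corresponding image coordinates in two frames, giving eight free parameters to be estimated directly from the image data rather than from tracked features.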

SUBMISSION FORMAT:

Papers can be submitted in several formats. The preferred format is a uuencoded tarfile of LaTeX source with figures as separate files; other openly editable formats, such as cleanly written HTML (i.e. not the messy output of a converter), are acceptable, as are papers submitted in other standard formats.
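For authors preparing the preferred format, the bundle can be produced with the standard Unix tar and uuencode utilities; here is a minimal sketch using Python's standard library (file names are placeholders):

    import tarfile
    import uu  # standard-library uuencode support in older Python versions

    # Bundle the LaTeX source and figures into a single tarfile
    # ("paper.tex" and "figures" are placeholder names).
    with tarfile.open("paper.tar", "w") as tar:
        tar.add("paper.tex")
        tar.add("figures")

    # Uuencode the tarfile so it survives transmission in an email body.
    uu.encode("paper.tar", "paper.tar.uu")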

SUBMIT PAPERS ELECTRONICALLY TO:

Steve Mann
University of Toronto
Department of Electrical Engineering, Room S.F. 2001,
10 King's College Road; Toronto, Ontario, Canada; M5S 3G4
Tel. 416.946-3387
Fax. 416.971-2326
mann@eecg.toronto.edu

Woodrow Barfield
250 New Engineering Building
Industrial and Systems Engineering
Virginia Tech
Blacksburg, VA 24061-0118
Tel. 540.231-2547
Fax. 540.231-3322
barfield@vt.edu
For updates, etc., see: http://wearcam.org/ijhci_cfp.htm

TIMELINE:

If the following deadlines are adhered to (early submission is encouraged), the IJHCI can provide a fast turn-around for the Two-Volume Special Issue.
Papers submitted by November 30, 1999.
Final papers by February 15, 2000.