
1st ACM Workshop on
Continuous Archival and Retrieval of Personal Experiences

New York, New York, October 15, 2004

in conjunction with ACM Multimedia 2004

Workshop Scope

Human beings have always been interested in personal media capture to sample and archive their experiences. The technology supporting this endeavor has progressed from diaries and paintings, through pocket cameras, to the current era in which capture is digital, and sound and image recording can be supplemented with data such as temperature, heart rate, location, web pages visited, compute/device usage logs, etc.

A large proportion of multimedia research has focused on the representation, archival, and transmission of media related to isolated events – a single image or group of images from a party, a video of a graduation ceremony, etc. This workshop will focus on an emerging area of research that deals with the continuous archival and retrieval of all media relating to personal experiences. The continuous archival paradigm fundamentally alters our relationship to biological memory, since analysis of such media powerfully augments human memory.

Personal storage of all one's media throughout a lifetime has been desired and discussed since at least 1945, when Vannevar Bush published As We May Think, positing the “Memex” device “in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility.” His vision was astonishingly broad for the time, including full-text search, annotations, hyperlinks, virtually unlimited storage, and even stereo cameras mounted on eyeglasses. In 2004, storage, sensor, and computing technologies have progressed to the point of making Memex feasible and even affordable. Indeed, we can now look beyond Memex at new possibilities.

Research is required to advance the capture, management, and usage of continuously archived data. This workshop aims to bring together researchers from around the world to share their findings and insights into this burgeoning field. The workshop will be limited to 30 participants to ensure lively interaction. Approximately nine papers will be presented, with preference given to papers that are multimedia and/or multidisciplinary. The program will include eight demonstrations, a panel discussion, and two invited speeches.


We invite regular and position papers as well as demonstrations (accompanied by descriptive papers) on relevant topics, including:

  • Capture/sensors (e.g., scanning, wearable, embedded, different kinds of sensors, robotic assistance), experiential sampling
  • Data storage, management, organization, and retrieval
  • Insight: content analysis and data mining
  • User interface issues, including visualization, authoring, storytelling, and annotation
  • Applications (e.g., personal museum, health support, childcare, research tools, meeting capture)
  • Security, privacy, and legal issues


Organizers
Jim Gemmell, Microsoft Research
Hari Sundaram, Arizona State U.

Program Committee
Kiyoharu Aizawa, U. Tokyo
Shih-Fu Chang, Columbia University
Mike Christel, Carnegie Mellon U.
Steven Drucker, Microsoft Research
Ramesh Jain, Georgia Tech
Steve Mann, U. Toronto
Kai Li, Princeton University
Kenji Mase, Nagoya U./ATR
Alex Pentland, MIT Media Lab
Dennis Quan, IBM Research
Jun Rekimoto, Sony Laboratories
Ehud Reiter, U. Aberdeen
Cyrus Shahabi, U. Southern California
Ben Shneiderman, U. Maryland
John Smith, IBM Research
Kentaro Toyama, Microsoft Research
Matthew Turk, UC Santa Barbara
Ken Wood, Microsoft Research
Hong-Jiang Zhang, Microsoft Research

Invited Speakers
Steve Mann, U. Toronto
Gordon Bell, Microsoft Research

Important Dates
Jul 5   Submission deadline
Jul 28  Acceptance notification
Aug 6   Camera-ready deadline
(all deadlines 5:00 PM PDT)

Applicants must submit papers, including name and contact information, via email to Jim Gemmell. Technical papers may be up to 16 pages in length, while demonstration papers may be up to 6 pages. All papers must follow standard ACM style guidelines and must be submitted in PDF format.

Web Site