ARTICLE SUMMARIES
Computer, Vol. 30, No. 2, February 1997


Wearable Computing: A First Step Toward Personal Imaging

Steve Mann
Massachusetts Institute of Technology, Building E15-383, Cambridge, MA 02139
Author currently with
University of Toronto,
10 King's College Road, Room 2001,
Toronto, Ontario,
Canada, M5S 3G4;
mann@eecg.toronto.edu

Miniaturization of components has enabled systems that are wearable and nearly invisible, so that individuals can move about and interact freely, supported by their personal information domain.

Can you imagine hauling around a large, light-tight wooden trunk containing a co-worker or an assistant whom you take out only for occasional, brief interaction? For each session, you would have to open the box, wake up (boot) the assistant, and afterward seal him back in the box. Human dynamics aside, wouldn't that person seem like more of a burden than a help? In some ways, today's multimedia portables are just as burdensome.

Let's imagine a new approach to computing in which the apparatus is always ready for use because it is worn like clothing. The computer screen, which also serves as a viewfinder, is visible at all times and performs multimodal computing (text and images).

With the screen moved off the lap and up to the eyes, you can simultaneously talk to someone and take notes without breaking eye contact. Miniaturized into an otherwise normal pair of eyeglasses, such an apparatus is unobtrusive and useful in business meetings and social situations.

Clothing is with us nearly all the time and thus seems like the natural way to carry our computing devices. Once personal imaging is incorporated into our wardrobe and used consistently, our computer system will share our first-person perspective and will begin to take on the role of an independent processor, much like a second brain, or a portable assistant that is no longer carted about in a box. As it "sees" the world from our perspective, the system will learn from us, even when we are not consciously using it.

Such computer assistance is not as far in the future as it might seem. Researchers were experimenting in related areas well before the late seventies, when I first became interested in wearable computing devices. Much of our progress is due to the computer industry's huge strides in miniaturization. My current wearable prototype,1 equipped with head-mounted display, cameras, and wireless communications, enables computer-assisted forms of interaction in ordinary situations (for example, while walking, shopping, or meeting people), and it is hardly noticeable.



Investigating the Influence of Formal Methods

Shari Lawrence Pfleeger
Systems/Software Inc. and Howard University

Les Hatton
Oakwood Consultancy

Formal methods promise much, but can they deliver? In this project, results are inconclusive, but careful data gathering and analysis helped establish influences on product quality.

Practitioners and researchers continue to seek methods and tools for improving software development processes and products. Candidate technologies promise increased productivity, better quality, lower cost, or enhanced customer satisfaction. But we must test these methods and tools empirically and rigorously to determine any significant, quantifiable improvement. We tend to consider evaluation only after using the technology, which makes careful, quantitative analysis difficult if not impossible. However, when an evaluation is designed as part of overall project planning, and then carried out as software development progresses, the result can be a rich record of a tool's or technique's effectiveness.

In this study, we investigated the effects of using formal methods to develop an air-traffic-control information system. Because we are studying one project in isolation, we cannot draw conclusions about the suitability of formal methods for all projects. As we describe in the sidebar "Can Formal Methods Always Deliver?" the jury is still out on when and whether formal methods improve products. Nevertheless, the lessons we learned are instructive, not only in showing how formal methods influenced code quality on this project, but also in highlighting the limitations of retrospective studies and their use in planning follow-up investigations.

We describe what we did, as well as what we could have done had the study been carried out as the software system was being developed and tested. We also show how this preliminary investigation helps to suggest hypotheses for further studies. Thus, the lessons we learned can be applied not only to gauge the effects of formal methods but also in planning similar studies of other techniques and tools.

The procedure we used was not predetermined; the results of one analysis step largely determined where we went next. Indeed, research often involves following one trail and then another, uncovering relationships and unearthing facts, until the picture begins to make sense. However, we did learn many specific lessons, which we hope will enrich future investigations.



Trends in Mobile Satellite Technology

Gary Comparetto
Rafols Ramirez
The MITRE Corporation

Demand for sophisticated personal communication services has changed communications satellite design. Satellites have moved closer to the Earth to improve communication speed and enable personal communication services. But in so doing, they require more computing resources and more sophisticated protocols to handle intersatellite communications.

Satellites have been used for decades for weather forecasting, navigation, reconnaissance, and communications, among other things. Until recently, however, constraints on power, weight, and volume have made a spacecraft's computing resources so minimal that little emphasis has been placed on satellite communications and networking techniques. Now such support is in demand for today's sophisticated personal communication services.

Increased demand for these services, coupled with advances in technology, has changed approaches to satellite design. The mindset of satellite systems designers over the past 30 years has been to keep complexity on the ground. Among other reasons, this minimizes catastrophic system failures should something happen to the satellite. Today, however, economies of scale have made possible the deployment of larger satellite constellations that can tolerate the failure of one or more satellites. Satellite orbits have also moved closer to the Earth, improving communication speed and enabling personal communication services. But in moving to lower orbits, these next-generation systems have taken on greater on-board complexity, because each satellite must now hand traffic over to other satellites to provide continuous coverage of the Earth. Thus, satellites need more on-board computing resources and more sophisticated communication protocols to manage this added complexity.
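To make the orbital trade-off concrete, here is a back-of-the-envelope sketch in Python (the constants are standard figures, not from the article; the 780-km altitude matches the Iridium constellation):

    # Back-of-the-envelope comparison of one-way propagation delay for a
    # geostationary (GEO) versus a low-Earth-orbit (LEO) satellite link.
    C = 299_792.458          # speed of light, km/s

    def one_way_delay_ms(altitude_km):
        """Minimum ground-to-satellite propagation delay (satellite at zenith)."""
        return altitude_km / C * 1000.0

    GEO_ALT = 35_786.0       # km, geostationary altitude
    LEO_ALT = 780.0          # km, e.g., the Iridium constellation

    print(f"GEO: {one_way_delay_ms(GEO_ALT):6.1f} ms one way")   # ~119.4 ms
    print(f"LEO: {one_way_delay_ms(LEO_ALT):6.1f} ms one way")   # ~  2.6 ms

The roughly 45-fold reduction in delay is what lower orbits buy; the cost is a smaller, moving footprint, which forces the handover machinery described above.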

This article examines the trends in communications satellite deployment and the resulting requirements for network protocols that are intended to support space communications. We report the findings of a joint DoD/NASA effort to identify what parts of the seven-layer OSI protocol model can be adapted to support more sophisticated space applications; a global standard based on these findings is now under development.



Global Teleporting with Java: Toward Ubiquitous Personalized Computing

Kenneth R. Wood
Tristan Richardson
Frazer Bennett
Andy Harter
Andy Hopper

The Olivetti & Oracle Research Laboratory

The essence of mobile computing is having your personal computing environment available wherever you happen to be. Traditionally, this is achieved by physically carrying a computing device (say, a laptop or PDA), which may have some form of intermittent network connectivity, either wireless or tethered.

However, at the Olivetti and Oracle Research Laboratory, we have introduced another form of mobility in which it is the user's applications that are mobile.1 Users do not carry any computing platform but instead bring up their applications on any nearby machine exactly as they appeared when last invoked. We call this form of mobility teleporting, and it has been used continuously and fruitfully by many members of our laboratory for the past three years.

Clearly, teleporting to these machines requires that they be networked and that they provide a common interface at some level. In our case, we use our local area network (Ethernet and ATM), with X Windows serving as the common interface. When we teleport, our personal X session, with all its associated applications in their latest collective state, is transferred from one host's display to another within the lab. For example, we can walk into someone else's office and immediately call up and interact with our personal working environment on their machine, alongside any other working environments currently displayed there.
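The bookkeeping behind such a scheme is easy to sketch. The Python fragment below is a hypothetical illustration, not ORL's implementation (which redirects windows at the X protocol level); the names are invented. A registry maps each user to one live session, and teleporting re-targets that session's output to whichever display the user is standing near:

    # Hypothetical sketch of teleporting bookkeeping: one live session per
    # user, re-targeted to whichever networked display the user walks up to.
    class Session:
        def __init__(self, owner):
            self.owner = owner
            self.display = None      # current X display, e.g., "office12:0"
            self.state = {}          # application state travels with the session

        def attach(self, display):
            """Redirect this session's windows to a new display."""
            print(f"{self.owner}: session moves {self.display} -> {display}")
            self.display = display

    sessions = {}                    # user -> Session

    def teleport(user, nearby_display):
        session = sessions.setdefault(user, Session(user))
        session.attach(nearby_display)

    teleport("krw", "office12:0")      # call up my environment in a colleague's office
    teleport("krw", "meeting-room:0")  # ...then again in the meeting room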

We are currently extending this idea from our LAN to the entire Internet using Java as the common interface. It is still our personal X sessions that are made mobile, but now they can appear anywhere on the Internet within any Java-enabled browser.

Although in theory the original form of teleporting could be used across the Internet, it would be restricted to hosts running an X server, and, even more problematically, would contravene the X security policy implemented by most system administrators. (The next release of X, code-named Broadway, will address some of these issues.2) But even more importantly, our current approach aims to take advantage of the rapid global proliferation of the World Wide Web. Web browsers are available in a dramatically growing range of locations, including corporate, personal, and even public-access sites. The ability to call up any personal computing environment on any such browser will enable nomadic computing on a truly global scale.



Trusting Mobile User Devices and Security Modules

Andreas Pfitzmann
Technical University of Dresden

Birgit Pfitzmann
Matthias Schunter
University of Dortmund

Michael Waidner
IBM Research Division

The market for devices like mobile phones, multifunctional watches, and personal digital assistants is growing rapidly. Most of these mobile user devices need security for their prospective electronic commerce applications.

While new technology has simplified many business and personal transactions, it has also opened the door to high-tech crime. In this article, we investigate design options for mobile user devices that are used in legally significant applications. Such applications authorize transactions: mobile phone calls, access to an office or car, electronic payment in stores, retrieval of stored medical data, and access to information on portable computers. Digital signatures, the electronic equivalent of handwritten signatures, are at the core of most of these applications and are explained briefly in the "Digital Signatures" sidebar. A trustworthy mobile user device should suit its purpose well and have credible quality.

Because mobile user devices act on someone's behalf, we use the analogy of agents to describe approaches to security, and we distinguish three types of agent trustworthiness.

A mobile user device by itself may not be able to keep data secret or uncorrupted; that is, it may not be tamper-resistant. A tamper-resistant device is called a security module, whether the security mechanism is on a separate device or incorporated into the mobile user device itself. Such devices secure "mobile" applications as well as applications on stationary devices like PCs and public kiosks, provided all security-relevant actions are controlled via the trustworthy mobile device.
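As a concrete illustration of the signature step at the heart of these applications, here is a minimal sketch using the third-party Python cryptography package (the package, key type, and message are this sketch's assumptions, not the article's): the security module holds the private key and signs an authorization, and any party holding the public key can verify it.

    # Minimal digital-signature sketch using the third-party "cryptography"
    # package: the security module keeps the private key; anyone holding the
    # public key can check that an authorization is genuine and unmodified.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    private_key = ed25519.Ed25519PrivateKey.generate()  # lives in the security module
    public_key = private_key.public_key()               # freely distributable

    authorization = b"pay merchant 4711 the amount of 19.95"  # hypothetical transaction
    signature = private_key.sign(authorization)

    try:
        public_key.verify(signature, authorization)
        print("signature valid: transaction authorized")
    except InvalidSignature:
        print("signature invalid: reject transaction")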



Customizing System Software Using OO Frameworks

Nayeem Islam
IBM T.J. Watson Research Center

Today's applications have exploded in their diversity, but most operating systems are still general-purpose and inefficient. One of the benefits of using an OO approach is the ability to modify very small details of an operating system, which makes it easy to tailor the system to the application.

There has been an explosion of applications over the last few years. The behavior of modern applications, such as multimedia packages, parallel applications, and groupware, is diverse. Yet the operating systems they run on are usually optimized for an "average" application. Designing a general-purpose operating system that provides the best performance for every application is difficult.

My experience indicates that optimizing an operating system for the general case can result in mediocre performance for specialized applications, especially parallel applications. Therefore, I envision a customizable operating system built from components that will allow an optimal match between application behavior and hardware architecture. I propose an object-oriented operating system in which design frameworks support alternative implementations of key systems software services.

The most challenging part of designing a framework is to make it reconfigurable. For example, a configuration rule might improve an application's performance by replacing some of the operating system's data structures with subclasses of those structures. I propose a reconfigurable framework that implements different policies, depending on the framework subclass used.

There are other approaches to customization. Compilers can be used when there is considerable information about an application domain. Design patterns are a mechanism for specifying designs at a higher level than frameworks. Frameworks fall somewhere between a compiler-based approach and design patterns. In my approach, each framework separates implementation details from the subsystem design, which is expressed as a set of abstract classes and their interrelationships. To customize its support, the operating system uses hints supplied by the application and a monitoring system to select the subclasses that are best suited to the application. Here I illustrate the method with a parallel application.
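The flavor of this approach can be sketched in a few lines. The Python below illustrates the idea only; it is not the article's actual framework, and the class and hint names are invented:

    # Illustrative sketch (not the article's framework): an abstract class
    # expresses the design, subclasses supply alternative policy
    # implementations, and an application-supplied hint selects among them.
    class SchedulingPolicy:                     # abstract class: the design
        def next_task(self, ready_queue):
            raise NotImplementedError

    class RoundRobin(SchedulingPolicy):         # general-purpose default
        def next_task(self, ready_queue):
            return ready_queue.pop(0)

    class GangScheduler(SchedulingPolicy):      # tuned for parallel applications
        def next_task(self, ready_queue):
            # keep cooperating tasks together so they can synchronize cheaply
            gang = ready_queue[0].gang
            chosen = next(t for t in ready_queue if t.gang == gang)
            ready_queue.remove(chosen)
            return chosen

    POLICIES = {"interactive": RoundRobin, "parallel": GangScheduler}

    def configure_scheduler(hint):
        """Install the policy subclass best matched to the application's hint."""
        return POLICIES.get(hint, RoundRobin)()

    scheduler = configure_scheduler("parallel")  # hint from a parallel application

Because the policy is a subclass rather than a compiled-in branch, a monitoring system could swap in a different subclass at run time without touching the rest of the framework.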



Neural and Fuzzy Methods in Handwriting Recognition


Paul D. Gader
James M. Keller
Raghu Krishnapuram
Jung-Hsien Chiang
Magdi A. Mohamed
University of Missouri

Handwriting recognition has challenged computer scientists for years. To succeed, a computing solution must ably recognize complex character patterns and represent imprecise, commonsense knowledge about the general appearance of characters, words, and phrases.

Character recognition is a classical computing problem, dating back to neural computing's infancy. One of Frank Rosenblatt's first demonstrations on the Mark I Perceptron neurocomputer in the late 1950s involved character recognition. The Perceptron was one of the first computers based on the idea of a neural network, which is a simplified computational model of neurons in a human brain. It was the first functioning neurocomputer, and it was able to recognize a fixed-font character set. As with many artificial intelligence applications, the difficulty of handwriting recognition was greatly underestimated. Significant progress was not achieved until the late 1980s and early 1990s, when many technologies converged to enable rapid increases in recognition rates for digits, characters, and words so that reliable commercial systems could be developed.

Handwriting recognition problems are either online or offline. Online recognition systems, such as those in personal digital assistants, use a pressure-sensitive pad that records the pen's pressure and velocity as the user writes. In offline recognition, the kind we are concerned with here, system input is a digital image of handwritten letters and numbers.

Handwriting recognition requires tools and techniques that recognize complex character patterns and represent imprecise, commonsense knowledge about the general appearance of characters, words, and phrases. Neural networks and fuzzy logic are complementary tools for solving such problems. Neural networks, which are highly nonlinear and highly interconnected for processing imprecise information, can finely approximate complicated decision boundaries. Fuzzy set methods can represent degrees of truth or belonging. Fuzzy logic, one of several fuzzy set methods, encodes imprecise knowledge and naturally maintains multiple hypotheses that result from the uncertainty and vagueness inherent in real problems. By combining the complementary strengths of neural and fuzzy approaches into a hybrid system, we can attain increased recognition capability for solving handwriting recognition problems.
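One simple fusion scheme, sketched below in Python, conveys the idea. It is a toy illustration, not the authors' system, and the memberships and thresholds are invented: the network's per-class scores act as evidence, and a fuzzy membership encoding imprecise shape knowledge ("a 1 is tall and narrow") tempers that evidence before the final decision.

    # Toy neural/fuzzy hybrid: network outputs provide per-class evidence;
    # fuzzy memberships encode imprecise shape knowledge; min() acts as a
    # fuzzy AND to fuse the two before the final decision.
    def tri(x, a, b, c):
        """Triangular fuzzy membership: 0 outside (a, c), peaking at 1 at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Imprecise shape knowledge per digit, over aspect ratio (height/width).
    MEMBERSHIP = {
        "1": lambda r: tri(r, 1.5, 3.0, 6.0),   # a "1" is tall and narrow
        "8": lambda r: tri(r, 0.8, 1.4, 2.5),   # an "8" is closer to square
    }

    def classify(net_scores, aspect_ratio):
        """Fuse neural evidence with fuzzy shape knowledge."""
        fused = {c: min(s, MEMBERSHIP[c](aspect_ratio))
                 for c, s in net_scores.items()}
        return max(fused, key=fused.get), fused

    # The network slightly prefers "8", but the tall, narrow shape says "1".
    label, fused = classify({"1": 0.70, "8": 0.75}, aspect_ratio=3.2)
    print(label, fused)   # -> 1 {'1': 0.7, '8': 0.0}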

This article describes the application of neural and fuzzy methods to three such problems.

These problems and methods were part of research we conducted on US Postal Service data and on problems of interest to the USPS.







Copyright © 1997 Institute of Electrical and Electronics Engineers, Inc. All rights reserved.

Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.