

Sensor Data Fusion for Context-Aware Computing Using Dempster-Shafer Theory
Huadong Wu
CMU-RI-TR-03-52
Submitted in partial fulfillment of the

Requirements for the degree of

Doctor of Philosophy in Robotics
Thesis Committee:

Mel Siegel, Chair

Daniel Siewiorek

Jie Yang

Wolfgang Grimm, Robert Bosch Corporation

The Robotics Institute


Carnegie Mellon University
Pittsburgh, Pennsylvania 15213

December 2003

This work is partially supported by Motorola UPR
(University Partnerships in Research) grant
Copyright © 2003 Huadong Wu

ABSTRACT
Towards having computers understand human users' "context" information, this dissertation proposes a systematic context-sensing implementation methodology that can easily combine sensor outputs with subjective judgments. The feasibility of this idea is demonstrated via a meeting-participant's focus-of-attention analysis case study with several simulated sensors, using prerecorded experimental data and artificially generated sensor outputs distributed over a LAN.

The methodology advocates a top-down approach: (1) For a given application, a context information structure is defined; all lower-level sensor fusion is done locally. (2) Using the context information architecture as a guide, a context sensing system with a layered and modularized structure is developed, using the Georgia Tech Context Toolkit system, enhanced with sensor fusion modules, as its building blocks. (3) Higher-level context outputs are combined through "sensor fusion mediator" widgets, and the results populate the context database.

The key contribution of this thesis is introducing the Dempster-Shafer theory of evidence as a generalizable sensor fusion solution to overcome the typical context-sensing difficulties, wherein some of the available information items are subjective, sensor observations’ probability (objective chance) distribution is not known accurately, and the sensor set is dynamic in content and configuration. In the sensor fusion implementation, this method is further extended in two directions: (1) weight factors are introduced to adjust each sensor's voting influence, thus providing an “objective” sensor performance justification; and (2) when the ground truth becomes available, it is used to dynamically adjust the sensors' voting weights. The effectiveness of the improved Dempster-Shafer method is demonstrated with both the prerecorded experimental data and the simulated data.

Acknowledgements

I am very grateful to my advisor Dr. Mel Siegel for his help and support. It is truly one of the luckiest things in my life that I could pursue my Ph.D. under one of the best people I have ever met. I appreciate his kindness towards me as my academic advisor and as one of my best friends; I can hardly find enough words to express my gratitude to him.

I am also indebted to Dr. Wolfgang Grimm, who gave me much support and helped me through my hardest time at Carnegie Mellon University. I would also like to thank the other members of my thesis committee, Dr. Jie Yang and Dr. Daniel Siewiorek, for their kind help. It would be very difficult to find another person as famous and busy as Dr. Siewiorek who is nonetheless so nice and patient in going through the details with great care. Finally, my thanks are due to Dr. Yangsheng Xu and Mr. Sevim Ablay, who gave me a precious opportunity and assisted me in pursuing my goal.
TABLE OF CONTENTS



Chapter 1. Introduction and Motivation
1.1. Sensor, Data, and Information Fusion
1.2. Context-Aware Computing, or Context-Aware Human-Computer-Interaction
1.3. Supporting Context-aware Computing
1.3.1 Sensing context information
1.3.2 Context-aware computing research
1.3.3 Georgia Tech Context Toolkit
1.3.4 To fill in the missing part — sensor fusion
1.4. Outline of the Dissertation
Chapter 2. Context and Context-Sensing
2.1. Context Contents and Presentation
2.1.1 Context classification
2.1.2 Context representation
2.1.2.1 Basic requirements for context representation
2.1.2.2 Modeling context
2.1.2.3 Context database implementation
2.1.3 Managing uncertainty information
2.2. Context Sensing
2.2.1 Mapping sensory data into context information space
2.2.2 Sensor fusion architecture for context sensing
2.2.3 Sensor fusion methods for context-aware computing
2.2.3.1 Classical inference and Bayesian inference method
2.2.3.2 Dempster-Shafer Theory of Evidence method
2.2.3.3 Voting fusion method
2.2.3.4 Fuzzy logic method
2.2.3.5 Neural network method
2.3. Chapter Summary
Chapter 3. Implementing Context Sensing
3.1. System Architectural Support
3.1.1 System architecture style for context-aware computing
3.1.1.1 The blackboard-style system architecture
3.1.1.2 The infrastructure-style system architecture
3.1.1.3 The widget-style system architecture
3.1.2 Improving the Context Toolkit system
3.1.3 Benefits from the system architecture improvement
3.2. Sensor Fusion with Dempster-Shafer Theory
3.2.1 Evidence combination in Dempster-Shafer frame
3.2.1.1 Challenge to the Dempster-Shafer evidence combination rule
3.2.1.2 Yager's and Inagaki's modification to evidence combination rule
3.2.1.3 Practical solution to resolve evidence conflicts
3.2.2 Weighting means non-democratic voting
3.2.3 Dynamic weighting means constant calibrating
Chapter 4. Concept-Proving Experiments and Results
4.1. Application scenario and the sensory data
4.2. Building the context information architecture
4.3. Implementing context-sensing architecture
4.4. Sensor fusion effectiveness comparison
4.5. Conclusions from the experiments
4.5.1 Experiments testing methodology and system architecture
4.5.2 Experiments testing sensor fusion algorithm effectiveness
Chapter 5. Adaptation of Dempster-Shafer Sensor Fusion Method
5.1. Methodology and theoretical explanation
5.1.1 Objective and subjective Bayesian statistics
5.1.2 Different explanations of the Dempster-Shafer theory
5.1.3 Where is Dempster-Shafer method more suitable?
5.2. Experiments with artificially generated data
5.2.1 Design of simulated experiments
5.2.2 Simulation data and data processing
5.2.3 Experiments and their result analysis
5.2.3.1 Case I: sensors are approximately of the same precision ()
5.2.3.2 Case II: sensors are conspicuously of different precision (, , and )
Chapter 6. Conclusion and Future Work
6.1. Methodology and Implementation Summary
6.1.1 Context sensing
6.1.2 Context sensing implementation
6.2. Dissertation contributions
6.3. Future research suggestions
Appendix
Appendix A System Software Development
A.1. Tools and Environments Setup
A.2. Software architecture background
A.3. Middleware approach to build context-aware computing systems
A.4. System architecture for network-based computing
A.5. Event-based/agent architecture versus context-aware computing
A.6. Software package development
A.7. Dempster-Shafer algorithm implementation
Appendix B Concept-Proving Demonstration Experiments
B.1. Specifying context information architecture
B.2. Implementing and demonstrating the focus-of-attention fusion case-study application
Appendix C Sensor fusion API description
C.1. Class BeliefInterval
C.2. Class DSfusing
C.3. private interface I_Comparator
C.4. Class SortableVector
C.5. Class Comparator implements I_Comparator
C.6. Class Evidence
C.7. Class Hypothesis
C.8. Class HypothesisSet
References




LIST OF FIGURES



Figure 1. Georgia Tech Context Toolkit component and context architecture
Figure 2. Towards context understanding: this dissertation provides sensor fusion support
Figure 3. The user-centered scheme to group context information
Figure 4. Context model: stage, users, objects, and events
Figure 5. The two basic semantic objects in a context information model
Figure 6. Transfer context information model into relational database
Figure 7. Sensor fusion process model: (a) direct data fusion, (b) feature level fusion, and (c) declaration level fusion [53]
Figure 8. Confidence interval is between "belief" and "plausibility"
Figure 9. System architecture to support sensor fusion in context-aware computing
Figure 10. Information layered structure and sensor fusion support
Figure 11. Meeting-participant's focus-of-attention analysis experimental settings seen from the central omni-camera
Figure 12. Concept-demonstration system architecture implementation using the focus-of-attention scenario as central application
Figure 13. System architecture concept-proving demonstration experiment screen shot, using prerecorded meeting-participant's focus-of-attention data
Figure 14. Sensor fusion effects in terms of correcting visual sensor misclassification
Figure 15. Simulation of a focus-of-attention estimation scenario
Figure 16. Sensor fusion effects in simulation with sensors being of the same precision but having different drift effects
Figure 17. Sensor fusion effects in simulation with sensors being of significantly different precision with different drift effects
Figure 18. Sensor fusion techniques applicable to context-sensing
Figure 19. Original Context Toolkit System Software Package
Figure 20. Context sensing software package structure based on the Context Toolkit system
Figure 21. Dempster-Shafer Theory of Evidence programming implementation




LIST OF TABLES



Table 1. A human user's physical environmental context description
Table 2. A human user's own activity context description
Table 3. A human user's state context description
Table 4. Context uncertainty management user-identification example
Table 5. Context sensing achievable with commonly used sensors
Table 6. Property highlight of commonly implemented sensor fusion architectural patterns
Table 7. Sensor fusion method comparison with prerecorded focus-of-attention experimental data
Table 8. Situations where Bayesian or Dempster-Shafer method is more suitable for sensor fusion
Table 9. Comparison of sensor fusion algorithm effectiveness using simulated sensory data (sensor noise )
Table 10. Comparison of sensor fusion algorithm effectiveness using simulated sensory data (sensor noise: ,, and )
Table 11. Comparison of sensor fusion options for context-aware computing
Table 12. Semantic object specifications for context information modeling
Table 13. Context database table entries description and constraints






Chapter 1. Introduction and Motivation


1.1. Sensor, Data, and Information Fusion

Sensing and gathering environmental information is the first step and one of the most fundamental tasks in building intelligent Human-Computer-Interaction (HCI) systems. As the expectation of "intelligence" increases, using multiple sensors is the only way to obtain the required breadth of information, and fusing the outputs from multiple sensors is often the only way to obtain the required depth of information when a single sensing modality is inadequate [157]. However, different sensors may use different physical principles, cover different parts of the information space, and generate data in different formats at different updating rates, and the sensor-generated information may differ in resolution, accuracy, and reliability. Thus, properly fusing the sensed information pieces from various sources is the key to producing the required information. This is what sensor fusion is about.

Techniques for multi-sensor data fusion are drawn from a diverse set of more traditional disciplines, including digital signal processing, statistical estimation, control theory, artificial intelligence, and classic numerical methods [48]. The characteristics of these commonly used techniques will be examined in Section 2.2.3 in order to find a generalizable sensor fusion solution for a wide range of context-aware computing applications. The following is only a brief discussion of sensor-fusion-related terminology.

Sensor fusion technology was originally developed in the domains of military applications research and robotics ([20], [53], [54], [60]). Since it is an interdisciplinary technology that grew independently out of various applications research, its terminology has not yet reached universal agreement. Generally speaking, the terms "sensor fusion", "sensor data fusion", "multi-sensor data fusion", "data fusion", and "information fusion" have been used in various publications without much discrimination ([47], [52], [61]). The terminology confusion is well illustrated by a figure in Chapter 2 of [53], which shows a Venn diagram that purports to represent the relationship among these related terms.

It seems that popular usage has shifted from "sensor fusion" to "data fusion", and it is now moving towards "information fusion" ([51], [59], and http://www.inforfusion.org for the International Society of Information Fusion). Following robotics convention, however, the term "sensor fusion" is used in this dissertation. In most cases, the other terms could be applied interchangeably without causing misunderstanding.

As the lack of unifying terminology across application-specific boundaries had historically been a barrier even within U.S. military applications [48], the U.S. Department of Defense (DoD) Joint Directors of Laboratories (JDL) Data Fusion Working Group was established in 1986 to improve communications among military researchers and system developers. The Group worked out a general data fusion model and a Data Fusion Lexicon ([53], Section 1.6).

The initial JDL Data Fusion Lexicon defined data fusion as ([53], Chapter 2): "A process dealing with the association, correlation, and combination of data and information from single and multiple sources to achieve refined position and identity estimates, and complete and timely assessments of situations and threats, and their significance. The process is characterized by continuous refinements of its estimates and assessments, and the evaluation of the need for additional sources, or modification of the process itself, to achieve improved results."

Although the concept behind this definition can be generalized to encompass a very broad range of application situations, it is obviously much influenced by patterns of thinking in the military application domain. Revisions of the definition from the U.S. DoD and many others choose slightly different words, but they all refer to essentially the same theme ([23], [52]).

Some other definitions, however, are more inclusive. For example, in Mahler's definition: "data fusion or information fusion is the problem of refining and pooling multisensor, multitarget data so as to: 1) obtain improved estimates of target numbers, identities, and locations, 2) intelligently allocate sensors, 3) infer tactical situations, and 4) determine proper responses." [56]

Trying to include more generalized situations, Steinberg et al. [61] suggest a formal definition: "data fusion is the process of combining data or information to estimate or predict entity states."

This dissertation favors this more inclusive class of definitions. A formal sensor fusion definition would then be: the information processing that manages sensors, collects sensory or other relevant data from multiple sources or from a single source over a period of time, and produces (and manages the distribution of) knowledge that is otherwise not obtainable, or that is more accurate or more reliable than its individual origins.

The general data fusion model proposed by the JDL Data Fusion Group initially included four differentiating process levels: [Level 1] Object Assessment: estimation and prediction of entity states; [Level 2] Situation Assessment: estimation and prediction of relations among entities; [Level 3] Impact Assessment (Threats): estimation and prediction of effects on situations of planned or estimated actions by the participants; and [Level 4] Process Refinement: adaptive data acquisition and processing to support mission objectives ([52], [61], [62]).

In 1999, the JDL revised the model to incorporate an additional level, [Level 0] Sub-Object Data Assessment: estimation and prediction of signal- or object-observable states, in order to describe the preprocessing at signal level in further detail.

This data fusion model has gained great popularity. However, the same concern exists that the model definition is heavily influenced by military operational thinking. As previously described, Steinberg et al. tried to broaden its scope using more general terms [61]. Meanwhile, Blasch et al. added a fifth level, [Level 5] User Refinement: adaptive determination of who queries information and who has access to information, in order to include knowledge management [62].
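For reference, the six levels just described can be collected in one place in code form. The following Java enumeration is purely illustrative ― the type and member names are hypothetical, not part of any fusion software discussed in this dissertation ― but it summarizes the extended JDL model compactly:

    /** Illustrative summary of the extended JDL data fusion model
     *  (Levels 0-4 per the JDL revisions, Level 5 per Blasch et al. [62]).
     *  The type and names here are hypothetical, for reference only. */
    public enum JdlFusionLevel {
        SUB_OBJECT_DATA_ASSESSMENT(0, "signal- or object-observable state estimation"),
        OBJECT_ASSESSMENT(1, "entity state estimation and prediction"),
        SITUATION_ASSESSMENT(2, "relations among entities"),
        IMPACT_ASSESSMENT(3, "effects of planned or estimated actions"),
        PROCESS_REFINEMENT(4, "adaptive data acquisition and processing"),
        USER_REFINEMENT(5, "who queries and who may access information");

        private final int level;
        private final String focus;

        JdlFusionLevel(int level, String focus) {
            this.level = level;
            this.focus = focus;
        }

        public int level() { return level; }
        public String focus() { return focus; }
    }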

This dissertation adopts this extended data fusion model. The ultimate goal of sensor data fusion is to collect and process lower-level signals to extract higher-level knowledge that reveals the "best truth" — in terms of fulfilling the system's mission or providing functional utilities for the targeted applications.

Put simply, for a specified application, the purpose of sensor fusion is to sense the environmental information of its users or the users' own activity information. From deploying suitable sensors to detect the phenomena or parameters of interest, through extracting necessary features and combining these features (information pieces), up to handling information storage and distribution, this research tries to recognize the similarities among different sensor fusion realizations across different situations and abstraction levels. It aims to form a generalizable solution for building a system architecture that supports the most common situations in context-aware human-computer-interaction applications.

The thesis title “Sensor Fusion for Context-Aware Computing” emphasizes that the research focus is on the sensor fusion methodology and its corresponding architectural support, rather than on the context-aware applications themselves.



1.2. Context-Aware Computing, or Context-Aware Human-Computer-Interaction

In terms of providing services to human users, an ordinary service person is much smarter than the smartest computer-controlled machines of today, because the former can react appropriately to the circumstances of the people being served; that is, a human secretary or waiter extensively and implicitly uses "situational" or "context" information. The ultimate goal of context-aware computing research is to have computer-controlled systems behave like smart human secretaries, waiters, or other service personnel.

The idea of "context-aware computing" is to have computers understand the real world so that human-computer interactions can happen at a much higher abstraction level [84], and hence to make the interactions much more pleasant or transparent to human users.

The following are some imagined application scenarios that illustrate what context-aware computing implies and how a system enabled by "context-aware computing" technology can enhance service quality or improve a human user's personal productivity.


  • Example 1: Suppose the user of a context-aware computing system is new to a place (a city, a mall, a tradeshow, etc.), and would like to have the system collect the relevant information and give him/her a tour. A good context-aware system should somehow be able to know its user's available time and his/her interests and preferences. It would tentatively plan a tour for the user, get his/her feedback, and then guide him/her from point to point through the visit. During the tour, the system should be able to sense the user's emotional state, guide his/her focus of attention, and respond to state changes. According to changes in the user's emotional state, it would adjust the level of descriptive detail, adaptively include or omit some content, and control the content delivery pace – in the manner that a smart human tour guide does naturally.

  • Example 2: Today's cellular phones with pager functions can already connect phone calls and deliver e-mails. Context-aware computing research is trying to make them smarter ([50], [85]). A personal information management system enabled by context-aware computing should further know what its user is doing and adjust its own behavior accordingly. Examples of good behaviors are: only time-sensitive e-mails are delivered to the cellular phone; the text-to-voice function is automatically triggered to read out e-mail contents whenever appropriate; and calls are prioritized and signaled with appropriate ringing methods.

  • Example 3: A home service system enabled by context-aware computing should be able to detect the activities of its occupants: the room temperature should be adjusted based not only on the time of day but also on the occupants' current activities and preferences. Other potential functions include telling whether young children or senior adults are doing well, noticing and reminding the occupants of important events, and recognizing and remembering where things have been misplaced.

The term "context-aware" was first introduced by Schilit et al. ([2], [3]) in 1994 to describe the ubiquitous computing mode in which a mobile user's applications can discover and react to changes in the environment in which they are situated. The three important aspects of context that they identified are "where you are", "who you are with", and "what resources are nearby".

While the basic idea of context-aware computing may be easily understood from examples like those described above, Dey and Abowd [1] tried to define it formally: "a system is context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task." They further suggested classifying context-aware computing into three categories: (1) presentation of information and services to a user; (2) automatic execution of services for a user; and (3) tagging of context to information for later retrieval.

All three of Dey and Abowd's categories of context-aware computing applications deal with serving human users. The third category differs slightly from the first two in that the context information is used as a means to aid human memory [10]. Nevertheless, they all concern human-computer interactions. Schmidt even described using context as one of the most basic benefits enabling the change from an explicit human-computer interaction mode to an implicit one [66]. Hence, perhaps the better term to describe this concept would be context-aware human-computer interaction, or context-aware HCI. However, since the terminology "context-aware computing" is well established, it will be used throughout this dissertation.

Roughly speaking, the three terms "context-aware computing", "ubiquitous computing", and "pervasive computing" have been used interchangeably in the computer science research domain without much discrimination. The term "pervasive computing" is slightly more popular among the many researchers who often cite Mark Weiser's paper "The Computer for the 21st Century" [81] as the seminal document ([43], [76], [77], [78]) that triggered the current global interest in developing the concept. Many other researchers, however, regard context awareness as a subset of pervasive computing, or only as the necessary means to realize the pervasive computing concept ([70], [73]).

This dissertation builds on the Context Toolkit system ([22], described in Section 1.3.3), so it follows the terminology used there. Thus the term "context-aware computing" is used interchangeably with the "pervasive computing" and "ubiquitous computing" concepts.


1.3. Supporting Context-aware Computing

1.3.1 Sensing context information

For a context-aware computing system to work, obviously it must be able to sense and manage context information. However, the term “context” can imply an extremely broad range of concepts. Almost any available information at the time of interaction can be regarded as a piece of context potentially relevant to the human-computer interaction.

To give a formal definition, Dey et al. [1] surveyed existing research work regarded as context-aware computing and suggested: "Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and application themselves."



The following list is an example of some commonly considered information contents, based on Korkea-aho's [5] enumeration of context (a hypothetical data-structure sketch of how such elements might be represented in software follows the list):

  • Identity of the user

  • Spatial information: locations, orientation, speed, acceleration, object relationship in physical space, etc.

  • Temporal information: time of the day, date, season of the year, etc.

  • Environmental information: temperature, humidity, air quality, light, noise pattern and level, etc.

  • Social situation: whom the user is with, the nearby people, family relationships, people that are accessible, etc.

  • Nearby Resources: accessible devices, hosts, other facilities, etc.

  • Resource usability: battery capacity, display resolution, network connectivity, communication bandwidth and cost, etc.

  • Physiological measurements: blood pressure, heart rate, respiration rate, muscle activities, tone of voice, etc.

  • User’s physical activity: talking, reading, walking, running, driving a car, etc.

  • User’s emotional status: preferences, mood, focus of attention, etc.

  • Schedules and agendas, conventional rules, policies, etc.
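As forecast above, the following sketch shows one way such heterogeneous context elements could be represented uniformly in code. The Java class is illustrative only ― its name and fields are assumptions of this sketch, not part of the Context Toolkit or of the software described in the appendices:

    import java.util.Date;

    /** Hypothetical container for one piece of context: a named
     *  attribute of an entity, stamped with provenance and time. */
    public class ContextElement {
        private final String entity;     // e.g., "user:alice" or "room:1507"
        private final String attribute;  // e.g., "location", "heartRate", "activity"
        private final Object value;      // e.g., "walking", 72, "Pittsburgh"
        private final String source;     // sensor or widget that produced the value
        private final Date timestamp;    // when the observation was made
        private final double confidence; // 0..1, anticipating the uncertainty
                                         // management discussed in Section 2.1.3

        public ContextElement(String entity, String attribute, Object value,
                              String source, Date timestamp, double confidence) {
            this.entity = entity;
            this.attribute = attribute;
            this.value = value;
            this.source = source;
            this.timestamp = timestamp;
            this.confidence = confidence;
        }

        public String entity()     { return entity; }
        public String attribute()  { return attribute; }
        public Object value()      { return value; }
        public String source()     { return source; }
        public Date timestamp()    { return timestamp; }
        public double confidence() { return confidence; }
    }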

Since context is such a broad concept and the ultimate goal of context-aware computing is to provide a ubiquitous computation model for ordinary human users' daily lives [42], sensing context differs from traditional sensing and sensor fusion tasks in the following aspects:

  • The goal is to provide computational services to human users anywhere at any time. Thus context-sensing needs to be implemented in mobile environments using whatever sensors are available, i.e., the sensor set is highly dynamic;

  • Context-aware computing is for human-computer interactions; therefore, the context-sensing capabilities need to be commensurate with human perception capabilities;

  • The context information is for context-aware computing applications ― programs running in the computers ― as well as for human users' direct reference. Humans often prefer a descriptive semantic format over the numerical parameters that most sensors output; and

  • For the system to be used in ordinary users' daily lives, the sensors employed cannot be very expensive.

In addition, compared with traditional desktop computing applications, context-aware computing is a new and much more complicated computation mode ([70], [76], [77], [78]). For such a system to function correctly, it needs system architectural support that differs greatly from that of traditional computer systems ([66], [79]).

1.3.2 Context-aware computing research

Current context-aware computing research and development is still in its infancy [76]: typically, in most published research projects, only one or a very few pieces of context information are sensed and used.

The most successfully used pieces of contextual information thus far are the human user's identity and location [148]. Among the early successful location-aware computing research projects, some commonly referenced ones are the Active Badge (1992-1998) of Olivetti Research Ltd. (now AT&T Labs, Cambridge, U.K.) ([6], [88]), Xerox's ParcTAB (1992-1993) [7], the Georgia Institute of Technology's CyberGuide (1995-1996) [8], and the University of Kent at Canterbury's Fieldwork or Stick-e (1996-1998) ([9], [10]).

In early ― and in many recent ― "Active Map" or "Tour Guide" applications, the user's current location is the only context information used: the vicinity map is updated or the nearby points of interest are displayed blindly ― meaning that the system does not know its user's actual focus of attention, preference, intention, or interest at that time ([90], [91], [92], [94]).

More advanced context-aware computing research integrates more context information. Examples include Microsoft’s EasyLiving, Georgia Institute of Technology’s Aware Home, and Carnegie Mellon’s Aura project.

The Microsoft EasyLiving project (http://research.microsoft.com/easyliving/) aims at developing a prototype architecture and the necessary technologies for intelligent office environments, where a group of dynamically assembled smart devices can automatically collaborate to provide human users a convenient interface to personalized information and services. By the end of 2001, the EasyLiving system could handle a single room with about a dozen dynamically available devices, and one to three users could use the facility simultaneously ([11], [12], [80]).

The Aware Home Research Initiative (http://www.cc.gatech.edu/fce/ahri/) at the Georgia Institute of Technology creates a living laboratory for research in ubiquitous computing for daily activities. The application target is to provide services to the home life of a typical small family or a couple. Many sub-projects have been conducted since its inception (http://www.cc.gatech.edu/fce/ahri/projects/index.html). Generally speaking, these ongoing projects and experiments are still case studies with carefully controlled environmental conditions ([13], [27], [86]). The initial goal of having hundreds of sensors ubiquitously deployed has apparently been cut back to a few dozen at most, indicating the practical difficulty of implementing this sort of advanced application.

Carnegie Mellon's Aura Consortium comprises a series of ubiquitous computing research projects in Human-Computer-Interaction, wearable computers, intelligent networking, software composition, etc. (http://www.cs.cmu.edu/~aura/). Emphasizing minimal distraction of users' attention, the research goal is to provide each user with an invisible "halo" of computing and information services that persists regardless of location, so that users can interact with their computing environments in terms of high-level tasks (http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/aura/services/www/). The Aura deployment has focused on two main areas. One is a set of contextual information services, which provide information about entities of interest such as devices, people, physical spaces, and networks. The other is developing applications that exploit the Aura infrastructure, such as predicting users' next activities and preparing relevant equipment for the next task ([43], [107]).

Towards integrating more pieces of context information and more complex context, one important step is to modularize (and eventually standardize) the system building blocks or components. Many context-aware computing research projects address or include this topic in the course of providing system architecture support ([44], [67], [71], [73], [79]). The Context Toolkit system (http://www.cc.gatech.edu/fce/contexttoolkit/) developed at the Georgia Institute of Technology GVU (Graphics, Visualization & Usability) Center is regarded as quite successful in supporting the modularization of system components. It effectively separates concerns of context from its usage [22]. This dissertation seeks to expand the Context Toolkit system with context-sensing and context information management modules.



1.3.3 Georgia Tech Context Toolkit

It is now a common practice for applications to use standard I/O device driver interfaces and GUI functions, though it took many years to achieve this standardization. Based on this observation, Dey et al. ([17], [21], [22]) developed the Context Toolkit system to facilitate building context-aware applications with standard components. As illustrated in Figure 1 from [16], the Context Toolkit consists of three basic building blocks: context widgets, context aggregators, and context interpreters.

Figure 1. Georgia Tech Context Toolkit component and context architecture


Figure 1 also shows the relationship between sample context components; arrows indicate data flow. The context components are intended to be persistent: instantiated independently of each other, in separate threads or on separate computing devices, they execute all the time.

A Context Widget is a software wrapper or agent for a sensor. It provides a piece of context through a uniform interface to the components or applications that use the context. Using Widgets hides the details of the underlying context-sensing mechanism(s) and allows the system to treat implicit and explicit input in the same manner. Widgets maintain a persistent record of their sensed context, which can be polled or subscribed to by context-consuming components in the system.

A Context Interpreter is a software agent for abstracting or interpreting context. For example, a Context Interpreter may translate a location context in the form of latitude and longitude into the form of a street name. A more complex interpreter may take context from multiple Widgets in a conference room to infer that a meeting is taking place.
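A minimal sketch can make the interpreter idea concrete. The class below is hypothetical ― the actual Context Toolkit defines its own interpreter API, which differs from this ― but it captures the mapping from a low-level (latitude, longitude) reading to a semantic street name:

    /** Hypothetical interpreter turning raw coordinates into a street
     *  name; the real Context Toolkit interpreter API differs. */
    public class LocationInterpreter {

        /** Reverse-geocoding lookup, stubbed for illustration; a real
         *  implementation would query a geographic database. */
        public String interpret(double latitude, double longitude) {
            if (latitude > 40.44 && latitude < 40.45
                    && longitude > -79.95 && longitude < -79.94) {
                return "Forbes Avenue, Pittsburgh, PA";
            }
            return "unknown location";
        }
    }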

A Context Aggregator is a software agent that collects context from multiple sources, usually to form comprehensive information about a particular entity (person, place, or object). Aggregation facilitates access to distributed context about a single entity.

The Context Toolkit promotes hiding the details of the lower-level sensors' implementation from context extraction, thus allowing the underlying sensors to be replaced or substituted without affecting the rest of the system. However, since "context" usage is still far from having any established conventions ― in contrast to the highly evolved computer GUI and keyboard/mouse usage ― actually insulating the sensors' implementation from context sensing is very difficult when many sensors are deployed.


1.3.4 To fill in the missing part — sensor fusion

Dey, the Context Toolkit author, listed seven benefits of using the toolkit (or seven requirements that have to be fulfilled [22]): (1) applications can specify what context they require to the system; (2) separation of concerns and context handling; (3) context interpretations convert single or multiple pieces of context into higher-level information; (4) transparent distributed communications; (5) constant availability of context acquisition; (6) context widgets and aggregators automatically store the context they acquire; (7) the Discoverers support resource discovery.

From the context-sensing point of view, because sensor fusion support was not among its original design goals, the Context Toolkit system also has the following limitations:



  • No intrinsic support for indicating context uncertainty: by default, any context information is regarded as correct and unambiguous

  • No direct sensor fusion support: an application needs to query or subscribe to all available sensor widgets that can provide the context contents of interest, and it is up to the application itself to decide whether there is any overlap or conflict between any two pieces of the sensed context

  • Difficult to scale up: when the sensor pool is large, it is not easy for an application to track all sensors and to make all possible context-providers collaborate.

A context-aware computing application system typically has many sensors in a mobile setting: old sensors may disappear and new sensors may appear at any time. For sensors to work in such a dynamic configuration, sensor fusion support is necessary. The direct motivation of this dissertation is to provide the Context Toolkit system with the missing part ― the sensor fusion support component.

The sensor fusion component obviously needs to provide the sensor fusion functionality. It also needs to perform related administrative functions, such as tracking the currently available sensors and their specifications, collecting relevant data, and integrating and maintaining the system information flow. The developed infrastructure has a long-term goal of providing a generalizable sensor fusion solution in two respects. First, the system configuration and architecture building blocks can be easily reused for different context-sensing tasks within the same context-aware application system or across different ones. Second, the developed sensor fusion algorithm is applicable to as many context-sensing tasks as possible, and its implementation is modularized for reuse.
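The shape of such a component can be sketched as an interface. The following Java interface is a hypothetical outline that merely names the responsibilities enumerated above (ContextElement is the illustrative type sketched in Section 1.3.1); the mediator widget actually developed in this work (Chapter 3 and Appendix C) is considerably richer:

    import java.util.List;

    /** Hypothetical outline of a sensor fusion mediator's duties;
     *  not the actual mediator widget developed in this dissertation. */
    public interface SensorFusionMediator {
        // Track the currently available sensors and their specifications.
        void registerSensor(String sensorId, String specification);
        void unregisterSensor(String sensorId);

        // Collect relevant data from the registered sensors.
        void submitReading(String sensorId, ContextElement reading);

        // Produce a fused estimate for one attribute of one entity.
        ContextElement fusedEstimate(String entity, String attribute);

        // Report the current sensor configuration.
        List<String> activeSensors();
    }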

To achieve this goal, a systematic sensor fusion methodology is proposed and demonstrated. This top-down approach suggests a two-step process: the first step is to define a context information structure for a given context-aware application; using it as a guideline, the second step is to design the information flow inside the sensor fusion architecture.

Dempster-Shafer evidence theory is chosen as the first core module to implement the sensor fusion algorithm. This approach is shown to provide a sensor fusion performance advantage over previous approaches, e.g., the Bayesian inference approach, because it better imitates human uncertainty-handling and reasoning processes.
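At the heart of this choice is Dempster's rule of combination, which fuses two independent bodies of evidence by multiplying and renormalizing their basic probability assignments. The sketch below is a minimal textbook implementation over subsets encoded as bit masks; it illustrates the rule itself and is not the dissertation's DSfusing module documented in Appendix C (which adds weighting and conflict handling):

    import java.util.HashMap;
    import java.util.Map;

    /** Minimal textbook form of Dempster's rule of combination.
     *  Subsets of the frame of discernment are bit masks: with frame
     *  {a, b, c}, a = 0b001, {a, b} = 0b011, Theta = 0b111.
     *  Illustration only; not the DSfusing module of Appendix C. */
    public final class DempsterRule {

        /** Combine two basic probability assignments m1 and m2:
         *  m12(A) = (1 / (1 - K)) * sum over B & C == A of m1(B) * m2(C),
         *  where K is the mass falling on disjoint (conflicting) pairs. */
        public static Map<Integer, Double> combine(Map<Integer, Double> m1,
                                                   Map<Integer, Double> m2) {
            Map<Integer, Double> combined = new HashMap<>();
            double conflict = 0.0;
            for (Map.Entry<Integer, Double> e1 : m1.entrySet()) {
                for (Map.Entry<Integer, Double> e2 : m2.entrySet()) {
                    int intersection = e1.getKey() & e2.getKey();
                    double mass = e1.getValue() * e2.getValue();
                    if (intersection == 0) {
                        conflict += mass;          // disjoint focal elements
                    } else {
                        combined.merge(intersection, mass, Double::sum);
                    }
                }
            }
            if (conflict >= 1.0) {
                throw new ArithmeticException("total conflict: rule undefined");
            }
            final double norm = 1.0 - conflict;    // Dempster normalization
            combined.replaceAll((subset, mass) -> mass / norm);
            return combined;
        }
    }

For example, with the frame {left, right, ahead} encoded as 0b001, 0b010, and 0b100, a camera widget reporting m1({left}) = 0.6, m1(Theta) = 0.4 combined with a microphone widget reporting m2({left, ahead}) = 0.5, m2(Theta) = 0.5 yields m({left}) = 0.6, m({left, ahead}) = 0.2, and m(Theta) = 0.2, with zero conflict.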



Compared to the original Context Toolkit, the two-step approach with its Dempster-Shafer implementation further separates concerns regarding the context-sensing process from the usage of the sensed context. This work demonstrates the thesis that synergistic interaction between sensor fusion and context information facilitates the sensor fusion processes, which in turn provide more context information with higher accuracy.

1.4. Outline of the Dissertation

Figure 2 illustrates the key features of this dissertation. Sensor fusion in traditional context-aware computing systems was done in an ad hoc manner, so context-use was highly coupled with the context-sensing sensors; the Context Toolkit system promotes separating them by wrapping the sensors with widgets. This dissertation further standardizes the context-sensing process by specifying a context information architecture and by adding sensor fusion supporting modules. It is intended that this approach will advance context-aware systems toward imitating human sensing and understanding context in ways consistent with human intuition.

Figure 2. Towards context understanding: this dissertation provides sensor fusion support


This dissertation is organized as follows:

Chapter 1 introduces the background, goals, and terminology of context-aware computing and the new sensor fusion challenges. It outlines the goal of this dissertation: to provide a generalizable sensor fusion framework based on the Context Toolkit system.

Chapter 2 first discusses context information classification and representation issues, presenting preliminary research results in context classification; then, regarding context sensing, typical sensor fusion methods are analyzed in search of the most generalizable one. The discussion of context classification, representation, and sensing technology ultimately leads to the top-down methodology pursued throughout this dissertation.

Chapter 3 addresses realization of the top-down methodology from two perspectives: software architecture development and Dempster-Shafer algorithm implementation. The system architecture discussion analyzes the architectural style characteristics of typical context-aware computing systems, and justifies why the proposed system is an improvement over existing ones. The Dempster-Shafer algorithm research describes the existing typical conflict-handling proposals and introduces a weighting scheme that mitigates conflicts.

Chapter 4 describes the experiments and results of a concept demonstration system. It illustrates how the proposed top-down methodology was used in a meeting-participant's focus-of-attention analysis case study. The outcome demonstrates the feasibility of the concept, and quantitative sensor fusion results compare the effectiveness of different sensor fusion algorithms.

Chapter 5 further studies the adaptation of the Dempster-Shafer sensor fusion method theoretically and numerically. The discussion of different interpretations of the Dempster-Shafer formalism provides deeper insight into the theory and, based on that, explains in which situations the Dempster-Shafer method is more suitable than the commonly used classical Bayesian inference framework. Using artificially generated sensory simulation data, the numerical results compare how different sensor fusion algorithms perform.

Chapter 6 summarizes the dissertation results in terms of system architecture improvement and general sensor fusion advancement in context-aware computing. The dissertation contributions are elucidated in two areas: (1) the system development work adds a sensor fusion module to the Context Toolkit system, which promotes further separation of the concerns of context information usage from context-sensing implementation; (2) introducing the Dempster-Shafer Theory of Evidence into the context-aware computing research area solves otherwise very difficult problems, and extending the Dempster-Shafer method practically enhances sensor fusion performance. Finally, further research suggestions are given.

The Appendix mainly serves as the technical report documentation for the Motorola University Partnerships in Research program, which in part supported this work. It comprises three parts. After an introduction to software architectural support issues for context-aware computing, the first part describes the development process of the system software package. Using the focus-of-attention case study as an example, the second part illustrates how to build a context information database and how to use the software package that was developed. The third part is the Dempster-Shafer sensor fusion module API description manual; this software module is relatively independent and can be used in other applications with the help of this manual.



Chapter 2. Context and Context-Sensing


Sensing context for context-aware computing faces a two-fold problem: representing context properly, and mapping sensor outputs into this context representation. Accordingly, context taxonomy, representation, and uncertainty management are addressed in the first part of this chapter; the second part addresses development of a generalizable sensor fusion method.

2.1. Context Contents and Presentation

2.1.1 Context classification

As described in subsection 1.3.1, the trend in context-aware computing development is to integrate more context information. To provide a reference for better managing the context elements, a taxonomy of context information needs to be developed. At the beginning stage, such a taxonomy can help to identify the most cost-effective context-sensing technology for implementation; in the long run, it will help in choosing a course to scale the system up (integrating more context information) towards a human-like context-aware system.

Not much research has been published on the classification of general context information. The reason is perhaps that, at the current development stage, even state-of-the-art context-aware computing research does not seriously consider a general context model [109].

The first attempt towards a generalizable context classification is Schmidt et al.'s scheme [29], which suggests organizing context into two general categories ― "human factors" and "physical environment", with three subcategories each. The scheme also defines "history" as an additional dimension of context information, orthogonal to the "human factors" and "physical environment" categories.

The human factors category is organized into three subcategories: (1) information on the users (e.g., knowledge of habits, mental state, or physiological characteristics); (2) information on their social environment (e.g., proximity of other users, their social relationships, collaborative tasks); and (3) information on their tasks (e.g., goal-directed activities, higher-level abstractions about general goals of the users).

The physical environment category also has three subcategories. They are: (1) location information (absolute location, e.g., GPS-coordinates; or relative location, e.g., inside a car, etc.); (2) infrastructure information (e.g., surrounding computing and communication equipment, etc.); and (3) physical conditions information (e.g., level of noise, brightness, vibration, outside temperature, room lighting, etc.).

Not directly addressing context classification, but attacking a related problem from an implementation perspective, Dey et al. [1] suggested categorizing general context into a two-tier system. The primary tier has four pieces of information ("location", "identity", "time", and "activity") to characterize the situation of a particular entity. The secondary tier consists of contexts indexed by the primary-tier contexts.

If a candidate list of all feasible context-sensing technologies and their usage situations is required to find and organize "killer" applications for implementation, or if a reference is required to help choose context-sensing technology development directions to pursue, neither Schmidt's nor Dey's classification helps very much. A more detailed classification scheme is needed.

Intuitively, a systematic way to classify a user's context information would be to explore its contents in physical and temporal dimensions, and to attempt classification in terms of intention, mood, etc. The ultimate organizing scheme, well beyond the practical scope of this dissertation, might be something like the one described by Takeo Kanade in the CMU Robotics Seminar of November 1, 2002. Kanade suggested a model to describe human functions and behaviors within the computer's virtual world. His model comprises three classes of functions: physio-anatomical, motion-mechanical, and psycho-cognitive. The physio-anatomical sub-model sees the human body as a living entity that regulates and controls various parts, organs, and circulatory systems. It describes the shapes, material properties, physiological parameters, and their relationships to internal and external stimulations. The motion-mechanical sub-model sees the human as a machine that can walk and run, move and manipulate objects. It concerns kinematic, dynamic, and behavioral analysis of human motions. Finally, the psycho-cognitive sub-model deals with humans' psychological and cognitive behaviors as they interact with events, other people, and environments.

The true difficulty in context classification is not how to define orthogonal dimensions that can categorize the context contents; rather, it originates in the far-reaching implications of context information itself. For example, at different abstraction levels, a user's activity context description can be "fingers tapping the keyboard", "typing on a computer keyboard", "typing a report", "preparing a report for a task", "working", etc. The number of ways to describe an event or object is unlimited, and there is no standard or even a guideline regarding the granularity of context information.

Because classification of general context information is not feasible at the current research stage of context-aware computing, a pragmatic approach is adopted. The adopted context classification scheme is a user-centered approach that groups context into three categories: (1) the physical environment around the user; (2) the user’s own activity; and (3) the user’s physiological states. The relationship of the three context categories is illustrated in Figure 3, and the context elements are listed in Table 1, Table 2, and Table 3.



As discussed in Section 1.2, context-aware computing serves human-computer interaction purposes, so using a user-centered scheme to classify context is a natural evolution in context information management. Notice that, because the state-of-the-art context-understanding level varies across different aspects of human user context, the boundaries among the three context classification tables may not be crisply clear. However, the three classification tables should nonetheless be valuable as a reference for context-aware computing application developers, because they include all the context information elements that have been used or even mentioned in all the literature the author has found.

Figure 3. The user-centered scheme to group context information


Referring to Table 1, environmental context description includes the following aspects of information:

  • Location-related information in terms of large area, absolute physical position, or geographic and climate implications, etc.; when it changes, the user is traveling.

  • Proximity-related information in terms of small area, which includes where the user currently is, what the environment means to humans, and what facilities and devices he/she can reach and possibly use; when the proximity changes, the user is walking or running from one place to another.

  • Time-related information, either in the absolute sense, such as time of day (implying darkness of the sky, etc.) or season of the year (implying how humans would feel in outdoor activities, etc.); or in the sense of expected activities (including other people's activities), such as time for work, for lunch, or for a vacation.
