University of Pennsylvania




School of Engineering and Applied Science







Aggregate Motion Synthesis and Recognition System


Elena Zoubanova

Charles Adams




Advisor: Dr. Norman I. Badler

April 25, 2002




ABSTRACT


From the beginning of human development, language has served to simplify and condense human experience. Describing an entity by machine methods involves capturing both its significant features and its significant motion; describing multiple entities adds the third task of correlating individual actions to represent the motion of the group. This project aims to capture the cognitive power of language and use it to describe, in natural language terms, the real-time, macroscopic motion of multiple entities in a digital domain.


The approach taken is to consider terms that describe group motion and break them down into component attributes whose combinations reflect the intended verbs. Each attribute is represented by a mathematical expression that yields the probability of the attribute being present within the group under study. The probability of each attribute is determined, and the combination of probabilities is analyzed to arrive at the aggregate term that best describes the group's motion. The result is a textual description of group motion and a visual breakdown of this motion over time.


This recognition system is extended to handle multiple groups of aggregate entities without prior knowledge of group membership. Using a K-means clustering algorithm, features weighted by geometric location, orientation, and velocity determine the partitioning of the sample space into groups.


Applications for this system include military surveillance, where visual data captured by cameras can be processed to produce verbal feedback to those in command. Systems can dynamically alert administrators to problem situations based upon specific recognition patterns, eliminating the necessity of constant human monitoring. Connecting the Aggregate Motion Recognition System to a real world perception-based application is the next logical step after this project.


TABLE OF CONTENTS




  1. MOTIVATION/GOALS

  2. INTRODUCTION

    2.1 Singular Motion

    2.2 Secondary Motion

    2.3 Aggregate Motion

  3. PREVIOUS WORK

    3.1 Laban

    3.2 Bindiganavale

    3.3 Zhao

    3.4 Reynolds, Musse, Thalmann, Cohen

  4. IMPLEMENTATION

    4.1 Theory of Operation

      4.1.1 Aggregate Motion Synthesis Tool

      4.1.2 Aggregate Motion Recognition

        4.1.2.1 Defining Aggregate Motion Features

        4.1.2.2 Mathematical Representation of Features

        4.1.2.3 Group-Based Recognition System

        4.1.2.4 Performance and Scalability

    4.2 System Specifications

      4.2.1 Sampling vs. Scale

      4.2.2 Group Exceptions

    4.3 Demonstration

  5. RESULTS AND CONCLUSIONS

  6. FUTURE RECOMMENDATIONS

  7. NOMENCLATURE

  8. REFERENCES

  9. BIBLIOGRAPHY




1.0 MOTIVATION/GOALS


The world is composed of three spatial dimensions, complemented by the fourth dimension of time. The activities of entities, whether natural, mechanical, physical, or sentient in nature, all exist within these dimensions. While experience is captured through multiple senses, human expression is limited to physical gestures, the written word, and verbal communication. These cognitive abilities are therefore augmented by physical devices with the goal of improving the communication of human experience. This project aims to capture the enhanced cognitive power of language and use it to describe, in natural language terms, the movement of large numbers of entities in a digital domain. The result of this project is applicable to augmented cognition systems ranging in function from security devices to military surveillance. Within the military environment, a commander relies on the communication of reconnaissance teams to characterize and summarize multi-entity movements and situations. It would be a time-saving advantage if these movements were captured and summarized in real time without relying on human perception.


Specifically the project aims to:


  • Build computational definitions of lexical items and concepts that describe movements of oriented, point-like entities.

  • Demonstrate the lexical items via the simulation of aggregate entities.

  • Show how the computational definitions may be used to recognize significant activities of a set of entities within a larger aggregate population.

  • Produce computationally efficient methods for reporting significant aggregate actions in real-time.

  • Provide a software module that may be incorporated into multiple augmented cognition systems.




2.0 INTRODUCTION


The basis for augmented cognition lies in the ability of a device not only to recognize, in geometric terms, the movements of an action, but also to express this information succinctly. Augmented cognition is a subset of the general field of Artificial Intelligence, in which numerous paths have been taken in the attempt to emulate one or more human sensory abilities. The path that applies most to this project is the work done in sight perception, specifically the recognition of movement. There are three sub-areas within this field that, together, encompass the majority of motion:


  • Recognition of singular motion

  • Recognition of secondary motion including human gestures and emotions

  • Recognition of aggregate motion




2.1 SINGULAR MOTION


An individual walking down the street, jogging on a track, or climbing a mountain is a simple example of singular motion. This motion may be broken down into varying degrees of effort, speed, direction, and time. Singular motion has been researched extensively, and it is currently possible to track and describe single entities in both digital and physical space.


2.2 SECONDARY MOTION


Secondary motion describes actions that accompany, but are not exclusive to, singular motion. It involves facial expressions, hand gestures, and other movements that are typically driven by internal states and are not directly measurable with mathematical expressions. In order to recognize these actions, researchers have used neural networks and learning algorithms to simulate the cognitive power of the brain. The resulting description is then either given in lexical terms or supplied to a digital character within the confines of an emotion generation system.



2.3 AGGREGATE MOTION


The movement of multiple entities is composed of individual motions but can only be described in aggregate terms. Consider three individuals leaving an arena. Each one may perform multiple actions and head in a different direction on the way out; as a group, however, they are dispersing. Taken individually, their actions do not reveal the group action. While much work has been done in simulating aggregate entities, including crowd movements, military simulations, and animal flocking demonstrations, very little has been done in recognizing this motion. Aggregate motion is inherently not absolute. The approach taken in this project is to break aggregate movement terms down into features that can be recognized individually; the combination of features then describes the intended motion. The goal is not to provide a single definitive explanation of motion but to give a probabilistic description of what is occurring. While the scope of this project is entirely within the digital domain, a future extension is to combine a physical camera tracking system with the aggregate recognition module to describe real-life situations.



3.0 PREVIOUS WORK




3.1 LABAN


Rudolf Laban made important contributions [1] to the study of human movement through his experience as, among other things, a dancer, choreographer, architect, and painter. He spent a great deal of time observing ordinary people perform everyday actions. From this research comes a vocabulary for describing motion known as Laban Movement Analysis. It is composed of five major components: Body, Space, Shape, Effort, and Relationship, which constitute a complete description of motion. Laban further defines each component in terms of its constituent features as they relate to human actions. From this work comes the majority of current knowledge about both individual and group motion.


3.2 BINDIGANAVALE


Rama Bindiganavale researched the recognition of human actions and their description using a Parameterized Action Representation. Her work at the Center for Human Modeling and Simulation (HMS) at the University of Pennsylvania has led to improved natural language capabilities of simulation systems [2].


3.3 ZHAO


Liwei Zhao, also at HMS, extended the work of Laban to provide a method for recognizing and generating human gestures using an Expressive Motion Engine (EMOTE). This system applies Effort and Shape qualities to independent underlying movements, thus producing more natural synthetic gestures. From his work comes a vocabulary for describing secondary motions based upon Space, Weight, Time, Flow, and Direction [3].



3.4 REYNOLDS, MUSSE, THALMANN, COHEN


In 1987, Reynolds created a particle-based simulator for flocking, herding, and schooling in which a migratory attractor is used to control the coarse movement of aggregates. However, the overall motion results from a combination of coarse movements and individually simulated actions [4].


Musse and Thalmann created a crowd simulation involving varying levels of autonomy. Their scenario combines rule-based behaviors with user-controlled agents. Attractors, also known as interest points, are used to move the crowds around. In general, simulation systems like this one are limited in their control of aggregate movement. Using only attracting forces can complicate the simulation of complex movements and quickly become inefficient [5].


Other computer generated aggregate simulations treat aggregate groups as one unit. When necessary, the unit will disaggregate and perform the required action. This method is used particularly in military simulations where varying levels of group size and control are needed. Paul Cohen at the Experimental Knowledge Systems Laboratory at the University of Massachusetts, Amherst has developed a Capture the Flag simulator. In this war-game environment, aggregate entities are represented as blobs. Units can change shape to adopt different column, wedge, and “V” formations, or in response to terrain features [6].


While individual motion has been recognized and described in lexical terms, no prior work has been found on recognizing the motion of groups of entities. The only known scenario resembling aggregate recognition is research done in conjunction with RoboCup soccer simulations, in which commentator systems take position and orientation information, along with game score and other data, and create higher-level conceptual units that may be used to produce natural language commentaries [7]. That system, however, is very domain-specific.


4.0 IMPLEMENTATION


The approach to this project follows a logical and orderly plan. Aggregate motion is described in lexical terms, simulated based on these terms, and recognized based on the features that compose the terms. Performance is measured through surveyed human perception and through the scalability of the system given available resources. Only a human can judge the true accuracy of the system; therefore, accuracy is measured by having an individual outside the project view the simulation of the objects. The same person views the recognition output and verifies the accuracy of the textual description. Provided the system is verified by human observers, ideally more than one, performance is measured as the number of entities on screen versus the time it takes to recognize them, or as the frames-per-second rate calculated on both the synthesis and recognition sides. The credibility of the system's results rests on its implementation of well-defined mathematical expressions for the features being both simulated and recognized.


4.1 THEORY OF OPERATION


The description of aggregate motion involves two separate yet necessary components:


  • Aggregate Motion Synthesis Tool

  • Aggregate Motion Recognition Tool


4.1.1 Aggregate Motion Synthesis Tool

The Aggregate Motion Synthesis Tool has been designed for the following key features:


  • Keep development process short to focus on recognition system

  • Provide feature based simulation to parallel feature based recognition

  • Allow for a scalable architecture to simulate over 50 entities at once

  • Provide for new features to be added easily


In light of the requirements, the commercial graphics package Maya™ was selected. The Aggregate Motion Synthesis Tool uses the highly customizable scripting language of Maya™ to create a simple user interface that controls numerous and more complicated dynamic forces.


In general, the two main types of simulations are dynamic-based systems and behavioral-based systems. A dynamic-based system is similar to the approach of Reynolds, where dynamic forces control coarse movements. The drawbacks are that fine adjustments to group motion are difficult to make and a large set of dynamic tools may be needed to simulate complex scenarios. Behavioral systems are similar to the work of Musse and Thalmann, where programmed rules define the movement. For this project, one of the most important concerns was that the synthesis tool not become the focus of the project or consume an inordinate amount of time. For this reason, commercial dynamic tools were considered. However, most commercial tools neither provide source code nor allow simple adaptations to be made to their software, while a system built entirely from scratch would require a great deal of time and research to implement dynamic systems and create a simple user interface. A compromise is a highly customizable commercial package that provides both a large set of dynamic forces and a flexible scripting language, allowing the user to create simple interfaces to control the forces dynamically. Alias|Wavefront's industry-standard graphics program Maya™ provides these features.


The Aggregate Motion Synthesis tool is based within the scripting language of Maya™. Shown in Figure 1, it consists of a user interface that connects a set of attractors, volume axis fields, and other dynamic elements to a series of sliders that the user can change. The sliders control forces that act upon rigid bodies. By using rigid bodies as opposed to particle systems, more control over individual object movements is provided and accurate collision detection is implemented. Although group level movements are used predominantly, varying degrees of individual randomization as well as support for multiple simultaneous aggregate groups is present. At each frame of the simulation, position, velocity and rotation data for each entity is passed across a network connection between computers, or a socket, to the Aggregate Motion Recognition module. When necessary, feedback information concerning group assignments is passed from the Recognition Module to the Simulation Tool to provide the user with a visual representation of each group. The Simulation Tool will then color each object according to which group it has been assigned to by the Recognition module. No prior group information is sent from Maya to the Recognition module.
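As a concrete illustration of the data crossing this socket, the sketch below shows one way the per-frame record for each entity might be laid out in the C++ Recognition module. The field names and types are assumptions for illustration; the actual wire format is not reproduced here.

    // Sketch of the per-frame sample received for each entity across the
    // socket. Field names and types are illustrative assumptions.
    struct EntitySample {
        int   id;     // unique entity identifier
        float x, z;   // planar position (the system operates in two dimensions)
        float vx, vz; // planar velocity
        float yRot;   // rotation about the vertical (y) axis
    };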


Figure 2 depicts the design layout of the Aggregate Motion Synthesis scripting environment. Key components include the Group Manager, which is responsible for keeping track of individual entities, including their group and dynamic associations. Expressions created inside the Maya™ program connect internal dynamic elements to the external script and to the socket layer. Full capability exists for the user to dynamically assign and reassign individual objects to simulation groups. These groups are neither indicated to the Recognition module, nor correlated with the Recognition module's own group determinations.


Aggregate Motion Synthesis Tool

Figure 1


4.1.2 Aggregate Motion Recognition


During the construction of the Aggregate Motion Recognition System, key obstacles overcome include:


  • Defining significant characteristic features of aggregate movements

  • Developing mathematical representations for these features

  • Recognizing aggregate activities of an arbitrary subset of entities within a larger population

  • Computing descriptions in real-time with scalability to large numbers of entities


The system is considered to be successful if it can accurately, as judged by a human observer, display textual descriptions of aggregate motion from the Synthesis Tool at the same rate at which data comes into the system. Since the groups are simulated at 15 frames per second, the Recognition System is designed to work at 15 frames per second also.


4.1.2.1 Defining Aggregate Motion Features


The theory behind meaningful and efficient aggregate motion recognition is that terms used to describe aggregate motion can be decomposed into features that represent the terms. By recognizing this set of features it is then possible to combine the features to return the correct verb. There may be multiple verbs that result from the data. Therefore a probability is also assigned to each verb so that the most likely motion term may be chosen. The user is presented with a histogram showing the likelihood of all motion verbs as well as a time-breakdown of the most likely one.


Natural language researcher Karen Kipper assisted in properly constructing the list of verbs describing aggregate motion. True aggregate terms apply exclusively to aggregate entities; a verb that describes individual motion, although it may be relevant to aggregates, falls under a separate category. Rather than focusing only on true aggregate verbs, this project considers verbs that are useful in describing group motion, such as in the context of surveillance, where non-aggregate terms are helpful as well. Table 1 shows the list of verbs decided upon as well as the features that make up their composition.


Liwei Zhao's work [3] on describing gestures for the EMOTE system is the basis for determining aggregate motion features. Fourteen attributes, based on the categories of space, weight, time, flow, and direction, are outlined in Zhao's work. As not all of these features apply to group motion, only those based on space, time, flow, and direction are considered.


4.1.2.2 Mathematical Representation of Features


The recognition system is based upon quantitative measurements of attribute features. Concrete formulas were therefore developed, based upon mathematical constructs, which uniquely determine individual features. These constructs were chosen to allow for degrees of feature presence rather than binary decisions. Determining attributes through probabilistic expressions makes the accuracy of the attribute correlation measurable, and the precision of the attributes determines the corresponding precision of the verb referenced by their combinations. In this way verb recognition data is organized into a time-based histogram representation. The set of mathematical expressions is shown in Table 2. The implementation of the slow/fast attribute pair tests the average velocity of each group against a predetermined threshold and produces a normalized factor (between zero and one) for each of the two attributes depicting the overall aggregate speed; if "fast" is one, then "slow" is zero, and the group is said to be fast. The spreading/enclosing pair is implemented in a similar fashion. The sudden attribute is broken down into three separate measures of "suddenness": sudden change in orientation, sudden change in velocity, and sudden dispersal (sudden movement away from the center of mass). An example of the orientation aspect is the aggregate verb "veer," for which a sudden change in velocity is not meaningful. The verb "surge," on the other hand, relies on a sudden change in velocity but not on a change in orientation. Each feature is individually normalized to a value between zero and one to allow for logical and efficient verb recognition based on the combination of features.
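To make the normalization concrete, the sketch below shows one way the complementary slow/fast pair could be computed from a group's average speed. The linear ramp around the threshold is an assumption; the text specifies only that each factor lies between zero and one and that the two attributes are complementary.

    #include <algorithm>

    // Hedged sketch: derive the complementary slow/fast attribute pair from
    // a group's average speed. The linear ramp is an assumption; the paper
    // states only that the factors are normalized to [0, 1] and that if
    // "fast" is one, "slow" is zero.
    void slowFastAttributes(double avgSpeed, double threshold,
                            double& slow, double& fast) {
        double f = avgSpeed / (2.0 * threshold); // 0.5 exactly at the threshold
        fast = std::min(1.0, std::max(0.0, f));
        slow = 1.0 - fast; // complementary pair
    }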


4.1.2.3 Group-Based Recognition System


In real-world scenarios it cannot be assumed that groups will be neatly delineated for analysis. A major feature of the recognition system is therefore its ability to decide which entities belong to each group. Group membership is further refined over time to present the most accurate depiction possible of group motion. Once an entity has been assigned to a group, it is analyzed only within the context of that group; groups, however, do not hold exclusive rights to their members, so an entity determined not to be a valid group member may be reassigned to a more suitable group. This principle of operation is the basis for the five main components of the recognition system, shown in Figure 3.


  • Data Storage

  • Group Management (K-means)

  • Group Analysis

  • Attribute/Verb Recognition

  • User Interface



Data Storage


Geometric data is given as input to the system: the x and z position, x and z velocity, and y rotation of each entity. Aggregate actions occur on a macroscopic scale, over durations of three or more seconds, and simulation systems ideally run at thirty frames per second. It is therefore only necessary to sample new data roughly every ten frames to capture all relevant motion. Furthermore, attributes need a certain window of time in which a pattern can be detected, so ten samples should be sufficient to capture motion occurring within a three-second window. As geometric data comes into the system, it is arranged in windows of ten samples, and the window shifts forward one sample as each new set of data enters. A secondary group-list data structure stores the calculations related to each group, including a list of IDs of the objects belonging to the group as well as the average velocity, position, and rotation calculations for the group as a whole. A block diagram of the data storage system is shown in Figure 4.
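A minimal sketch of this sliding window follows, reusing the EntitySample struct sketched in Section 4.1.1. Only the ten-sample size and the shift-by-one behavior come from the text; the container choice and names are assumptions.

    #include <cstddef>
    #include <deque>
    #include <vector>

    // Sliding window of the last ten samples. Each entry holds the sampled
    // data for every entity at one sample time.
    class SampleWindow {
        static const std::size_t kWindowSize = 10;
        std::deque< std::vector<EntitySample> > frames_;
    public:
        void push(const std::vector<EntitySample>& frame) {
            frames_.push_back(frame);
            if (frames_.size() > kWindowSize)
                frames_.pop_front(); // shift the window forward one sample
        }
        bool full() const { return frames_.size() == kWindowSize; }
    };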

Group Management (K-Means)


The primary responsibility of this component is to manage group membership. This is accomplished through a K-means clustering algorithm suggested by Dr. Lyle Ungar of the CIS department. The algorithm makes use of the geometric location, orientation, and velocity of the individual entities to determine the partitioning of the sample space into groups. It begins by randomly assigning each entity to one of two groups. Then the centers (average position, rotation, and velocity) of each group are calculated. Each entity’s position is compared with the average group position, and the “distance” between the two values is calculated. The same is done for orientation and velocity. Finally, the three distances are multiplied for each entity to obtain a three-dimensional distance from each group center, and group membership is modified in such a way as to minimize each entity’s distance from the center of a group. The algorithm is rerun several more times with an initial random partitioning into 3, 4, etc. groups, until the optimal number of groups is determined, based on the calculated error going below a specified threshold. Figure 5 shows a time progression of the algorithm depicting two separate groups traversing through each other and emerging on the opposite side. The colors seen are dynamically assigned based upon group feedback from the Recognition Module.
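The assignment step of this clustering might look like the sketch below, again using the EntitySample struct from Section 4.1.1. Multiplying the three per-factor distances follows the description above; the epsilon guard and all names are assumptions, and the center-update loop and the error-threshold search over the number of groups are omitted.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // A group center: the average position, velocity, and rotation of the
    // group's current members.
    struct Center { double x, z, vx, vz, rot; };

    // Assign an entity to the nearest center, where "distance" is the
    // product of the position, velocity, and orientation distances. The
    // small epsilon (an implementation assumption) keeps a single zero
    // factor from collapsing the product.
    int nearestGroup(const EntitySample& e, const std::vector<Center>& centers) {
        const double eps = 1e-6;
        int best = 0;
        double bestDist = 1e300;
        for (std::size_t g = 0; g < centers.size(); ++g) {
            double dPos = std::hypot(e.x - centers[g].x, e.z - centers[g].z);
            double dVel = std::hypot(e.vx - centers[g].vx, e.vz - centers[g].vz);
            double dRot = std::fabs(e.yRot - centers[g].rot);
            double d = (dPos + eps) * (dVel + eps) * (dRot + eps);
            if (d < bestDist) { bestDist = d; best = static_cast<int>(g); }
        }
        return best;
    }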


Group Analysis


The group analyzer calculates common attributes such as center of mass, average velocity, and average rotation. This data is then available to the Recognition component for use in the next step.


Attribute/Verb Recognition


This component lies at the heart of the Aggregate Motion Recognition module. Data comes in one group at a time and is evaluated over the frame window (the last ten samples). It is checked against a series of mathematical expressions relating to the features that make up the verbs. In most cases, individual entity data is compared to group-calculated data and the average results are recorded over time. The verb recognition component applies weightings to the attributes for every verb to reflect each attribute's contribution to the probability of a given verb. The weighted attributes are then summed and normalized to a value between zero and one, which is the probability of the verb. The verb recognizer then determines the highest-probability verb for that frame window.
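A sketch of this weighted combination is shown below. The attribute names and weight values would come from Table 2 and the per-verb weightings; everything here is illustrative, since the text specifies only that weighted attribute values are summed and normalized to a value between zero and one.

    #include <map>
    #include <string>

    // Combine normalized attribute values (each in [0, 1]) into a verb
    // probability using per-verb weights. Dividing by the total weight
    // keeps the result in [0, 1]. All names are placeholders.
    double verbProbability(const std::map<std::string, double>& attributes,
                           const std::map<std::string, double>& weights) {
        double sum = 0.0, totalWeight = 0.0;
        for (const auto& w : weights) {
            const auto a = attributes.find(w.first);
            if (a != attributes.end()) {
                sum += w.second * a->second; // weighted attribute value
                totalWeight += w.second;
            }
        }
        return totalWeight > 0.0 ? sum / totalWeight : 0.0;
    }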


User Interface


The progression of each verb’s probability of occurrence is stored from the Verb Recognition component. The user interface component can then be used to view the data in a graphical format as shown in Figure 6, with the likelihood of each motion verb displayed over time and side by side with the probabilities of other verbs. The verb with the largest likelihood is shown on its own chart in order to display the most likely progression over time. A simple natural language description of the recognized events is given next to the charts. The user may switch between descriptions for each group and between views of different verbs.


4.1.2.4 Performance and Scalability


It is difficult to quantitatively measure how well a system matches human perception. The measure of performance is therefore based on the ability of the system to:


  • Work in real-time, measured in frames per second where 15fps is real-time

  • Scale to large numbers of entities, up to 100 or more

  • Scale based on system resources and processing power


Within the scope of this project, development was done using the Maya program to synthesize aggregate motion and to generate entity data. Unfortunately, Maya is limited in the number of rigid-body entities it can simulate before the computational overhead becomes too great; significant system degradation occurs with more than 30 entities. However, since the recognition system is not dependent on Maya and can work with any simulation system that provides entity motion data across a socket, the recognition itself is not limited to 30 entities and is forecast to scale up to at least 100 entities.


4.2 SYSTEM SPECIFICATIONS


The system is designed to work under the following specifications:


  • Format is a C++ recognition module that runs in conjunction with the dynamics based simulation tool Maya™

  • System operation is at least 15 fps, or what is commonly called real-time

  • Simulation up to 30 entities including multiple aggregate groups with results forecasted for 100 or more entities


These specifications have been developed based upon common observations in computer graphics and the current limitations of dynamic software to simulate 100 or more entities. In order for the recognition module to work efficiently within the bounds of a real-time system, it must fit specific time requirements and overcome these inherent obstacles:


  • Entities must be well-grouped to provide accurate recognition

  • Sampling, both in number of entities and number of samples, must be appropriate to the scale of the resources available

  • Group determination must still allow for the occasional lone entity which exhibits unique behavior

  • Histogram data from attributes must be efficiently compared to ideal data


4.2.1 Sampling vs. Scale


By measuring the frame rate of the system, it is possible to obtain a performance rating. If this rating falls outside certain levels, the sampling space can be altered and the number of samples changed to maintain a given frame rate. The goal is to maintain 15 frames per second. If the system drops below 10 frames per second, the sampling interval is increased to allow more processing time. This is done, however, at the possible expense of losing important data between sampling times. These measures depend heavily on the number of entities being sampled.
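One possible realization of this governor is sketched below. The one-frame adjustment step is an assumption; the text fixes only the 15 fps target and the 10 fps floor.

    // Adjust the sampling interval from the measured frame rate. The 15 fps
    // target and 10 fps floor come from the text; the single-frame step is
    // an assumption.
    void adjustSampling(double measuredFps, int& framesBetweenSamples) {
        if (measuredFps < 10.0) {
            ++framesBetweenSamples; // sample less often, freeing processing time
        } else if (measuredFps > 15.0 && framesBetweenSamples > 1) {
            --framesBetweenSamples; // headroom available: sample more finely
        }
    }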


4.2.2 Group Exceptions


Although an entity may occasionally be assigned to an improper group or wrongly removed from a group, if it continues to behave like the group it will invariably find its way back in; removal does not exclude an entity from rejoining a group. The method of determining what happens to an entity that does not match the dynamics of its group is straightforward: at regular intervals the K-means clustering algorithm is rerun and individual objects are reassigned to the most appropriate group based on the current data. This approach is especially suited to the occasional lone entity that exhibits behavior contrary to current group motion. Such an entity is treated as its own single-entity group and analyzed as such within the Recognition System. If the entity proceeds to exhibit new behavior consistent with an existing group, it is reassigned to the better group the next time the K-means algorithm is run. Being able to identify lone entities is important in many surveillance situations, where the motion of the group is not as important as that of dissident individuals on the periphery.


4.3 DEMONSTRATION


The true test of the Aggregate Motion Recognition system is a live demonstration of its abilities, carried out in conjunction with the Aggregate Motion Synthesis tool that was written specifically for testing purposes. Motion scenarios are played out in real time on one machine while the recognition is carried out on another. A user is able to verify perceptually how accurately the program determines the predefined movements. The user is further able to give the system a new set of groups and observe how long it takes for the groups to be refined and a proper outcome to be produced.



5.0 RESULTS AND CONCLUSIONS


Testing is limited by the maximum number of entities that the Synthesis Tool can support without loss of performance, that is, without falling below 15 frames per second. This limit is between 20 and 30 entities on a Pentium 4 machine running NT. The Synthesis Tool is currently able to create all necessary group simulations for the verbs that need to be tested, and it handles multiple groups within these simulations as well. Through extension of this research, the Recognition System may be integrated with better synthesis tools to provide accurate data for the recognition of larger numbers of aggregates.


The Aggregate Motion Synthesis Tool and the Aggregate Motion Recognition System are connected by a network socket. This connection spans not only programs but also computers on the network. The necessary socket is implemented for the NT environment. It is therefore entirely possible to run the Recognition System on various machine architectures located in separate buildings, further testing the scalability of the system without the confines of physical space.


The Aggregate Motion Synthesis and Recognition System has been tested with group simulations of up to 30 entities. Based on the responses of independent human test subjects, the Recognition System is able to successfully determine group membership from the available geometric data and to reassign membership both dynamically and accurately during simulation. Color feedback from the Recognition System to the Simulation Tool provides a visual indication to the user that the group analysis is correct. The verb decision component uses the group membership data along with the normalized feature attributes to determine the most probable aggregate motion of each group. These results, displayed through dynamic histogram charts that track the feature and verb probabilities over time, have also been verified by independent observers and provide further evidence of system accuracy.



6.0 FUTURE RECOMMENDATIONS


The future of this project is open to several paths. Foremost is bringing the Recognition tool out of the digital domain of simulations and into real world situations involving actual aggregate entities. For this, the system would need to be integrated into some type of computer perception and tracking system to relay the necessary geometric data to the Recognition Module. Currently, the module exists in two dimensions of space. The algorithms developed to describe aggregate motion features are based mainly on vector manipulations and can be extended without much difficulty to include the third dimension.


An equally important direction for the project is to continue digital simulations while attempting to determine the maximum number of entities that can be recognized. This brings to light an entirely new set of sampling algorithms for properly subdividing the digital space. Analyzing each of 10,000 objects is unrealistic; analyzing an appropriate subset of this group, one that adequately characterizes the relevant motion, is not only realistic but vital to the future utility of this project.


While the proof and credibility of this project lie in both the observations of human subjects and the validity of the mathematical equations, a more suitable determination of accuracy may be obtained through statistical analysis against stock (reference) data. Such stock data must first be collected through appropriate scientific means; once collected, it can serve as a data model for good motion recognition. Data from one chart may be compared against the stock data using measurements of the slope of the graph and acceptable levels of deviation. Based on the deviation, a likelihood can be determined and displayed for that feature. In this way, the system can "learn" what constitutes a good analysis of each verb.
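As one possible scoring of the slope comparison, the sketch below turns the deviation between an observed slope and the stock slope into a likelihood. The Gaussian falloff is an assumption; the text asks only that a likelihood be derived from the deviation.

    #include <cmath>

    // Score how closely an observed probability-curve slope tracks the
    // stock (reference) slope. Returns a likelihood in (0, 1]; the
    // Gaussian form is an illustrative assumption.
    double slopeLikelihood(double observedSlope, double stockSlope,
                           double acceptableDeviation) {
        double dev = observedSlope - stockSlope;
        return std::exp(-(dev * dev) /
                        (2.0 * acceptableDeviation * acceptableDeviation));
    }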



7.0 NOMENCLATURE




  • Aggregate: A group composed of multiple individual entities

  • Augmented Cognition: The field of research that includes the study of physical, mechanical or digital tools used for the purpose of enhancing human perception.

  • EMOTE: The Expressive Motion Engine module created by Liwei Zhao at the University of Pennsylvania.

  • Lexical: Terms relating to vocabulary or natural language.

  • Sentient: Having sense perception or consciousness.




8.0 REFERENCES


[1] Chi, D., Costa, M., Zhao, L., Badler, N.I. “The EMOTE Model for Effort and Shape.” In Proceedings of SIGGRAPH 2000, pp. 173-182, New York, July 2000.


[2] Bindiganavale, Rama. “Building Parameterized Action Representations for Observation.” PhD thesis, University of Pennsylvania, 2000.


[3] Zhao, Liwei. “Synthesis and Acquisition of Laban Movement Analysis Qualitative Parameters for Communicative Gestures.” PhD thesis, University of Pennsylvania, 2001.


[4] Reynolds, C. “Flocks, Herds and Schools: A Distributed Behavioral Model.” In Proceedings of SIGGRAPH ’87, volume 21, July 1987.


[5] Musse, S.R., Thalmann, D. “Hierarchical Model for Real-time Simulation of Virtual Human Crowds.” IEEE Trans. On Visualization and Computer Graphics, 7(2):152-164, 2001.


[6] Heeringa, B., Cohen, P. “An Underlying Model for Defeat Mechanisms.” In Proceedings of the 2000 Winter Simulation Conference, pages 933-939, 2000.


[7] Andre, E., Binsted, K., Tanaka-Ishii, K., Luke, S., Herzog, G., Rist, T. “Three Robocup Simulation League Commentator Systems.” AI Magazine, 21(1):57-66, 2000.



9.0 BIBLIOGRAPHY


Vitale, Jonathan. “The Linguistic Representation of Spatio-Temporal Features.” Senior Design Project, University of Pennsylvania, 2001.


Wejchert, J., Haumann, D. “Animation Aerodynamics.” Computer Graphics, volume 25, number 4, July 1991.


