Supporting Mobile Workers in Car Production by Wearable Computing – Applied Context Detection

Carsten Matysczok1, Iñaki Maurtua2

1UNITY AG, Lindberghring 1, 33102 Bueren, Germany, carsten.matysczok@unity.de

2Fundación Tekniker, Otaola 20 P.K. 44, 20600 EIBAR Gipuzkoa, Spain, imaurtua@tekniker.es

Abstract

Today’s requirements in production process efficiency combined with their increasing complexity represent a great challenge for staff members at all levels (from the assembly worker to the plant manager). The ultimate goal is the fine-tuning of the production process to perfectly fulfil customer orders and to keep the overall efficiency at high levels.

Through scenarios based on real situations, and tested in a real industrial environment (a Škoda car manufacturing production plant), the wearIT@work project demonstrates how wearable technology can allow an efficient, successful working environment by providing ubiquitous, mobile access to production process-related information where and when necessary: at the shop floor, at the assembly line, and at the manufacturing workstations. This allows workers at different levels to improve the training process of inexperienced workers, to improve availability of information, to speed up localisation and detection of areas to be repaired or maintained as well as to improve communication and knowledge sharing.

This paper describes the context detection techniques used to support mobile workers in car production with wearable technologies.

Keywords

Context detection, wearable computing, production, user centred design


1 Introduction


Industrial companies currently operate in a very difficult business environment, characterized by increasing innovation dynamics and shrinking product lifetimes. At the same time, products and production processes are becoming more and more complex. This results in high demands on the production processes and on the involved employees.

In this context the omnipresent availability of product-related information is one of the major success factors. This information should be available at any time, at any place and to every employee. Today the majority of process- and product-related data is stored in different IT systems and databases: technical information (CAD models, work plans, NC control information, etc.) is stored in PLM systems, job-specific information in ERP systems; in addition, self-made solutions are used in dedicated departments. Thus the required information is basically available within the company, but access to it is generally difficult, time-intensive and limited to dedicated stationary PCs. Obtaining the required information has therefore become a very time- and cost-intensive process.

Nowadays the simulation of production sequences is a common tool to safeguard strategic and tactical decisions. Once a model of a process has been developed, various alternatives of this process can be analyzed easily. Therefore, questions like "How much more can I produce when using one more fork lifter?" can be answered quickly. Thus the modelling and analysis of material flows are of particular importance. In this context, programs like eM-Plant and eM-Workplace from Tecnomatix or Taylor ED from Enterprise Dynamics are used to support the above-mentioned planning process. However, one drawback of these systems is their unintuitive user interface. Normally well-trained users are required, so that the development of complex simulation models takes a lot of time.

In this context the terms Digital Factory and Virtual Production describe a new approach to manage these challenges: computer models of the planned production systems are created and analyzed in the early planning stages. The result is the early identification of planning errors, which saves time and avoids rising costs. Here the discrete simulation of the behaviour of the production facilities has a predominant impact. Currently those simulation tools require excellently trained users, so that the creation of complex simulation models is coupled with huge time constraints due to the traditionally unintuitive WIMP (Windows, Icons, Mouse, Pointer) interface.

In conclusion, an urgent need can be identified to develop a wearable device which allows end users to access production-related data anytime and anywhere. Furthermore, it becomes obvious that the current interface to the computer as well as to the software and databases has to be improved, so that even untrained users can retrieve information about the product and the corresponding production processes. Finally, it has to be ensured that the developed solution can be easily integrated into the existing IT infrastructure, the software and databases in use, and the established workflows of the company.


Within the wearIT@work project, the production pilot team follows its own methodology: an initial requirement elicitation process with real end users, usability tests with local users close to the research team, and final validation with the actual end users. It must be mentioned, however, that it is very difficult to get hold of the real end users in a global company where production is highly dependent upon human resources.

2 Covered Production Scenarios


Through scenarios based on real situations, and tested in a real industrial environment (a Škoda car manufacturing production plant), the wearIT@work project demonstrates how wearable technology can allow an efficient, successful working environment by providing ubiquitous, mobile access to production process-related information where and when necessary: at the shop floor, at the assembly line, and at the manufacturing workstations. This allows workers at different levels to improve the training process of inexperienced workers, to improve availability of information, to speed up localisation and detection of areas to be repaired or maintained as well as to improve communication and knowledge sharing.

2.1 Training


Usually the training of employees combines the imparting of theoretical knowledge with practical training. The theoretical content is provided on paper or electronically (as PowerPoint slides, videos, etc.). The practical training is then performed under the supervision of a trainer, who analyses the single work steps, suggests improvements and points out errors.

Compared to stationary computer systems, mobile and wearable computing technology has seriously caught up in performance, functionality and scalability. This makes training solutions based on mobile and wearable computing an attractive consideration for industrial organisations. In this sense, one of the objectives of this scenario was to supplement the training procedures at Škoda with a context-sensitive wearable computing solution. The wearable system was used to recognize the context of the performed work and, as a result, provide the trainee with the information required to adequately perform the individual assembly tasks. In parallel, the wearable computing system tracks and analyses the trainees' activities. While the workers perform their training, the supervisor is connected to all active wearable systems via his PC and can monitor all activities.

The nature of the assembly activity itself made it necessary to design a system that does not restrict the workers' freedom of movement, while allowing them to handle all necessary components and tools. It was especially crucial to take into account that workers had to adopt many different postures during the assembly process: crouching, standing, seated, inside and outside of the car. The chosen setup does not impose much additional effort on the user, because the required information is accessed without, or with only minimal, explicit interaction with the system.

The usage of wearable computing in this scenario will result in a reduction of training time for the individual trainees, because the wearable system can provide individual, context-related advice to the trainee in real time, considering the predefined media type and the trainee's learning progress. Furthermore, an integrated wearable computing solution will allow decentralised training and supervision of the trainees, so that 100 % on-site availability of the trainer is no longer required.

The aim of the first experiment was to extend the initial findings of the experiment made with the workers of Škoda at Vrchlabi. The intention was to measure the acceptance of the system, as well as performance in terms of memorability (how fast workers get trained) and task completion (time consumed and errors made). All in all, 40 workers were recruited and divided into two groups of 20. With the first group of workers, the aim was to measure and compare their performance in an assembly activity when they accessed explanatory paper-based information and when this information was accessible through wearable technology. The workers tried to perform the complete assembly task as fast as possible and only once.

The second group of workers was used to evaluate how wearable technology can contribute to the training process. A prerequisite was that the workers had to learn how to complete the proposed activity. This meant that they had to repeat the full process until they were able to perform the activity without any kind of support. As the "short-term memory factor" was to be measured, the workers had to perform the same task one day later, without support.

In both cases the workers had to perform the experiment twice: once using paper-based support and once using one of three interaction modalities, which were assigned randomly: a textile keyboard attached to the sleeve, speech commands, and non-explicit or context-based interaction.

According to the UCD approach we follow in the project, several experiments were carried out with a representative number of workers at local premises.



In summary the main findings were:

  • Users improved their performance when using the wearable system with implicit interaction: the assembly tasks were performed faster and with fewer errors. In fact, they took on average 67 seconds less than when paper-based information was used, which was the second-best alternative.

  • Users did not learn faster using the wearable system. In fact, people were able to learn faster with paper-based support, although the difference was negligible compared to those using context-based interaction.

  • In the test performed the day after, paper-based learning performed best, while context-based interaction performed worst.

  • Voice recognition-based interaction was the interaction modality preferred by the workers.

  • Workers preferred graphical information to text.

  • Workers found the system very useful when doing a complex task, as it allows hands-free access to information and avoids unnecessary movements to check information.

2.2 Final Assembly – Quality Assurance


Quality assurance is part of quality management and ensures compliance with the quality standards. This allows the company to maintain a defined product quality rather than to optimize it. Within series production this is implemented by using quality KPIs (key performance indicators). Quality management defines the methods necessary to achieve the required product quality (determination of test procedures, sample size, communication flow in case of detected errors, training of the quality assurance staff, etc.). Quality assurance secures adherence to those defined actions. The final testing covers the comparison of the measured values with the predefined limit values as well as the classification of the corresponding inspection result (rework, rejection).

A wearable system can support quality assurance by recording the performed quality checks and documenting the corresponding results. Areas of interest are: discrepancies of components on the assembly line, the alignment of the components and their fittings, control of the correctness and completeness of operation sequences, as well as simplification of the work procedure. Afterwards the recorded data is forwarded to the responsible persons in order to be analysed and to initiate corrective actions within the final assembly. Besides, we have identified the process of reporting any detected fault as one of the possible topics where wearable technology can be applied to facilitate the work of operators, eliminating the need to handle pieces of paper during the full checking activity and reducing the mental workload associated with report completion. In this scenario the wearable computing system will document all performed actions and quality checks automatically. Furthermore, the system will safeguard that all required quality checks are performed.
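The safeguarding of quality checks described above can be illustrated with a minimal sketch (class and method names are hypothetical, not part of the wearIT@work framework): the system records each performed check and reports which required checks are still missing before the car leaves the station.

```java
import java.util.*;

// Hedged sketch with hypothetical names: record each quality check as it is
// performed, report required checks that are still undocumented, and
// consider the station complete only when every required check has passed.
class QualityCheckLog {
    private final Set<String> required;
    private final Map<String, Boolean> results = new LinkedHashMap<>(); // checkId -> passed

    public QualityCheckLog(Collection<String> requiredChecks) {
        this.required = new LinkedHashSet<>(requiredChecks);
    }

    public void record(String checkId, boolean passed) {
        results.put(checkId, passed);
    }

    /** Required checks that have not yet been documented. */
    public Set<String> missing() {
        Set<String> m = new LinkedHashSet<>(required);
        m.removeAll(results.keySet());
        return m;
    }

    /** True only when every required check has been performed and passed. */
    public boolean complete() {
        return missing().isEmpty() && !results.containsValue(false);
    }
}
```

In such a design the `missing()` set could drive a reminder on the head-mounted display, while the recorded results are forwarded for analysis.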

3 Context Detection


In the first phase of the project the aim was to create a wearable system that supported the training process in the learning island. This first prototype formed the basis to evaluate the different modalities of interaction with the assembly line workers under real conditions in the Škoda production site at Vrchlabi, Czech Republic. The front headlight assembly process was selected as a test case since this specific process represents a complex enough task which justifies the use of wearable technologies during training.

In order to track the progress of the headlight assembly, sensors were integrated on distinctive parts of the car body, on the worker, and on the tools. This allows the detection of sub-tasks which are relevant for the different steps in the workflow. On the user's side, an RFID reader and an inertial sensor package have been attached to the back of the user's hand. With the information coming from the RFID reader, required tools such as the two cordless screwdrivers can be detected and uniquely identified. The inertial sensors are used to pick up the release of the torque limiter of the cordless screwdrivers, which occurs when a screw is properly tightened according to the chosen torque.
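The torque-limiter detection can be illustrated with a simplified sketch (the window layout and the threshold value are assumptions for illustration, not the parameters actually used in the pilot): the release of the torque limiter produces a short vibration burst, which can be found by thresholding the deviation of the acceleration magnitude from gravity.

```java
// Illustrative sketch, not the actual CRN implementation: detect the short
// vibration burst produced when the screwdriver's torque limiter releases,
// by thresholding the deviation of the acceleration magnitude from gravity.
// GRAVITY and THRESHOLD below are made-up illustration values.
class TorqueLimiterDetector {
    private static final double GRAVITY = 9.81;   // m/s^2
    private static final double THRESHOLD = 6.0;  // hypothetical deviation threshold

    /** Returns true if any 3-axis sample in the window deviates strongly from gravity. */
    public static boolean clickDetected(double[][] window) {
        for (double[] s : window) {
            double mag = Math.sqrt(s[0] * s[0] + s[1] * s[1] + s[2] * s[2]);
            if (Math.abs(mag - GRAVITY) > THRESHOLD) {
                return true; // impulsive event: candidate torque-limiter click
            }
        }
        return false;
    }
}
```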

On the car’s side the correct positioning of the assembled car components is monitored by a set of stationary sensors mounted directly on the car body. Critical locations with permanent contact to the component, e.g. the contact surfaces behind screws, are monitored by measuring the force exerted on force sensitive resistors on these surfaces. At locations where the assembled components do not touch the car body, magnetically triggered reed switches are used. They also measure the proximity of alignment checking tools used at specific places for quality control.

As wiring up the worker would be too great an impediment during work, all data streams from the wearable sensors are transmitted via Bluetooth modules. The data streams coming from the wearable sensors on the user and the stationary sensors inside the car body are gathered and further processed by the Context Recognition Network (CRN) Toolbox. This software framework provides a library of data processing algorithm modules frequently used for context recognition and allows setting up a processing network using these modules.

Figure 1 shows the network used for the context detection. The left part comprises the processing of the force sensitive resistors and reed switches. After some signal conditioning, a threshold operator is applied to detect the status of the respective sensor. The middle thread shows the CRN tasks dealing with the RFID reader. On the right side, the chain of tasks which detects the occurrence of the described torque limiter release is depicted [Stiefmeier, Lombriser, Roggen, Junker, Ogris, Tröster, 2006]. A merger task brings these three streams together and sends the result to the JContextAPI using a TCP/IP connection.
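The merger task mentioned above can be sketched as follows (the `Event` type and method names are hypothetical, not the CRN Toolbox API): events from the three sensor streams are combined into a single stream ordered by timestamp before being forwarded over TCP/IP.

```java
import java.util.*;

// Simplified sketch of a CRN-style merger task: three upstream streams
// (FSR/reed status, RFID tool ids, torque-limiter events) are combined
// into one stream ordered by timestamp, which would then be forwarded
// to the application over a TCP/IP connection.
class MergerTask {
    static class Event {
        final long timestampMs;
        final String source;
        final String value;
        Event(long timestampMs, String source, String value) {
            this.timestampMs = timestampMs;
            this.source = source;
            this.value = value;
        }
    }

    /** Merge already-sorted per-sensor streams into one timestamp-ordered stream. */
    static List<Event> merge(List<List<Event>> streams) {
        List<Event> all = new ArrayList<>();
        for (List<Event> s : streams) all.addAll(s);
        all.sort(Comparator.comparingLong((Event e) -> e.timestampMs));
        return all;
    }
}
```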




Figure 1: Network used for the context detection [Maurtua, Kirisci, Stiefmeier, Sbodio, Witt, 2007]


4 Used Software Framework


The current prototypes are based on a distributed architecture using several components from the wearIT@work framework. The end-user application is written in Java. It runs on the OQO and relies on the Open Wearable Computing Framework (OWCF). Specifically, the application uses the following OWCF components: the Wearable User Interface (WUI) and the JContextAPI. The application is modelled internally as a finite state machine: each state corresponds to a specific task of the assembly procedure for which the user is being trained; transitions are triggered by user actions, both explicit (for example voice commands) and implicit (i.e. actions that are performed as part of the assembly procedure and that are detected and recognized automatically by the system). The application is capable of tracking the sequence of the user's actions and of monitoring that this sequence corresponds to what is expected in the assembly procedure. Whenever the user performs an unexpected action, the application displays a warning message and can contextually provide appropriate help to support the user.
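The finite state machine described above could be sketched as follows (a minimal illustration with hypothetical names, not the actual application code): each state corresponds to one assembly task, a matching action triggers the transition to the next task, and an unexpected action produces a warning instead of a transition.

```java
import java.util.List;

// Minimal sketch of the state-machine idea (hypothetical names): states are
// assembly tasks in a required order; implicit or explicit events advance
// the machine, and an unexpected event triggers a warning.
class AssemblyStateMachine {
    private final List<String> expectedSequence; // task ids in required order
    private int current = 0;
    private String lastWarning = null;

    public AssemblyStateMachine(List<String> expectedSequence) {
        this.expectedSequence = expectedSequence;
    }

    /** Feed a detected user action; returns true if it matched the expected step. */
    public boolean onAction(String actionId) {
        if (current < expectedSequence.size() && expectedSequence.get(current).equals(actionId)) {
            current++;          // transition to the next task
            lastWarning = null;
            return true;
        }
        lastWarning = "Unexpected action '" + actionId + "', expected '"
                + (current < expectedSequence.size() ? expectedSequence.get(current) : "<done>") + "'";
        return false;
    }

    public String currentTask() {
        return current < expectedSequence.size() ? expectedSequence.get(current) : "finished";
    }

    public String lastWarning() { return lastWarning; }
}
```

In the real application the warning would be rendered on the head-mounted display together with contextual help for the expected task.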

The user actions are detected through a set of sensors, whose data are collected and processed by the context toolbox of the University of Passau. The JContextAPI is attached to this context toolbox [Bannach, Kunze, Lukowicz, Amft, 2006] using the TCPReaderContextProcessor, from which it receives semi-elaborated context information. The JContextAPI performs some aggregation and transformation on the context information received through the TCPReaderContextProcessor and produces a set of context events that are meaningful and easy to handle for the application. The application simply registers a set of listeners with the JContextAPI, which notifies them of relevant context events representing user actions. The application can therefore react according to its internal logic: checking that the user's actions are in the expected sequence and, if not, producing a warning message for the user.
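The listener mechanism can be illustrated with a minimal sketch (this is not the real JContextAPI interface, just an assumed simplification): the application registers listeners for named context events, and the aggregation layer fires them once meaningful events have been derived from the sensor data.

```java
import java.util.*;
import java.util.function.Consumer;

// Hypothetical sketch of the described listener pattern: listeners are
// registered per event type; the aggregation layer fires an event when
// sensor data has been turned into something meaningful for the application.
class ContextEventBus {
    private final Map<String, List<Consumer<String>>> listeners = new HashMap<>();

    public void register(String eventType, Consumer<String> listener) {
        listeners.computeIfAbsent(eventType, k -> new ArrayList<>()).add(listener);
    }

    /** Called by the aggregation layer when a meaningful event was derived. */
    public void fire(String eventType, String payload) {
        for (Consumer<String> l : listeners.getOrDefault(eventType, List.of())) {
            l.accept(payload);
        }
    }
}
```

The application side would, for example, register a listener for a "screwTightened" event and forward its payload to the state machine that checks the expected sequence.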

The context events produced by the JContextAPI can obviously affect the user interface of the application, which is built using the WUI component of the Open Wearable Computing Framework. This pilot application is therefore also an example of the integration between the JContextAPI and the WUI toolkit. The JContextAPI also acts as an interface to the Automatic Speech Recognition (ASR) system of the Open Wearable Computing Framework. The user can in fact issue simple commands by voice (for example "help", to request contextual help information). Whenever the ASR system recognizes a voice command, it produces some information for the JContextAPI (through a dedicated TCPReaderContextProcessor), and the JContextAPI generates an appropriate event for the application. The application can simply register a listener for voice commands and react to them by taking appropriate actions (for example, opening the help page for the current task).

5 Current Status of Implementation


The current industrial pilot developed to support the training of the assembly workers shows the trainee how to assemble the parts correctly and which tools and spare parts have to be used (see figure 2). The system additionally detects the single operations by dedicated sensors and gives an error message if parts are not assembled in the correct way and sequence.




Figure 2: Trainee mounting the headlight by using the wearable computer


The OQO's technical characteristics offer enough computational power to fully support the application, and it also provides appropriate connectivity: network, Bluetooth, external VGA output. The user wears a Carl Zeiss binocular look-around head-mounted display [Brattke, Rottenkolber, 2005] and a Sony Ericsson HBH-300 Bluetooth headset to interact with the prototype application via voice commands. It should be noted that the main navigation mechanism is implicit, i.e. the system detects task completion and automatically proceeds to the information for the next step.





Figure 3: User interface of the application


The tracking of the user's actions is based on a special data glove engineered by ETH Zurich and on a set of sensors attached to the car body (see figure 4). The outputs of the sensors are collected by a stationary system (a laptop), which processes them and makes them available for the recognition of the user's actions. The application uses WLAN communication between the OQO and the computer where the Context Toolbox runs. The application registers a set of listeners which identify relevant context events representing user actions, and can therefore react according to its internal logic: checking that the user's actions are in the expected sequence and, if not, producing a warning message for the user.





Figure 4: Dedicated sensors are mounted on gloves and tools for the detection of the performed operations

The current production prototype is based on a distributed architecture using several components from the wearIT@work framework. The end-user application (henceforth also referred to as application, for brevity) is written in Java and relies on the Open Wearable Computing Framework (OWCF), which has been developed within the wearIT@work project. Specifically, the application uses the following OWCF components: Wearable User Interface (WUI) and JContextAPI.

The application user interface is based on the WUI-Toolkit, which presents the information required to support the user during the assembly steps in an optimized way. The WUI is specifically engineered to achieve best results when presenting output on "look-around" head-mounted displays, and it supports the state-based architecture of the application very well: states can be associated with abstract screen structures, each of which will be rendered as the required graphical widgets (text boxes, pictures, menu items, etc.) together with the navigation towards other screens. The WUI also takes care of building the best rendering of the user interface according to the output device. For this, the envisioned interface capabilities are described with an abstract model independent of any specific interaction style or output medium, implementing a separation-of-concerns software approach.
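The separation-of-concerns approach can be illustrated with a minimal sketch (hypothetical classes, not the actual WUI-Toolkit API): the application describes a screen abstractly, and a device-specific renderer decides how that description is presented, e.g. as a compact text layout for a head-mounted display.

```java
import java.util.List;

// Hedged sketch with hypothetical classes: the abstract model carries only
// content, while a concrete renderer chooses widgets and layout per device.
class AbstractScreen {
    final String title;
    final List<String> instructions; // abstract content, no widget choices

    AbstractScreen(String title, List<String> instructions) {
        this.title = title;
        this.instructions = instructions;
    }
}

interface Renderer {
    String render(AbstractScreen screen);
}

// One possible concrete renderer: compact text layout for a small HMD.
class HmdTextRenderer implements Renderer {
    @Override
    public String render(AbstractScreen screen) {
        StringBuilder sb = new StringBuilder("== " + screen.title + " ==\n");
        for (String line : screen.instructions) {
            sb.append("* ").append(line).append("\n");
        }
        return sb.toString();
    }
}
```

A renderer for a different output device could be swapped in without touching the application logic that produces the abstract screens.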

6 Conclusion and Further Work


The current results of the wearIT@work project show how wearable technology can enable an efficient, successful working environment by providing ubiquitous, mobile access to production process-related information where and when necessary. The developed wearable computing prototypes enable a context-sensitive provision of necessary information to the workers. The wearable solution was able to track and analyse the users' actions, while providing them with guidance for error handling, or by recording the performed quality checks and documenting the corresponding results.

High benefits can be expected from the usage of wearable computing solutions for supporting training procedures and documenting performed quality checks. However, at the current stage there is not yet enough experimental data to draw clear conclusions on further benefits and issues of the proposed solution. Therefore the next steps will comprise the refinement of the solution according to end-user feedback, and the conduction of further tests and field studies within Škoda, in order to gather enough knowledge to evaluate the prototype more comprehensively.

Acknowledgement

This work has been funded by the European Commission through IST Project wearIT@work: Empowering the mobile worker by wearable computing (No. IST-2002-2.3.2.6). The authors wish to acknowledge the Commission for their support. We also wish to acknowledge our gratitude and appreciation to all the project partners for their contribution during the development of various ideas and concepts presented in this paper.

References

D. Bannach, K. Kunze, P. Lukowicz, O. Amft: Distributed modular toolbox for multi-modal context recognition, Proceedings of the Architecture of Computing Systems Conference, (2006) 99-113.

S. Brattke, B. Rottenkolber: Head Mounted Display Technology for Mobile Interactive Applications, Proceedings of the 2nd IFAWC, VDE Verlag, (2005) 7-15.

I. Maurtua, P. T. Kirisci, T. Stiefmeier, M. L. Sbodio, H. Witt: A Wearable Computing Prototype for supporting training activities in Automotive Production, Proceedings of the Fourth International Forum on Applied Wearable Computing (IFAWC), (2007).

T. Stiefmeier, C. Lombriser, D. Roggen, H. Junker, G. Ogris, G. Tröster: Event-Based Activity Tracking in Work Environments, 3rd International Forum on Applied Wearable Computing (IFAWC), Bremen, Germany, March 15 - 16, (2006).
