Human interaction with modern IT systems in aircraft maintenance
Author: Sander de Bree, Founder and General Manager, ExSyn Aviation
Sander de Bree, founder and general manager of ExSyn Aviation Solutions, asks how human interaction with IT systems contributes to continued airworthiness, and what the future holds.
Introduction
Early aircraft had very few controls and instruments for pilots to monitor conditions and performance during a flight. With technological advances, more information became available but the flight deck also became more complex, reaching peak complexity with the Concorde flight deck, which (like many aircraft of its era) required three flight crew to operate the aircraft. Comparing this to the cockpit of a current Airbus A380, we can already see significant simplification.
A similar development life cycle can be seen in IT systems used in aircraft maintenance. First generation systems had basic functionalities and were designed for a specific job or department within a maintenance organization (whether the airline maintenance department or any third party MRO). These systems were usually DOS-based, with very limited visual capabilities. As technological development in the field of IT progressed, the requirement grew for systems that could embed more functionality, resulting in some of the complex, fully integrated systems used in maintenance organizations today to ensure continued airworthiness of aircraft. However, as with the Concorde flight deck, today’s systems still require employees from various departments to enter data and act on information received through the system(s) within their organization. Think of the maintenance planning process within your own organization. A system might tell you when maintenance will become due, based on the utilization of the aircraft; however, it still requires a person to tell the program which maintenance needs to be performed on the aircraft and at what intervals (the aircraft maintenance program), and it still requires human intervention to plan downtime, resources, material and the preparation of documents.
Currently, with IT in aircraft maintenance, we can confidently expect high levels of human-to-system interaction. As IT applications are also used to help perform the tasks necessary for continued airworthiness management, one can conclude that human-to-system interaction with the systems applied for continued airworthiness management has a vital role in ensuring continued airworthiness. As an example, relate this statement back to the planning process within your organization and ask yourself: what would be the consequence if an individual, by mistake, entered a false interval for a particular maintenance task, or a false ‘last performed’ date? To further understand the influence of this human-to-system interaction on continued airworthiness, we first need to examine more closely the definitions of IT systems and continued airworthiness.
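To make this concrete, the sketch below is a simplified illustration of how a planning function might project the next due point from exactly the values a human enters: the interval and the ‘last performed’ data. It is not taken from any particular MRO system; the class, field names and figures are hypothetical. A mistyped interval or last-performed value shifts the computed due date by the same amount, which is the risk discussed above.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class MaintenanceTask:
    """Hypothetical record of one task in an aircraft maintenance program."""
    task_id: str
    interval_days: int            # calendar interval entered by a planner
    interval_flight_hours: float  # flight-hour interval entered by a planner
    last_done_date: date          # 'last performed' date entered by a planner
    last_done_fh: float           # aircraft flight hours at last performance


def next_due(task: MaintenanceTask, current_fh: float, avg_daily_fh: float) -> date:
    """Project the next due date as the earlier of the calendar and flight-hour limits."""
    due_by_calendar = task.last_done_date + timedelta(days=task.interval_days)
    remaining_fh = (task.last_done_fh + task.interval_flight_hours) - current_fh
    due_by_hours = date.today() + timedelta(days=max(remaining_fh, 0.0) / avg_daily_fh)
    return min(due_by_calendar, due_by_hours)


# A mistyped interval or 'last performed' value shifts the projected due date directly.
task = MaintenanceTask("32-100-01", 365, 4000.0, date(2013, 6, 1), 21500.0)
print(next_due(task, current_fh=23100.0, avg_daily_fh=9.5))
```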
What are IT systems in aircraft maintenance?
Actually, the term ‘IT system’ is not accurate; however, it enjoys a widely accepted and understood meaning. What we are really talking about is a two-dimensional computer system. The first dimension offers the possibility to structure information digitally and present it in various ways for different individuals to act on in order to perform a particular process; hence we are actually talking about an Information Services (IS) system rather than an Information Technology (IT) system. The second dimension relates to users entering information into the digital system to start or continue a process. The information thus entered becomes part of all the information processed by the system and, as defined by the first dimension, is presented in a structured way to other users so that they also know what to do.
For these systems to function as intended, four prerequisites exist:
1. Required information is available in a usable manner;
2. Users know what to do when certain information is presented to them;
3. Users know how to enter or feed back information to the system;
4. The system itself must be available.
The first prerequisite is a question of data quality and availability, in which the priority is not whether the data is available but what is required to make the data available in such a manner that the organization can work with it. The second and third (knowing what to do and knowing how to feed information back) involve high-level decision making and performance, and thus can only be addressed by procedure definitions, detailed workplace instructions and continuous training.
What is continued airworthiness?
The European Aviation Safety Agency (EASA) defines continued airworthiness as a set of eight tasks for whose performance the owner of an aircraft is responsible (see EASA Part-M, Subpart C, regulations for continuing airworthiness):
1. Pre-flight inspections;
2. Rectification, to an officially recognized standard, of any defect and damage affecting safe operation, taking into account, for all large aircraft or aircraft used for commercial air transport, the minimum equipment list and configuration deviation list if applicable to the aircraft type;
3. The completion of all maintenance, in accordance with the approved aircraft maintenance program;
4. For all large aircraft or aircraft used for commercial air transport, an analysis of the effectiveness of the approved maintenance program;
5. The implementation of any applicable:
(i) Airworthiness directive;
(ii) Operational directive with a continuing airworthiness impact;
(iii) Continued airworthiness requirement established by the Agency;
(iv) Measures mandated by the competent authority in immediate reaction to a safety problem;
6. Completion of modifications and repairs in accordance with M.A.304;
7. The establishment of a procedure and policy for non-mandatory modifications and/or inspections in respect of all large aircraft or aircraft used for commercial air transport;
8. Maintenance check flights when necessary.
In order to perform them, any aircraft owner or assigned organization must demonstrate to the regulatory bodies a capability to carry out these eight tasks by means of a procedure manual setting down how the organization will ensure compliance for each of them. This procedure manual is called the Continuing Airworthiness Management Exposition (CAME). Once the manual is approved by the regulatory body and the organization has proven that it works according to these procedures, the organization is certified as a Continuing Airworthiness Management Organization (CAMO) and becomes responsible for ensuring that the aircraft it maintains and operates are airworthy and will remain airworthy throughout the period of operation.
Human factor principles applied to systems
So, having established the definitions of IT systems and continued airworthiness, let’s look at how they relate to each other. Organizations engaged in continued airworthiness management often apply IT systems to support them in carrying out that responsibility: consider, for example, keeping track of all the Airworthiness Directives issued for the managed aircraft, the individual status of those directives, and all maintenance completions. Hence, the primary objective of any information system used by a CAMO or MRO organization is to ensure compliance with the eight tasks listed above.
To meet this primary objective, the information system needs its initial data, such as the aircraft maintenance program, applicable airworthiness directives, the components installed on the aircraft and/or the utilization of the aircraft. This initial data is introduced to the system either by manual entry or by user-created data uploads, and poses the first risk of human error. Secondly, aircraft-specific information needs to be fed to the system; for example, defects on the aircraft, performance of maintenance tasks and/or defect rectification data. Entry of this actual data again relies on human input and is thereby liable to incorrect data entry. In conclusion, we can identify two stages with the potential for human error to affect the primary objective of an information system applied in aircraft maintenance or continued airworthiness management:
1. Initial data entry;
2. Actual data entry.
Any error made in the entry of these primary data requirements can result in the incorrect presentation of information, leading to wrong or inadequate actions by staff, which can compromise the continued airworthiness of the aircraft.
To put this in perspective, let’s take the example of an Airworthiness Directive (AD). An AD is issued by an aviation regulatory body, e.g. EASA, and contains one or more mandatory maintenance tasks or inspections to be performed on applicable aircraft within a certain time. Often these ADs are issued after a specific occurrence or incident whose cause could also be expected on other or similar aircraft. To prove compliance with such ADs, the continued airworthiness management organization enters the AD and its characteristics into its information system and keeps track of the performance of the AD on the applicable aircraft. However, if an incorrect due date (latest possible date of compliance) or interval is entered, the information no longer represents the actual situation and so cannot serve to ensure continued airworthiness. In this example the AD process within a CAMO is used; however, this possibility of incorrect data entry exists in any situation, with any system, in any organization.
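One partial safeguard against this kind of entry error is for the system itself to run plausibility checks on the values a user types in before accepting the record. The sketch below is a minimal illustration of that idea, not a feature of any specific CAMO system; the function name, fields and thresholds are assumptions made for the example.

```python
from datetime import date
from typing import List, Optional


def check_ad_entry(effective_date: date, due_date: date,
                   repeat_interval_days: Optional[int] = None,
                   max_plausible_interval_days: int = 10 * 365) -> List[str]:
    """Return plausibility warnings for a newly entered AD record (hypothetical rules)."""
    warnings = []
    if due_date < effective_date:
        warnings.append("Due date lies before the AD effective date.")
    if due_date < date.today():
        warnings.append("Due date is already in the past; verify the compliance status.")
    if repeat_interval_days is not None and not (1 <= repeat_interval_days <= max_plausible_interval_days):
        warnings.append(f"Repeat interval of {repeat_interval_days} days is outside the plausible range.")
    return warnings


# Example: warn the user before the record is saved rather than after the due date is missed.
print(check_ad_entry(date(2014, 1, 15), date(2014, 7, 15), repeat_interval_days=180))
```

Such checks can catch gross mistakes, but they cannot eliminate human error, which is why the contributing factors discussed next still matter.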
To identify what can cause these errors in either the initial data entry or the actual data entry, we have to look more closely at a widely accepted model of the human condition, known as the ‘dirty dozen’: twelve factors that can lead to human error:
1. Lack of communication;
2. Complacency;
3. Lack of knowledge;
4. Distraction;
5. Lack of teamwork;
6. Fatigue;
7. Lack of resources;
8. Pressure;
9. Lack of assertiveness;
10. Stress;
11. Lack of awareness;
12. Norms.
These twelve aspects also lie at the heart of human interaction with information systems used for continued airworthiness management or aircraft maintenance:
The dirty dozen factors that can lead to human error
Lack of communication
Features and functionality of information systems change regularly to keep pace with the efficiency demands of organizations. Lack of communication in this respect can take place on four levels:
1. From system developer to customers, concerning release changes;
2. From administrators to users within organizations, about new features or the proper use of system functionalities;
3. From users to administrators, concerning inappropriate system behavior or failures;
4. From customers towards system developers, concerning inappropriate system behavior or failures.
Complacency
A significant portion of the tasks involved in entering actual data into an information system is considered routine, e.g. receiving materials and components into stores. However, it is at this critical stage that actual information concerning the material’s origin, or even its last repair/overhaul dates, is fed into the system; this, in turn, determines when the component has to be removed from the aircraft again to undergo a particular repair or overhaul. A person performing this job continuously throughout the day is at risk of complacency, having performed the same task many times before.
Lack of knowledge
Information systems present information to users so that they know what to do. However, if a user does not know what to do when certain information is presented to them, or takes incorrect action, the presented information fails to serve its intended purpose. The same applies when users enter or update information in the system: if they do not know what to enter in a certain field, or enter incorrect information assuming it to be correct, the system is fed with false data and thereby also presents false data.
Distraction
Distraction can come in many forms, particularly with tasks that take a longer time to perform. For example, the evaluation and entry of an airworthiness directive in an information system can easily take half a day to complete, during which time many distractions can arise which temporarily draw attention away from the task at hand and make it possible to forget to enter certain information or to enter information in the wrong fields.
Lack of teamwork
When it comes to using information systems, it is normal for individual users to have different levels of skill and knowledge in applying the system. A good example can be found in groups of mechanics in which, often, the younger members have a better understanding of how to enter certain data into the system. In practice, those younger mechanics end up performing all the data entries, which is then perceived as good teamwork; however, it is the opposite of good teamwork. Good teamwork would be the person with the higher level of skill assisting and explaining to colleagues what to do when they have to enter information. This raises the overall level of knowledge to a common standard instead of creating a gap between the two groups, which is what happens when the higher-skilled person takes over all information entry.
Fatigue
Fatigue reduces cognitive skills; users who perform data entry while suffering from fatigue therefore risk entering false, incorrect or incomplete information.
Lack of resources
Resources, in this respect, fall into two categories: 1) the individuals required to enter the actual information into the system; 2) the hardware required to operate the system. To maintain the flow of information from the real world to the information system, an amount of labor time has to be allocated to data entry. If the time allocated is insufficient for all the data to be entered, a backlog will build up, causing the information system to be permanently short of information and unable to present all the information required for users to perform their continued airworthiness management tasks.
To operate any information system, hardware is required, including less visible items such as servers, network cables and network connectivity, and more visible items such as keyboards, mice, desktops, screens, scanners, etc. Any lack of such resources can slow down information entry and processing, resulting in the presentation of out-of-date information for continued airworthiness management purposes.
Pressure
Users performing initial data entry or actual data entry under significant pressure (bearing in mind that the pressure experienced varies per individual) are at risk of entering incorrect information or entering correct information in an improper manner. Either way, false information is entered into the system and presented.
Lack of assertiveness
Entry mistakes can occur and, when detected, can be corrected. However, lack of assertiveness can arise when the incorrect information is detected but not corrected in a timely manner, or when the only action taken is the corrective action itself. Correcting wrong information is a good thing; however, one should also consider how the incorrect information got into the system in the first place and seek to prevent it from happening again. This is called preventive action.
Stress
Stress is often associated with people performing their tasks under the pressure of narrow timeframes and too much work, as already explained above. However, one aspect of stress that is often overlooked arises when people are assigned to perform tasks for which they are overqualified. This is why it is never a solution to let better qualified staff undertake information entry and to expect the entries to be correct simply because they are more skilled; these people are just as vulnerable to stress and error as their less qualified co-workers.
Lack of awareness
Take a department of materials buyers who all perform the same duties and tasks related to actual data entry in a system. Each member of the department is subject to human error while making data entries. Once such an error occurs and is detected, the person concerned can be informed about it and told how to prevent it from recurring. However, other members of the department who perform the same tasks may not be aware of the error and so could repeat the mistake.
Norms
Although detailed procedures and workplace instructions exist, their existence is no guarantee that individuals will perform their tasks according to these norms. As soon as individuals find a way that gets the same end result more quickly, that method will become generally accepted and used. The fact that, by using this unofficial norm, other system functions will not perform as designed is often overlooked, as it is not part of the individual’s objective (e.g. entering actual information concerning an aircraft defect without entering what is required to rectify the defect).
What can we do to mitigate these risks?
To mitigate the risk of human error in either initial data entry or actual data entry, one should seek to reduce the possibility of any of the above ‘dirty dozen’ contributing risks occurring. In the matrix below, various solutions that can be undertaken by organizations themselves as well as by system developers are given for each contributing risk:
Contributing risk | Organizational action | System developer action
---|---|---
Lack of communication | |
Complacency | |
Lack of knowledge | |
Distraction | |
Lack of teamwork | |
Fatigue | |
Lack of resources | |
Pressure | |
Lack of assertiveness | |
Stress | |
Lack of awareness | |
Norms | |
Impact of future developments on the human-to-system interaction
Currently, many changes and developments are taking place in the field of IT, so how are these changes and developments affecting the systems applied in continued airworthiness management and the existing human-to-system interaction?
To find the answer to that question, we first need to look at what is happening in other industries and in our personal lives. For several years there has been a movement towards digital communities (Facebook, Twitter, LinkedIn) where interaction, collaboration and collective knowledge are key pillars, alongside current technological developments in the field of IT hardware. In another development, we are already able to perform surgery on patients where the doctor is on the other side of the world, operating via robotics (Howe, R.D. and Matsuoka, Y., ‘Robotics for Surgery’, Annual Review of Biomedical Engineering, 1999, 1:213). Or, for a more aviation-related case, Unmanned Air Vehicles (UAVs) are patrolling Middle Eastern skies while their ‘pilots’ remain seated in a U.S. compound. What does all of this have to do with the future of aircraft maintenance and IT systems? Well, the key to the previous two examples is something defined as “separating the information from its artifact [purpose]” (Dhar & Sundararajan, ‘Information Technologies in Business: A Blueprint for Education and Research’, Information Systems Research, 2007, 18(2)). This means that information is, by definition, not tied to one person, location or machine.
The purpose, in our situation, would be the airline’s continued airworthiness system; separating would mean taking the data away from the airline and providing it from a centrally controlled information system. Such a system would contain all related scheduled maintenance information for the particular aircraft types operated by the airline, and maintenance would be controlled through this system. Part numbers, aircraft maintenance programs, airworthiness directives, service bulletins, check intervals, configurations, maintenance documentation and reliability programs… all of these would become centrally managed information. The only organizations capable of controlling such a centralized information system would be the aircraft manufacturers.
We can already identify this trend, in which manufacturers provide a full support package with the aircraft. Evidence can be found in the complete Boeing Edge program, the Boeing digital airline program (www.boeing.com/commercial/aviationservices/integrated-services/digital-airline.html) and the Airbus E-solutions program. However, to enable a full separation of information from its purpose, we also need to look at unscheduled maintenance. The current driver of unscheduled maintenance is the person who identifies and flags up that some component on the aircraft is no longer fit for operation and needs to be replaced, usually according to a set of guidelines stipulated in the aircraft maintenance documentation.
Now, the challenge is to separate the information required to determine whether a component is fit for operation from the person. To enable this, Full Real-life Automated Communication (FRAC) between the aircraft and the central information system is required. Again, we can already see this in programs such as Embraer’s AHEAD (Aircraft Health Analysis and Diagnosis) and Boeing’s Aircraft Condition Monitoring System (ACMS), both of which are on-board systems that transmit data concerning faults and aircraft system health to ground stations. It is just a matter of time before sufficient technological developments are available to transmit the full information on whether a tire needs to be replaced or a dent has been detected in the fuselage of the aircraft.
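As a purely conceptual illustration of that idea, the sketch below shows how a central information system might turn a transmitted health message into an unscheduled maintenance requirement by comparing a reported parameter against a documented limit. The message format, parameter names and limit values are invented for the example; they do not describe AHEAD, ACMS or any other real system.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class HealthMessage:
    """Hypothetical downlinked condition report from an aircraft system."""
    tail_number: str
    parameter: str   # e.g. "MLG_TIRE_TREAD_MM"
    value: float


# Invented limit; in reality this would come from the maintenance documentation.
LIMITS = {"MLG_TIRE_TREAD_MM": 1.6}   # replace the tire below this tread depth (mm)


def evaluate(msg: HealthMessage) -> Optional[str]:
    """Raise an unscheduled maintenance requirement when a reported value exceeds its limit."""
    limit = LIMITS.get(msg.parameter)
    if limit is not None and msg.value < limit:
        return (f"{msg.tail_number}: {msg.parameter}={msg.value} is below the limit of "
                f"{limit}; replacement required")
    return None  # condition within limits, no action needed


print(evaluate(HealthMessage("PH-ABC", "MLG_TIRE_TREAD_MM", 1.2)))
```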
Once both scheduled maintenance information and unscheduled maintenance information are fully separated from their purposes, a second important aspect comes into play, namely ‘IT platforms of growing opportunities’ (Dhar & Sundararajan, ‘Information Technologies in Business: A Blueprint for Education and Research’, Information Systems Research, 2007, 18(2)). The growing opportunity for such a manufacturer’s central continued airworthiness information system would obviously be the integration of this central system with all other relevant information such as flight planning, material provisioning, maintenance planning, manpower planning, facility planning and equipment planning. This trend is also already taking place at airline level, where airlines choose to utilize a fully integrated system or to interface their various internal and external IT systems with suppliers.
Once we have separated information from its purpose and provided an IT platform of growing opportunity, we can actually eliminate two of the four prerequisites mentioned above for the proper functioning of an information system, namely:
- Users know what to do when certain information is presented to them;
- Users know how to enter or feed back information to the system.
The consequence of this would be that engineering departments, planning departments, purchasing departments and troubleshooters all become obsolete, as each of the activities they perform today can be fully automated and carried out more efficiently by automated routines and programs centrally controlled by the aircraft manufacturer. As a result, human interaction with information systems will be greatly reduced and will most likely be focused on providing initial data to a system, from which point that system can function autonomously.
This will also require the regulatory bodies to rethink their approach to continued airworthiness regulations, as the main drivers for ensuring continued airworthiness will come under the sole control of aircraft manufacturers through computerized systems written by human programmers and receiving initial data inputs from people. Additionally, it will make third party suppliers of any kind of aviation software obsolete, as all systems will be controlled via the aircraft manufacturers.