Aircraft IT MRO – May / June 2016

Paperless Maintenance: The shape of things to come

Author: Paul Saunders, Global Product Manager, Flatirons Solutions


Paul Saunders, Global Product Manager at Flatirons Solutions, speculates on the future of paperless maintenance and the technologies that will make it happen

Readers will be familiar with the concept of paperless processes and will have read plenty about the software being used to support a paperless work environment today. So, rather than add more to that part of your MRO / M&E library of knowledge, I’d like to look ahead to the software of future years with no particular deadline – it could be five or ten years’ time, or more – but starting from the interesting consumer technology coming out now which is likely to shape our future within maintenance and engineering. In the same way that, in the past five years, mobility has caused a big shift in the way in which we work, I believe that there is some really interesting emerging technology that is going to have a similar impact on us going forward. So, that’s the focus of this article.


Plus ça change, plus c’est la même chose

Jean-Baptiste Alphonse Karr’s 1849 epigram above (English translation: ‘the more it changes, the more it’s the same thing’) is a truism that I recently observed in a book and film about the near future. I read the book, ‘The Martian’ by Andy Weir, on holiday last year and found it quite an interesting look at the future. It’s about a mission to Mars, set in 2035, on which a series of malfunctions forces the people involved with engineering and flight ops, together with the rest of the team, to work through a whole set of problems. It’s not too dissimilar to some of the challenges with which readers will be familiar, where several departments have to work together to solve problems. I found the movie (adapted for the screen from the book) fascinating because, although it’s set in 2035, there’s nothing alien about the technology or what the characters are doing. It was interesting, for a technophile like me, to look at some of the ideas about what future user interfaces and applications might look like. It was also interesting that the engineer was working with a couple of laptops, moving data between devices, but still using a Panasonic Toughbook! So it occurred to me that, in 2035, if the movie makers are right, we’ll still be doing very similar things to what we are doing today – hence the sub-title.


Where we are today

Coming back to the present, if we look through our own history of maintenance and engineering, some of the tools we use today – smartphones and tablets – might be slightly different from those we had in the past but, in fact, we’re not doing anything much different from what we were doing fifteen or twenty years ago. So, where are we today with paperless maintenance and engineering? What are the characteristics that typify current models and processes for a paperless environment?

•    Mostly device specific;
•    Mostly back office system specific;
•    Keyboard & Mouse oriented UI;
•    Not touch optimized;
•    Word and 2D drawing oriented;
•    Lacking in rich media;
•    Mimicking paper procedures;
•    Mostly unimaginative design;
•    Mostly legacy backend;
•    Lacking offline capability;
•    Entirely 2D screen based.

As I’ve already said, we’re not actually doing much that is different with paperless maintenance and mobility from what we were doing when we used paper. Everything still mimics the paper procedures that preceded today’s systems. We use word-oriented content and 2D illustrations in our user interfaces, we’re not really taking advantage of next generation media, and we’re not doing anything more than two-dimensional, screen-based interactions. There are some exceptions and some very good mobility solutions, but that’s where we are today; not a million miles away from what we were doing before the advent of paperless working and mobility.


Looking into the future

In this article I’d like to ‘mock up’ what I think user interfaces are going to look like at some point in the future. As to naming a date: a lot of that will depend on rates of adoption. Certainly within the next five years we might see solutions such as those illustrated below, but they’re not likely to be widely available for, perhaps, ten or fifteen years. That said, readers will recognize some of the technologies.

We start in a hangar environment but a lot of the solutions would be applicable for line maintenance, hangar maintenance, heavy maintenance or even component maintenance.

More than just hands free
With the user interface illustrated, you can see the equipment in the background because I’ve based the idea on augmented reality, a technology with which readers will be familiar from a consumer perspective. Also, the solution that I’m suggesting would be device agnostic, so you might still be using a laptop with a camera, a tablet or a mobile device; but, more specifically, I think we’re going to be moving towards wearable technology such as Microsoft HoloLens (to be launched in the next year or so) where the user wears a headset with the displays in front of them, contextually related to where they are looking. What’s interesting about HoloLens is that if you’re designing applications for it, then they’ll automatically work on a device such as the Microsoft Surface tablet. So, whereas in the past people have asked, ‘What device should I select: iPad, Windows, Android?’, in the future that kind of question will be largely irrelevant because, as readers will recognize, there are different devices applicable for different applications. We’ll need to be a lot more prepared to support solutions across multiple devices. Whereas a Microsoft HoloLens might be relevant for one person, it might be over-engineering for someone else doing a different job where a tablet or smartphone might be more appropriate. I expect there to be a lot more multiple device support in the future.

That said, the user interface is unlikely to be alien to us and we’d recognize and be able to use things such as notifications; plus we’ll still want to access other enterprise applications from the same platform… email, browser or other communication applications. But I expect that in this environment we’ll be doing a lot less manual input, and I’d expect to see context being driven to the user, based either on where they are (recognition of their surroundings) or on interaction with the Internet of Things (IoT). So we’re in a particular environment; the device has geolocation capability so that it knows where the user is and the asset at which they are looking (this is the aircraft, this is the serial number…), can integrate with the ERP or MRO system, and can determine the current status of the asset in front of it; and the user is able to interact with those systems and get additional context.
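
To make that flow concrete, here is a minimal sketch, in TypeScript, of the kind of context service described above: the device resolves whatever the user is looking at into an aircraft, then pulls its live status from the back office. All of the names (MroClient, resolveAssetContext and so on) are hypothetical; no particular MRO system’s API is implied.

    // Hypothetical types for the device's context service.
    interface GeoFix { latitude: number; longitude: number; }

    interface Defect {
      id: string;
      ataChapter: string;
      description: string;
      raisedAt: Date;
    }

    interface AssetContext {
      registration: string;   // the tail the device has recognised
      serialNumber: string;
      openDefects: Defect[];
    }

    // Stand-in for the back-office M&E / MRO system's interface.
    interface MroClient {
      findAssetNear(fix: GeoFix): Promise<{ registration: string; serialNumber: string }>;
      getOpenDefects(registration: string): Promise<Defect[]>;
    }

    // Resolve "what am I looking at?" into live maintenance context:
    // geolocation (or object recognition) identifies the aircraft,
    // then the MRO system supplies its current status.
    async function resolveAssetContext(mro: MroClient, fix: GeoFix): Promise<AssetContext> {
      const asset = await mro.findAssetNear(fix);
      const openDefects = await mro.getOpenDefects(asset.registration);
      return { ...asset, openDefects };
    }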

By tapping on the defect button, the user will be able to drill down into that content and reveal, in this illustration, four open defects for this aircraft; drilling further shows more information about the defect on the engine, with the defect details ‘floating’ and ‘pointing’ to the engine in question.

Context always available
Moving around the aircraft, the context follows the user, allowing them to investigate and drill down even further, with additional context driven to the viewing area based on what the user is interested in. So, in the same way that today it’s possible to attach photos to a work pack, that will be possible simply by looking and taking a photo; plus it will be possible to interact with task cards or access authoritative content such as an IPC (illustrated parts catalog) for part numbers to associate with the particular defect.

The interesting thing here, using 3D augmented reality in this way (and it’s something that will be possible with the HoloLens), is that we’ll be able to pin specific pieces of content to the surroundings. Say, for example, the user wants to open a manual but also wants a clear view of what they’re working on: they will still have a hands-free maintenance capability and could pin the IPC to the fuselage, keeping a clear view of, say, the engine they’re working on; then, whenever they want to reference the manual, they can look back up to the fuselage and it will be there. This is the kind of technology to expect.
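
A rough sketch of what ‘pinning’ might look like in code, assuming a world anchor is simply a persistent pose in the user’s surroundings at which a panel re-renders whenever it comes back into view. The PinBoard interface below is illustrative only, not any vendor’s actual spatial-anchor API.

    // Simplified pose: position only; a real anchor also carries orientation.
    interface Pose { x: number; y: number; z: number; }

    interface PinnedPanel {
      anchorId: string;
      document: string;   // e.g. an IPC figure reference
      pose: Pose;
    }

    class PinBoard {
      private panels = new Map<string, PinnedPanel>();
      private nextId = 0;

      // Pin the currently open document at the user's gaze point,
      // e.g. on the fuselage beside the engine being worked on.
      pin(document: string, gazePoint: Pose): PinnedPanel {
        const panel: PinnedPanel = {
          anchorId: `anchor-${this.nextId++}`,
          document,
          pose: gazePoint,
        };
        this.panels.set(panel.anchorId, panel);
        return panel;
      }

      // Called by the renderer each frame: any panel whose anchor falls
      // inside the current field of view is drawn at its pinned pose.
      visiblePanels(isInView: (pose: Pose) => boolean): PinnedPanel[] {
        return [...this.panels.values()].filter(p => isInView(p.pose));
      }
    }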

Conferring within the team
The other thing I expect to be able to do, in tight integration with the MRO and ERP systems, is communication. For this illustration, I’ve linked it to Skype but it could as easily be an embedded communication capability within the application. In this case, and I’m sure this will be familiar to readers, the user could want support from technical services or maintenance control teams, who might phone or use Skype or FaceTime to communicate with colleagues. Because the device has a built-in camera, the person with whom the user is interacting will be able to see exactly what the user can see, in the same way that one can now with Skype or FaceTime, but will also be able to annotate the user’s surroundings to draw their attention to a particular problem or aspect of what they are looking at, to help them understand what has to be done. This is the type of automatic integration that I expect to see available; with communications that allow the user to receive authoritative information from colleagues which can also be recorded and logged within the work pack. I think there have been a couple of incidents in recent years where people have had interactions either with technical services or with the OEMs and have received authoritative information as part of that interaction, but that information has not found its way into the records or the work pack. I believe that, in the future, this type of interaction will be automatically stored in the work pack.
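
A sketch of how that automatic capture might work: on hang-up, the communication layer hands a summary of the session, with its annotations and a pointer to the recording, to a work pack service that files it against the open task card. All names here are illustrative.

    interface CallAnnotation {
      at: Date;
      author: string;   // e.g. "maintenance control"
      note: string;     // what was said, or what was drawn attention to
    }

    interface SupportCallRecord {
      taskCardId: string;
      participants: string[];
      startedAt: Date;
      endedAt: Date;
      recordingRef: string;   // pointer to the stored audio/video
      annotations: CallAnnotation[];
    }

    class WorkPackLog {
      private entries: SupportCallRecord[] = [];

      // Filed automatically on hang-up, so advice given over video
      // becomes part of the maintenance record rather than being lost.
      logSupportCall(record: SupportCallRecord): void {
        this.entries.push(record);
      }

      // Later retrieval, e.g. when reviewing the release paperwork.
      forTaskCard(taskCardId: string): SupportCallRecord[] {
        return this.entries.filter(e => e.taskCardId === taskCardId);
      }
    }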

Not just what to do but also how to do it
The next aspect of this possible future to highlight is instructional content. Today there is a blurring of the lines between what is technical content and what is instructional content, and that blurring is likely to continue into the future. So if, for example, there is a procedure with which the user is not 100 percent familiar, then they will be able to access instructional videos through a device, from the system, to help with the procedure. Looking ahead to 3D object recognition and 3D visualization, it is likely that that content will be more than just static video: it will be dynamically associated with the procedure being undertaken, with an overlay of 3D components superimposed onto the surroundings which, as the user moves and interacts with their environment, should change accordingly.
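
As a sketch, that association could be as simple as keying overlay and video assets to procedure steps, so the right clip or 3D overlay is surfaced as the user reaches each step. The shapes below are hypothetical.

    interface ProcedureStep {
      stepId: string;
      instruction: string;
      // A 3D part model superimposed on the real component, repositioned
      // as the user moves around it.
      overlay?: { modelUrl: string; attachTo: string };
      videoUrl?: string;   // conventional instructional clip, if one exists
    }

    // As the user advances through the task, surface whatever media is
    // associated with the current step.
    function mediaForStep(steps: ProcedureStep[], currentStepId: string) {
      const step = steps.find(s => s.stepId === currentStepId);
      return step ? { overlay: step.overlay, videoUrl: step.videoUrl } : undefined;
    }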

Signing off and releasing an aircraft to service
Again, readers will be familiar with electronic sign-off on completed jobs and that also will, I think, be widespread in the future. There are a number of ways to sign off and release an aircraft back into service. Today there is the user ID and PIN, which is widely accepted as a signature. There is also a wet signature, hand-written on screen by the user, which is approved by many authorities and often seen in the field. In the future I think there will be a couple of other capabilities. One thing that we at Flatirons Solutions have been working on is where the user makes a pen stroke, or a virtual pen stroke with a finger, and there is some form of recognition of that stroke, associating the signature with, and validating it against, the user’s profile. So, in the same way that when one signs for a credit card today, the person in the retail outlet does a visual check against the signature on the card itself, I expect that check to be done automatically by the system for validation. The other option that I think is going to be used in the future is a biometric signature – fingerprint, retinal scan or some other means – for matching who has actually carried out this maintenance with a real person.
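
Here is a sketch of a sign-off service covering the three routes just discussed: ID and PIN, a captured pen or finger stroke matched against the signature enrolled on the user’s profile, and a biometric factor. The matching itself is shown as a pluggable scorer, since that is where any real implementation (and the thresholds an authority would accept) would live; the numbers used here are placeholders.

    type SignOffMethod =
      | { kind: "pin"; userId: string; pin: string }
      | { kind: "stroke"; userId: string; strokeImage: Uint8Array }
      | { kind: "biometric"; userId: string; sample: Uint8Array };

    interface UserProfile {
      userId: string;
      pinHash: string;
      referenceStroke: Uint8Array;    // the enrolled signature
      biometricTemplate: Uint8Array;
    }

    // Pluggable matchers; similarity scores run from 0 (no match) to 1.
    interface Matchers {
      hashPin(pin: string): string;
      strokeSimilarity(a: Uint8Array, b: Uint8Array): number;
      biometricSimilarity(a: Uint8Array, b: Uint8Array): number;
    }

    function validateSignOff(m: Matchers, profile: UserProfile, s: SignOffMethod): boolean {
      switch (s.kind) {
        case "pin":
          return m.hashPin(s.pin) === profile.pinHash;
        case "stroke":
          // the automated version of the retail "visual check against
          // the card"; the threshold is a placeholder
          return m.strokeSimilarity(s.strokeImage, profile.referenceStroke) > 0.9;
        case "biometric":
          return m.biometricSimilarity(s.sample, profile.biometricTemplate) > 0.95;
      }
    }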

Gesture recognition and voice control
One aspect that we haven’t yet addressed is how the user might be interacting with this software, this user interface. As you can see with the illustrations above, I’ve not included any type of keyboard or mouse because I envisage a form of user interaction which is pretty much in its infancy at the moment: gesture recognition, where I expect to see a lot more maturity. Readers will be familiar with technology such as the Xbox Kinect, and with other consumer applications, such as in BMW cars, where gestures are used to input commands, and I’d expect to see that in the MRO environment. The other possibility whose scope I expect to see extended is voice control. Again, readers will be familiar with different aspects of voice control, such as Apple’s Siri and its Google/Android equivalent ‘Now’, and I expect voice control to play a bigger part in the future.
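
By way of illustration, voice control in such a client might amount to registering a small grammar of phrases and dispatching them to application actions; the recognition itself would come from the platform. The phrases and action names below are invented for the example.

    type AppAction =
      | { type: "openDefects" }
      | { type: "attachPhoto" }
      | { type: "signOff"; taskCardId: string };

    // A small grammar: each recognised phrase maps to an application action.
    const grammar: Array<{ pattern: RegExp; toAction: (m: RegExpMatchArray) => AppAction }> = [
      { pattern: /^show (open )?defects$/i, toAction: () => ({ type: "openDefects" }) },
      { pattern: /^take photo$/i, toAction: () => ({ type: "attachPhoto" }) },
      { pattern: /^sign off task (\S+)$/i, toAction: m => ({ type: "signOff", taskCardId: m[1] }) },
    ];

    // The platform's recogniser supplies the utterance; we only dispatch it.
    function dispatch(utterance: string): AppAction | undefined {
      for (const { pattern, toAction } of grammar) {
        const match = utterance.match(pattern);
        if (match) return toAction(match);
      }
      return undefined;   // unrecognised: ask the user to repeat
    }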

The parts that get used
The final box on the right of the above illustration is for components: interacting with the IPC to record which components have been taken off and which have been fitted. I would anticipate that a lot of that work will be automated via smart assets and the IoT to ensure recognition of what part has been removed and what part fitted. Certainly I’d expect there to continue to be the same physical process that we have today, where stores will bring physical parts to the engineer; but for some components, I’d expect to see a lot more 3D printing of parts. The A350 has around one thousand parts that can be made with 3D printing (sometimes known as additive manufacturing), and I’d expect in the future to see a lot more 3D printed parts capability for items that are currently produced using traditional manufacturing methods and processes. So, in the same way that today we see PMA (parts manufacturer approval) parts, SFAR 36 (Special Federal Aviation Regulation 36) repairs and DER (designated engineering representative) repairs being developed, we will see 3D printed parts available to be printed out at the line station without having to utilize a supply chain.
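
A sketch of that automated bookkeeping: smart-asset tags report removal and fitment events, and reconciling the event stream against IPC positions keeps the as-fitted record current without manual entry. The event shape is hypothetical.

    interface PartEvent {
      partNumber: string;
      serialNumber: string;
      position: string;              // IPC position the tag was read at
      action: "removed" | "fitted";
      at: Date;
    }

    // Fold the event stream into the current as-fitted state per position,
    // so off/on records maintain themselves as tags are read.
    function asFitted(events: PartEvent[]): Map<string, PartEvent> {
      const fitted = new Map<string, PartEvent>();
      for (const e of events) {
        if (e.action === "fitted") {
          fitted.set(e.position, e);
        } else if (fitted.get(e.position)?.serialNumber === e.serialNumber) {
          fitted.delete(e.position);
        }
      }
      return fitted;
    }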

It’s all very exciting if it happens; so what’s stopping us? As I said, we’re probably five, ten or fifteen years away from seeing technologies like these employed in a ubiquitous manner. But there’s nothing that I’ve mentioned above that isn’t possible right now. We’re all familiar with Geolocation; with Augmented, Virtual & Mixed Reality; Offline & Asynchronous Capability; Voice & Gesture Control… all of those things are available in consumer apps today. What we don’t see today is all of those things brought together into one application, and certainly we don’t see them in the MRO environment. But five years ago I was talking and writing about the advent of the iPad and of mobility, which are today realities, albeit with implementation challenges and not yet at full maturity.

In the same way, the emerging technologies of today (Markerless Object Recognition, Internet of Things & Smart Assets, 3D Printing & 3D Models, Cross Platform Support, Video Conferencing & Virtual Whiteboards, Mediated Instructional Content, Innovation in User Centric Design…) will have a big impact in the future. The technology might look different in ten or fifteen years but the fundamentals of MRO practice and models will remain the same: to ensure safe, reliable and efficient aircraft to serve the fare-paying traveler, using the best technology available.


Contributor’s Details

Paul Saunders
Solution Marketing
Flatirons Solutions


Paul is a trusted technology specialist who has been working for and advising MROs, airline operators, OEMs, and software vendors since 1998.

He has unparalleled expertise in aviation software design and mobility, having worked on apps used by pilots and engineers all over the world.

Paul is often called upon for speaking and writing engagements and is a regular contributor to AircraftIT MRO & Operations eJournals, Aviation Week, and other publications. When it comes to the adoption of emerging technology in aerospace, particularly with regards to mobility, Paul is a heavyweight visionary and geek.

Paul joined the Flatirons team in September 2013 and serves in the aerospace solution marketing team. Paul is currently based in the UK.


FLATIRONS SOLUTIONS

Flatirons Solutions (www.flatironssolutions.com) provides consulting, technology, and outsourcing for content lifecycle management. For more than 20 years, it has served global Fortune 1000 customers in aerospace, automotive, electronics, financial services, government, healthcare, and publishing. Its customer engagements help organizations efficiently deliver the right information, at the right time, to the right people by leveraging structured content and digital media — Turning Content into Knowledge®. Flatirons operates from offices in Asia, Europe, and the United States and is headquartered in Irvine, California.
