Game changer

Jack Hsu brings years of gaming industry experience to his latest challenge – developing aircraft inspection AR software. Howard Slutsken met him

On a tree-lined business campus, not far from Vancouver International Airport (YVR), Jack Hsu is on a bumpy journey to teach an iPad app how to recognise an aircraft. He isn’t simply developing enthusiast software that can tell the difference, say, between an Airbus A320 and a Boeing 737 on final approach. Instead, he’s focused on sophisticated algorithms with foundations in artificial intelligence and machine learning, to create augmented-reality (AR) software to assist engineers and mechanics with regular aircraft inspections.

“The idea was to be able to lift up an iPad, look at the plane, and then see dots on the screen that represent defects – like fuselage dents from ramp operations – on the plane itself,” Hsu told AIR International. Sound simple? It’s not.

The road to Boeing
Hsu is a Vancouver native and has been senior manager, software development, at Boeing’s Vancouver division for the past four and a half years.

Langley Airport
This DC-3 at the Canadian Museum of Flight in Langley, BC, was used for the initial tests of the augmented-reality application
Howard Slutsken

Like many technologists in Vancouver, he worked at local computer gaming giant Electronic Arts. “I was the NHL [video game] development director the first year there. And, along with everybody who worked on the development of NHL 2003, I’m in the game,” he chuckled.

While Boeing’s roots in Canada go back more than a century, its facility near YVR became part of the company just 20 years ago. AeroInfo – the company focused on developing software for aircraft maintenance – is now integrated into Boeing Global Services (BGS) and has been rebranded as Boeing Vancouver. The office is part of the airframer’s after-sales digital group that sells flight planning and maintenance software.

“Coming into Boeing, I actually didn’t know much about maintenance, which is really the core competency of a lot of the folks here. I started to learn that aircraft inspections are an ongoing requirement in maintenance – it’s costly and it takes up a lot of time,” said Hsu.

“As I learned more about the products that we develop and support here, I began thinking that augmented reality could potentially help support that kind of maintenance. We needed to explore it.”

Augmented vs virtual
Augmented reality and virtual reality (VR) are technologies that enable users to interact with both real and virtual worlds. Virtual reality is the more immersive, with users wearing a fully contained visual display, such as the Oculus Quest or Sony PlayStation VR. Hand controllers or gloves give the user virtual physical control over elements in a digital world.

Gaming and science applications are popular in VR. For example, NASA’s Jet Propulsion Laboratory has developed a series of virtual visits to Mars.

Augmented reality, meanwhile, overlays information on a user’s field of view in the real-world environment – seen through a smartphone camera or a see-through display, perhaps.

In an aviation application of AR, pilots flying an aircraft equipped with a flight deck head-up display (HUD) can see data from the aircraft’s flight instruments while keeping their attention on what’s going on outside the aircraft.

Aviation enthusiasts are likely well-versed in the AR feature of Flightradar24’s flight-tracking app. Hold a smartphone up to the sky, and the app will show the location and flight data of aircraft within the camera’s field of view – information that’s updated on the screen as the phone is moved.

Hsu saw the potential of linking AR with Boeing’s Maintenance Turn Time software and iPad application. On its website, Boeing describes Turn Time as a real-time system to track and document maintenance work. “It’s an application that mechanics carry that helps them track defects on the surface of an aircraft,” explained Hsu.

“You can see a 3D model of the aircraft, and it’ll show you dots of existing defects, where they’re located. A mechanic will be looking at the plane, and they’ll say, ‘that defect has already been logged, but I haven’t seen that one before,’ and then they log it, by indicating where on the aircraft it’s located.”

Filling out a form to log existing dents or scratches when picking up a rental car is essentially a pen-and-paper version of what the software tracks for an aircraft operator. But instead of mechanics simply interacting on the iPad with a digital model of an aircraft, Hsu wanted to shift Turn Time into the real world. “Why don’t you just lift up the iPad and look at the plane, and then see the dots that represent the defects on the plane itself,” he wondered. “That way, it’s much more direct.”

Making the magic happen
The Augmented Reality for Maintenance and Inspection project was established in 2020, with backing from the Vancouver-based Digital Technology Supercluster at the University of British Columbia, and support from the partnership of Boeing Vancouver, Unity Technologies, and Simon Fraser University.

Synthetic
To properly teach the app’s algorithm, thousands of synthetic images of the DC-3 were created, with variations in lighting and livery
Boeing Vancouver

“We decided to split up the project,” explained Hsu. “Boeing worked on the iPad app and our partners at Unity – who are AR experts – worked with the Microsoft HoloLens, which is AR hardware that’s worn like a pair of glasses.”

The teams started with Turn Time’s virtual 3D model of an aircraft, then placed the dots indicating the location of defects on a mesh outline of the model – in the same way that computer-generated imagery (CGI) for movies is created in layers that can be manipulated and positioned.

The 3D model, the reference mesh and the dots would next be linked to the image seen by the iPad or HoloLens camera, so that the virtual model is anchored – perfectly overlaid with the actual aircraft.

“It has to be anchored correctly. It isn’t just that it is anchored when the iPad is locked down in one position, it has to stay anchored wherever you move the device,” said Hsu. “If we anchor it properly and we make the mesh disappear, all you see is the dots. This is the magic.”
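The overlay itself amounts to a standard graphics projection. As a rough illustration – in Python with OpenCV, not the team’s actual iPad or HoloLens code, and with every coordinate hypothetical – defect points defined in the aircraft’s model frame can be projected into camera pixels once the device’s pose relative to the aircraft is known:

```python
import numpy as np
import cv2

# Illustrative sketch: given the device's pose relative to the aircraft,
# defect markers defined in the aircraft model's coordinate frame can be
# projected into the live camera image. All values are hypothetical.
defect_points_3d = np.array([
    [2.1, 0.8, 1.5],   # dent on the forward fuselage (metres, model frame)
    [7.4, 1.2, 1.9],   # scratch near the wing root
], dtype=np.float32)

# Camera pose: rotation (as a Rodrigues vector) and translation
rvec = np.array([0.0, 0.2, 0.0], dtype=np.float32)
tvec = np.array([0.0, 0.0, 10.0], dtype=np.float32)

# Camera intrinsics: focal length and principal point, no lens distortion
camera_matrix = np.array([[1500, 0, 960],
                          [0, 1500, 540],
                          [0, 0, 1]], dtype=np.float32)

# Project the 3D defect points into 2D pixel coordinates
points_2d, _ = cv2.projectPoints(defect_points_3d, rvec, tvec,
                                 camera_matrix, np.zeros(5))

for x, y in points_2d.reshape(-1, 2):
    print(f"draw defect dot at pixel ({x:.0f}, {y:.0f})")
```

The hard part is not the projection but supplying fresh, accurate rvec and tvec values for every frame as the device moves.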

But Hsu and the team found that while the iPad has the technology to anchor a virtual object in the real world, its performance is hugely dependent on the working environment and quality of the camera image. In a relatively small indoor location such as an office, Hsu explained, there are multiple feature points that an app could easily track, such as furniture or window corners. “But when you have a plane, that’s a shiny surface, a large environment, a big object, and it’s curved – it doesn’t work,” he stated.

“You can go to the nose of the aircraft and anchor it, then walk around, come back, and the plane is no longer anchored. It’s because the sensors on the iPad can’t see enough feature points – it gets confused and can’t track the plane.”

Hsu’s team had multiple conversations with Apple and confirmed this limitation of the iPad, and the team at Unity found similar issues with the HoloLens. “We had to decide how we were going to deal with this problem,” said Hsu.

Machine learning
In the field of artificial intelligence (AI) research and development, robotics engineers have been using a technique called ‘pose estimation’ to determine the position of an object in an image, and then infer the location of the camera relative to the object. “In our case, you look at the aircraft, and just by looking at it you can figure out where you are, relative to it. Once you know that, you can take the virtual 3D model and position it in the appropriate spot,” explained Hsu.
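Pose estimation of this kind is commonly framed as the perspective-n-point (PnP) problem: given a few known 3D points on the object and the pixels where they appear in an image, solve for the camera’s rotation and translation. A minimal sketch using OpenCV’s solver – with hypothetical landmark coordinates, and with the correspondences handed in by hand rather than inferred by a learned model, as the project required:

```python
import numpy as np
import cv2

# Known 3D landmarks on the aircraft (model frame, metres) - hypothetical values
object_points = np.array([
    [0.0, 0.0, 0.0],    # nose tip
    [1.8, 0.9, 0.4],    # pitot probe
    [5.2, 1.4, 2.0],    # wing-root fairing
    [9.6, 0.2, 3.1],    # tail-cone joint
], dtype=np.float32)

# Pixel locations of those landmarks in one camera image
image_points = np.array([
    [880, 540], [1020, 470], [1350, 420], [1700, 380],
], dtype=np.float32)

camera_matrix = np.array([[1500, 0, 960],
                          [0, 1500, 540],
                          [0, 0, 1]], dtype=np.float32)

# Solve for the camera's rotation (rvec) and translation (tvec)
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, np.zeros(5))
if ok:
    print("camera position relative to the aircraft:", tvec.ravel())
```

Knowing where the camera sits relative to the aircraft, the virtual model – and its defect dots – can then be positioned to match.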

Synthetic 2
Researchers from Simon Fraser University placed QR codes on the DC-3 to help capture the images and positional data
Boeing Vancouver

However, to make it all work, the software must be able to identify that it’s looking at an aircraft, exactly where on the aircraft it’s looking, and how far from the aircraft it is located. To do that, the application must be taught about aircraft, and that falls into the science of ‘machine learning’.

“The way I think of it – there’s a black box with a software algorithm, and you feed it lots of images of cats and other animals and tell it ‘That’s a cat’ or ‘That’s not a cat’,” said Hsu. “Eventually, if you feed it enough images, and then you show it a new image, it can answer when you ask, ‘Is this a cat?’ That’s how machine learning works.”

He added, “One interesting thing about machine learning is that no one knows what goes on inside the box. We don’t really know how it knows. We just know that it does a good job.”
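In code, that ‘black box’ is a stack of layers whose weights are tuned by exposure to labelled examples rather than by hand-written rules. A toy ‘is this a cat?’ classifier in PyTorch gives the flavour; it is purely illustrative, not the project’s network:

```python
import torch
import torch.nn as nn

# A tiny binary image classifier: the 'black box' is just stacked layers
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),            # one output: the cat-vs-not-cat score
)
loss_fn = nn.BCEWithLogitsLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for a real labelled dataset: eight 64x64 RGB images, 1 = cat
images = torch.randn(8, 3, 64, 64)
labels = torch.tensor([[1.], [0.], [1.], [1.], [0.], [0.], [1.], [0.]])

# One training step: show the box examples and nudge its weights
optimiser.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimiser.step()

# After enough such steps, ask 'is this a cat?' about a new image
with torch.no_grad():
    prob_cat = torch.sigmoid(model(torch.randn(1, 3, 64, 64)))
```

Nothing in the trained weights is human-readable – hence Hsu’s point that no one quite knows how the box knows.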

To add pose estimation into the algorithm’s learning process, each image’s data must include the reference points of the camera’s location, distance and orientation from the target, in multiple axes. “To feed this algorithm, we had to collect lots and lots of images of a plane,” said Hsu.

Foiled by the weather
Constrained by pandemic restrictions, the team eventually found a suitable target aircraft – a venerable Douglas DC-3 – in the Canadian Museum of Flight’s collection, parked on the ramp at Langley Regional Airport, east of Vancouver.

Researchers from Simon Fraser University (SFU) were tasked with capturing the images and positional data, and feeding them into the pose estimation algorithm they had developed.

“What they did was put QR codes onto the aircraft. You needed to have more than one visible in the picture, so it could triangulate its position,” said Hsu.
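That fiducial step can be sketched with OpenCV’s built-in QR detector: each decoded code identifies a patch of airframe whose physical corner positions have been measured, so the detected corners become 3D–2D correspondences for a pose solve. Everything here – the surveyed coordinates, dictionary keys and file name – is hypothetical, and the SFU pipeline was its own implementation:

```python
import numpy as np
import cv2

# Surveyed corner coordinates (aircraft frame, metres) of each QR code,
# keyed by the text it encodes - hypothetical values
QR_CORNERS_3D = {
    "fuselage-station-3": np.array([[2.0, 1.0, 1.2], [2.2, 1.0, 1.2],
                                    [2.2, 0.8, 1.2], [2.0, 0.8, 1.2]],
                                   dtype=np.float32),
    # ...one entry per code placed on the airframe
}

camera_matrix = np.array([[1500, 0, 960],
                          [0, 1500, 540],
                          [0, 0, 1]], dtype=np.float32)

frame = cv2.imread("dc3_frame.jpg")            # placeholder image file
detector = cv2.QRCodeDetector()
found, texts, corners, _ = detector.detectAndDecodeMulti(frame)

if found:
    # Pool the correspondences from every recognised code in view;
    # two or more codes constrain the pose well
    pts_3d = [QR_CORNERS_3D[t] for t in texts if t in QR_CORNERS_3D]
    pts_2d = [c for t, c in zip(texts, corners) if t in QR_CORNERS_3D]
    if pts_3d:
        ok, rvec, tvec = cv2.solvePnP(np.vstack(pts_3d),
                                      np.vstack(pts_2d).astype(np.float32),
                                      camera_matrix, np.zeros(5))
        if ok:
            print("pose label for this image:", rvec.ravel(), tvec.ravel())
```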

A long day of work and 3,000 images later, the team’s initial excitement at the results soon turned to disappointment.

“The software was trying to do the anchoring, and it didn’t do a particularly good job. This was a bit surprising to us and the team from SFU. In tests they had done, they took a bunch of images and found it worked really well.”

Analysis determined that the algorithm was confused because conditions on the day of the photography – starting with the clouds – differed from those in the earlier test images.

“The lighting is different. The shadows are different. The reflections are different. And depending on where they moved [the camera], there were viewpoints it had never seen before. We thought, this isn’t going to be good,” admitted Hsu.

Not enough cats
Data scientists have long recognised that it’s impractical – and incredibly time-consuming – to collect and feed an algorithm the tens of thousands of images the software needs to learn.

LM Nasa Orion
Augmented reality is bringing speed and accuracy to Lockheed Martin’s work on NASA’s Orion spacecraft
Lockheed Martin

For Hsu and the team, the 3,000 images of the DC-3 clearly weren’t enough. “The more data that you have, the better. If you can feed a million cat pictures into the algorithm, it’s going to do a better job than if you feed it ten [images of] cats. But collecting more data is hard,” he explained.

One solution is synthetic data generation – a burgeoning area in machine learning. The idea is to create synthetic images – like a cat in a video game – detailed enough, and with enough variation in size and colour, for the algorithm to accept and learn from.

“This is where my video game background came in,” said Hsu. Working with a graphics artist at Unity Technologies, they created images of the DC-3 with different liveries in different environments, backgrounds and lighting conditions, and moved the camera to create thousands of different viewpoints.

“They randomised everything. Then, using their rendering engine – which they provide for half of all video games in the world – they were able to generate 100,000 synthetic images in four to five hours.”

To teach the algorithm properly, however, the synthetic images must be mixed with real images; otherwise, the software won’t recognise real cats.
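In outline, such a domain-randomisation pipeline is simple: sample a scene configuration, render it with the camera pose recorded as ground truth, then blend the output with real photographs. The Python sketch below is schematic – random_scene, render and load_real_images are hypothetical stand-ins for Unity’s engine and the Langley image set:

```python
import random

# Illustrative randomisation ranges for each synthetic image
LIVERIES = ["bare-metal", "museum", "wartime-olive", "fictional-airline"]
BACKDROPS = ["ramp", "hangar", "grass-strip", "overcast-sky"]

def random_scene():
    """Sample one randomised scene configuration for the DC-3 model."""
    return {
        "livery": random.choice(LIVERIES),
        "backdrop": random.choice(BACKDROPS),
        "sun_elevation_deg": random.uniform(5, 85),
        "cloud_cover": random.uniform(0.0, 1.0),
        # camera placed on a hemisphere around the aircraft
        "camera_azimuth_deg": random.uniform(0, 360),
        "camera_distance_m": random.uniform(5, 40),
    }

def render(scene):
    """Stand-in for the renderer (Unity's engine in the project)."""
    image = None  # a real pipeline would return pixels here
    pose = (scene["camera_azimuth_deg"], scene["camera_distance_m"])
    return image, pose

def load_real_images(path):
    """Hypothetical loader for the real, QR-surveyed photographs."""
    return []

# Each synthetic image arrives labelled with the exact camera pose that
# produced it - the ground truth the pose estimator learns from...
synthetic = [render(random_scene()) for _ in range(1_000)]  # 100,000 in the project

# ...then real photographs are mixed in so the model also recognises
# an actual aircraft, not just rendered ones
training_set = synthetic + load_real_images("langley_dc3/")
```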

More work needed
Hsu was encouraged by this approach, but “we ran out of time and budget, and the project ended. I’m in the process of getting more funding from other sources, and basically everyone has agreed that they’re going to fund another phase to continue the research.”

He’s thinking about new approaches to try, including using an iPad Pro’s LIDAR – light detection and ranging – sensor to create digital maps of the aircraft’s surface.

Despite the challenges of the journey, Hsu is optimistic and undeterred. “To create a simple application that mechanics will want to use, we’re going to have to do it a different way. And that’s why we’re spending so much time trying to come up with techniques,” he said. “What we’re doing is research.”

AR in space
Augmented reality is playing an integral part in NASA’s effort to take astronauts to the moon – and beyond – through the design and manufacture of Lockheed Martin’s Orion spacecraft.

Smart Glasses
AR-enabled, ergonomically designed smart glasses help SATS staff with ramp loading at Singapore Changi Airport
SATS

“There are complex processes that our teams have to manage within the space industry,” explained Shelley Peterson, systems engineer senior staff, Lockheed Martin.

Technicians using AR wearable devices can access step-by-step instructions of an assembly process or can easily call up detailed overlays of component placement layouts that ‘float’ over the actual spacecraft.

“We have 57,000 fasteners on an Orion spacecraft and use AR to know where to place them. What would normally take two to three technicians multiple shifts, we can now complete with one technician in two and a half hours,” said Peterson. “The improved accuracy of component placement with AR has been the biggest benefit, eliminating the need to constantly double-check work instructions.”

According to Peterson, technicians are accurately completing installs within one-tenth of an inch – easily bettering the defined placement tolerances of between a quarter and half an inch.

The aerospace giant is also expanding the use of AR to satellite production. In one particular case, technicians were able to reduce the time required by 90% when compared to using more conventional paper-based work instructions.

“By integrating visual capabilities and design and build processes, we can provide complete, accurate instructions for our technicians,” said Dan Driscoll, vice president of innovation and resilient digital environments, Lockheed Martin.

Augmented reality and cargo
At Singapore Changi Airport, ground services provider SATS has deployed AR in its ramp loading operations for baggage and cargo. SATS’s ramp-loading officers (RLO) wear AR-enabled, ergonomically designed smart glasses that feature a monocular display, an onboard camera and wireless connectivity for data communications.   As cargo is prepared for loading, an RLO on the ramp first uses a tablet to identify the cargo or baggage containers and their corresponding loading positions in the aircraft – along with digital loading instructions for the aircraft. The loading instructions are also displayed on the smart glasses, to help the RLO visualise the loading sequence.   The loading process is monitored in real time at the SATS ramp control centre, giving oversight to the loading procedure and assisting the RLOs with instructions on cargo with special handling requirements.   Once the cargo or baggage container is in the hold, the RLO in the aircraft uses the smart glasses to scan the position of the container, using optical recognition technology. This is to confirm that the container is loaded in its correct position, which is a critical double-check for weight and balance calculations for an aircraft.   Confirmation pictures are then taken, using the smart glasses, to show that the cargo or baggage container is properly secured with locks. The images are then kept and stored. SATS is now exploring the addition of video analytics to the AR processing, for the identification of non-standard shaped cargo, using the smart glasses’ camera function.