Episode 13

Published on:

28th Nov 2024

AI in aerospace - the CAA’s response

Automation and autonomy already play a key role in many aspects of aerospace, including autopilot and air traffic management, but the future will be enabled by artificial intelligence (AI), which has the potential to impact every part of the aerospace sector.

In this episode, we’re joined by James Bell, Innovation Strategy Lead, who has led the work for the UK CAA’s Strategy for Artificial Intelligence. 

We hear from Florian Ostmann, Director of AI Governance and Regulatory Innovation with the Turing Institute about how AI has evolved in recent years and what the future holds. 

We also speak to Vicki Murdie from the Future Flight Team at Innovate UK to learn about potential new uses of AI across the aviation landscape.

Further information related to this episode

About the Future Flight Challenge

Launched in 2019, the Future Flight Challenge is a £125 million investment in the UK's aviation industry, designed to deliver the third revolution in aviation.

Unmanned aerial vehicles (drones), advanced air mobility (AAM) and zero-carbon aircraft will come together to transform how we connect people, deliver goods and provide services.

The challenge is delivered by Innovate UK and the Economic and Social Research Council on behalf of UK Research and Innovation (UKRI).

Find out more at:

Transcript
Voiceover:

This is CAA on Air.

Nathan Lovett (CAA):

Hello and welcome to CAA On Air, the podcast from the UK Civil Aviation Authority covering innovation and future technologies. I'm Nathan Lovett from the CAA communications team, and in this episode we're looking at the evolving and expanding role of artificial intelligence, or AI, throughout the aviation industry. Now, as we know, AI is already fundamental to many aspects of aviation. Examples include enhancing safety and efficiency through predictive maintenance, aiding air traffic management, and refining pilot training with advanced insights and simulations. But the future of AI is set to usher in a new era, with artificial intelligence and increasing degrees of autonomy having the potential to impact every part of the aviation sector. You're going to hear from experts about the evolution of AI and the challenges that come with it. We'll cover the use of AI in aviation, how it could be implemented in future, and the potential benefits it could deliver. We're also going to discuss how the UK CAA is approaching the regulation of AI and autonomy. So, joining me as co-host for this episode is James Bell from the CAA innovation team. Thanks for doing this, James. Can you start, please, by telling us about your role and the work that you're doing on this?

James Bell (CAA):

Hi. So, I'm James Bell, innovation strategy lead for the Civil Aviation Authority, and I have the pleasure of pulling together the CAA's first strategy for artificial intelligence. I'm really looking forward to exploring some of the features of that strategy and our thinking in this podcast, and we've got some great guests who are going to give us their insights and help us along that journey.

Nathan Lovett (CAA):

Thanks, James. So, we wanted to lay some of the groundwork in terms of understanding the fundamentals of artificial intelligence, and where better to start than the Alan Turing Institute? We're joined now by Florian Ostmann, who leads a team at the Institute. Welcome, Florian. Please can you tell us about the work that you're doing there?

Florian Ostmann (Turing Institute):

Hello. Thank you very much for the invitation to participate in this podcast. My name is Florian Ostmann. I am the head of AI governance and regulatory innovation at the Alan Turing Institute. The Alan Turing Institute is the UK's national institute for data science and AI, and the team that I lead within the institute, as the title suggests, is focused on AI governance and regulation. That involves doing research, academic research, but also, in many cases, working quite closely with regulators to help think through the implications of AI in different regulatory remits, the risks associated with AI use cases and ways of responding to those risks from a regulatory perspective, and, more generally, thinking about strategies for achieving responsible development and responsible use of AI technologies in different areas. In addition to our work that's focused on regulation, we also do a lot of work on non-regulatory governance mechanisms, the most prominent element being the role of standards in AI governance: standards as a voluntary tool which, of course, has many connections, in different contexts, to regulatory strategies. As part of that work, we co-lead the AI Standards Hub, a partnership between the Alan Turing Institute, the British Standards Institution and the National Physical Laboratory, and an initiative dedicated especially to international standards for AI, which have become increasingly important as a tool for AI governance around the world.

James Bell (CAA):

Oh, that's brilliant. Thank you, Florian, and it's really great to have you here with us. I guess we could start with the fact that AI has been a real hot topic, certainly over the last couple of years. I think most people have really tuned into AI with the introduction of OpenAI and ChatGPT and Microsoft Copilot, and all of those pretty interesting news stories that we're seeing all over the place, really. So, I guess from your perspective, you guys must have been very busy these last couple of years. Could you give us a bit of an introduction to the kind of things you've been involved with, how that's evolved, and how you've seen AI evolve to where it is now?

Florian Ostmann (Turing Institute):

Yeah, yeah, absolutely. It's certainly been a busy period, let's say the last 12 to 18 months, since the public release of large language models such as ChatGPT, as you say. Those types of AI systems have really led to an explosion of public interest in AI, and they are currently dominating public discussions around it. In terms of the work that we've been involved in, our work in many ways predates the arrival of generative AI on the public scene, as it were. So I would maybe start with a reflection on the history of AI as a field, and I think it's particularly interesting to engage in that reflection in the context of an industry such as the aviation sector, because it is a sector where we actually see that history play out. What often gets lost in the current hype around the most recent AI innovations, such as large language models and chat-based systems like ChatGPT, is that AI as a research field has existed for many decades; as an academic enterprise it started in the 50s. That's when the term artificial intelligence was coined as a research concept, and what we've seen since then is a research program that was basically defined early on in the 50s, and then a history of advances, and you might say disappointments, along the way: significant progress in solving certain problems that define the field, but also a lack of progress, or disappointment in the pace of progress, in solving other challenges. The main game changer in, say, the last decade or so has been a shift in paradigm in terms of approaching the search for solutions to problems in the AI space. What we saw historically, when the field started in the 50s, was an approach often described as a rules-based approach, or one that relied on the idea of expert systems, where developing an AI system involved encapsulating human knowledge in the form of rules that were explicitly programmed, and then having the system operate based on those rules. They were pre-programmed rules, and they were transparent to humans, because the human developers created those rules. I think it's interesting to reflect on that in the aviation context, because if you think of traditional, long-standing systems such as autopilots in planes, you might think of those as falling into that category. And, of course, it's been a common theme throughout the history of AI that things once thought of as AI, when they become commonplace, are often no longer referred to in that way; it's been a long time since people thought of an autopilot in a plane as a form of artificial intelligence. So that was the early paradigm, and what we've seen over the last decade or so is a shift to an approach commonly known as machine learning, which relies on data as an input and involves developing systems that learn from data how to optimally perform a task, rather than being explicitly programmed. And that change in approach has led to significant breakthroughs in all kinds of domains.

Early on there were impressive breakthroughs in the playing of games, for example complex games such as Go, and then, most recently, large language models such as ChatGPT, the kind of system that could never feasibly have been developed through explicitly programmed rules. That has only been possible through this machine learning based approach; the reliance on data and the training of models has enabled breakthroughs, but it has, of course, also led to new challenges, because systems developed in this way are often less transparent and more difficult to understand, and as a result may entail risks and challenges that are more difficult to manage compared to the more traditional rules-based approaches to developing AI.
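To make the shift Florian describes a little more concrete, here is a minimal, illustrative sketch, not taken from the episode, contrasting an explicitly programmed rule with behaviour learned from data. It assumes the scikit-learn library is available, and the altitude-deviation feature, thresholds and figures are invented purely for illustration.

```python
# Illustrative only: contrasting the two paradigms described above.
# Assumes scikit-learn is installed; the "altitude deviation" feature and all
# numbers are invented for the example, not taken from any real system.

from sklearn.linear_model import LogisticRegression

# 1) Rules-based / expert-system style: the logic is written by a human and is
#    fully transparent -- you can read the rule and predict its output.
def rule_based_alert(altitude_deviation_ft: float) -> bool:
    return altitude_deviation_ft > 300  # explicit, pre-programmed threshold

# 2) Machine-learning style: the logic is learned from labelled examples, so
#    the behaviour depends entirely on the data it was trained on.
X = [[50], [120], [280], [350], [500], [900]]   # altitude deviations (ft)
y = [0, 0, 0, 1, 1, 1]                          # 0 = no alert, 1 = alert
model = LogisticRegression().fit(X, y)

print(rule_based_alert(400))        # True: follows the written rule
print(model.predict([[400]])[0])    # learned behaviour; depends on the data
```

The first function's behaviour can be read directly from its rule; the second can only be characterized by the data it saw and by probing its inputs and outputs, which is the transparency difference discussed later in the episode.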

James Bell (CAA):

I mean, that's an incredible background, isn't it, to how AI has developed over the years. Would you say now that AI, in its latest evolution with the onset of machine learning, is perhaps more aligned to the 50s aspiration of what artificial intelligence could be?

Florian Ostmann (Turing Institute):

Yes, I would say so, in the sense that it has brought us a lot closer to solving some of the problems that define the field. Every research field is defined through a set of problems, right, that the research is trying to solve, and through the use of machine learning and the advances it has enabled, we've been able to solve a lot of problems, or are on track to solve problems, that seemed intractable using a rules-based approach. It's important to emphasize, though, that a lot of the theory that underpins machine learning was developed very early on. What has changed, really, is the ability to put that into practice. So, the game changer has been a combination of the availability of the data needed to make that approach work, and the availability of compute at scale.

Nathan Lovett (CAA):

And so, Florian, although artificial intelligence has been around for many decades, this evolution into what could be called modern AI, or advanced AI, presents new regulatory and technical challenges, doesn't it?

Florian Ostmann (Turing Institute):

Yes, I think that's a really important point that comes with the change in approach and the types of AI systems that have been driving innovation in recent years. These challenges are essentially rooted in the characteristics that come with machine learning based approaches to developing AI. For one thing, there is the reliance on data in training models and training systems, which means, in practice, that everything about the system depends on the data used to develop it. And so, in terms of quality management and in terms of risks, there are a lot of things that may go wrong if there are issues with the data used to develop the system. Prominent examples that have been quite salient in public discussions around AI recently are the issues around bias or fairness: decision-making systems based on machine learning where, if the data used to develop the system is biased, you might easily get a system that makes biased recommendations or decisions. But fairness is just one aspect. Safety might be a more salient consideration in the aviation context: if the data is unreliable or has quality issues, you may end up with a system that has failure modes you haven't anticipated but which are rooted in issues with the data. So, data-related challenges are one important aspect. The other main aspect I would highlight is related to the complexity of those systems from a computational perspective, and to some extent their intelligibility or, to use the term some people prefer, explainability. In the extreme case, machine learning based systems may take the form of black boxes, where we're dealing with systems that are essentially inscrutable for a human, even for the human developers who built them, and all you can observe is the relationship between inputs into the system and its outputs. There's a lack of transparency about how the system transforms inputs into outputs. As a result, again, there's a challenge around understanding how the system might fail, what its weaknesses might be, and so on, and that's a big difference from a rules-based system built on explicitly programmed rules, which, of course, means you can deduce from the rules what possible failures of the system might look like.

James Bell (CAA):

I guess we've been familiar with the concept of a black box system for a number of years, but this brings about a whole new concept of a black box, in the sense that, as you described there, Florian, it's not only a black box because we can't interpret what's inside it, but to some degree, we can't even get our heads around this sort of thing, right?

Florian Ostmann (Turing Institute):

Yeah. I think you might say that this issue around transparency or explainability is, to some extent, a matter of degree, and it reaches an extreme point in the case of systems such as ChatGPT or large language model based systems, where it's essentially impossible to understand the relationship between inputs and outputs from a human perspective.

Nathan Lovett (CAA):

And talking about humans, there's a lot of talk about this fear that AI, in this latest evolution, is going to start replacing people's jobs. What's your view on that, Florian?

Florian Ostmann (Turing Institute):

I think it's a really important topic in the broader field of responsible deployment of AI. It's also a topic that lends itself to oversimplifications. It's actually a fairly complex question, from a research perspective and a policy perspective, to try to make predictions about the impact of AI on employment. One way to illustrate that is by considering the fact that, in many contexts, what AI systems will do when they're introduced is automate certain tasks rather than automating entire jobs, if that makes sense. And as a result, there's a question: if the time of employees is freed up because certain tasks are taken off them, what happens with that time? One potential scenario is simply to reduce the quantity of human labor involved in production processes, but there may also be new and more creative uses for the time that is freed up. There's an often quoted example relating back to when cash machines were originally introduced: the interesting historical observation is that they did not necessarily have the impact on the number of bank teller staff that you might have expected, because banks realized there were other profitable ways of making use of the time that was freed up. So it is a fairly complex question, and a fairly context-dependent one. The last thing I would say is that it's really important, from a responsible AI perspective, to think about human oversight in the contexts where AI is being used, and also to think about human-machine teaming in the AI context, as it were. We know that the benefits of AI are often greatest, and the risks, which are the more important part, will be most effectively addressed, if, rather than replacing humans in a particular task context, you combine the deployment of AI systems with appropriate forms of human oversight and with a human component in the execution of the task. It's really important to emphasize that role of the human in the responsible use and deployment of AI, which entails a perspective that doesn't simply put organizations in the frame of mind of, how can we substitute humans, but rather, what's the responsible way of deploying AI, and what role do humans play to make that possible?

James Bell (CAA):

So perhaps you could say, and it's not my quote, it's something I've taken from somewhere else, but AI has the potential to make humans more human.

Florian Ostmann (Turing Institute):

Potentially, you might say. There are, of course, all kinds of tasks that are rather tedious, and not necessarily the kinds of tasks that involve a flourishing of the capabilities that make humans distinct from machines. So, yes, I think there is the potential of a future world of work where humans are free to focus on things that draw on the unique aspects of human capabilities, and leave those more tedious, simple aspects of work to machines.

Nathan Lovett (CAA):

We've talked a lot about challenges and risks, and rightly so. But in terms of positives, if you could pick one thing that you are most excited about in this new evolution of AI, what would that be?

Florian Ostmann (Turing Institute):

I think one aspect I would highlight, and it's particularly relevant in the transport context, is the potential of AI to make our world safer and to help us solve challenges that have existed for a long time and have been really difficult to resolve effectively, especially challenges around optimal decision making, in whatever context and whatever form that may take. To give a couple of examples, starting with the safety context: the aviation industry, of course, is characterized by really high levels of safety compared to other sectors and other areas of transport, but it's quite clear that, in the right constellation of deploying AI systems with appropriate human oversight, there is real potential for AI to make a difference and to help address especially those kinds of safety risks that result from areas where human agents have weaknesses due to the nature of human cognition and human approaches to tasks. So, safety is the all-important aspect. And then we might think of the transport and aviation sectors as sectors where optimization challenges are quite prominent. If you think about the complexities of managing transport infrastructures, successfully solving an optimization task really involves thinking at a level of complexity that is very difficult for humans, and it also reaches the limits of more traditional computing approaches. So, especially when thinking about advanced forms of AI that are adaptive and can learn from real-time data, I think there's a lot of potential for AI to enable improvements in the way we do things and in the way infrastructure is operated.

James Bell (CAA):

Well, thank you so much, Florian. Super to have your insights, and really helpful to set us up for the rest of this podcast.

Florian Ostmann (Turing Institute):

Thank you very much for the invitation. It's been a pleasure, and aviation is a really interesting context for reflecting on the role of AI.

Voiceover:

You're listening to CAA On Air.

Nathan Lovett (CAA):

Building on that insight from Florian, it's now time to look at what this means for the aviation sector. We're joined by Vicki Murdie from the Future Flight Challenge team at Innovate UK, which is part of UK Research and Innovation, also known as UKRI. So welcome, Vicki. Please can you tell us what UKRI does and about your role there?

Vicki Murdie (UKRI):

Hi. I'm Vicki Murdie. I'm an Innovation Lead in the Future Flight Challenge at UKRI. The Future Flight Challenge is delivered by Innovate UK and the ESRC, which is the Economic and Social Research Council. It's a multi-year, multi-million-pound, government-backed challenge looking at how to introduce new forms of flight into the UK so that we get the benefit of these new technologies for UK people and society, which is why we've got both social scientists and engineers involved in delivering this challenge: we want to look at the technologies, but also how people use them and access them, and what people think about these new technologies, alongside their development.

James Bell (CAA):

Well, thank you so much for joining us today, Vicki. With your view of research in the sector and the Future Flight Challenge itself, and of course your role as an Innovation Lead, what sort of uses of artificial intelligence are you seeing in the sector?

Vicki Murdie (UKRI):

It's an interesting area, actually. Within Future Flight itself, we're a particular part of the broader aviation spectrum, but it's an area where there's a real push for adopting new technologies. So, the kinds of technologies we're seeing in other industries, we're seeing people starting to try to adopt within the future flight space. But I think within aviation generally we're going to see those technologies go into wider and wider uses. So, whilst we see them starting now in the future flight sector, we can see how they will grow and evolve in other areas of aviation. Generally, they're falling into three main categories: artificial intelligence usage around the actual aircraft platforms themselves, then other aspects around aviation infrastructure, and finally airspace usage and control.

Nathan Lovett (CAA):

Thanks, Vicki. So we've got applications across all aspects of the aviation sector, really. Looking into the future, what do you think we'll see in terms of platform applications for AI?

Vicki Murdie (UKRI):

One of the most obvious ones that I see people starting to develop now is in the drone space. In Future Flight we work quite a lot with what most people will know as drones, or remotely piloted aircraft systems, whatever you like to call them; I'm just going to say drones, because it's easier. Because these drones don't have a pilot physically on board, it's an obvious area where you might use AI or heavily automated systems to control the flight of the drone and its interaction with the other data and services it needs in order to fly safely and where it's expected to fly. That's something we're already seeing in some of the projects within the challenge, where people are developing systems to bring those different elements together using automation and potentially artificial intelligence. But I don't think that's the only area where we're going to see these technologies used. There's always the opportunity to look at how you can use technology to be more efficient in things like the training that you deliver for pilots using simulators. We already use flight simulators a lot in training, but we could use them further to create different scenarios, and perhaps make more use of simulation earlier in people's training for all kinds of different types of aircraft, not just sitting in a big Boeing 737 simulator as it is today. There's also the potential to use these technologies in the flight deck. Crew efficiency and crew workload are always going to be considerations within aviation, and we should try to make sure we're using technology to enhance and assist with that, potentially even getting to the point, for certain aircraft types, of being able to reduce the piloting to one pilot on board rather than two. I can't see that happening particularly quickly with large passenger-carrying aircraft, but for some of the smaller aircraft you can see that it could be something we could look at introducing. There's also a lot of work involved in actually flying an aircraft when you've got very windy conditions or unstable approaches, so it's possible you could look at how those technologies could enhance that element of the flight as well. We're already seeing in military aircraft today that the aircraft itself is inherently unstable and entirely reliant on its control systems; there's so much going on that a human can't actually control it in real time. You can see the possibility of transferring some of that thinking and some of that control system behaviour into areas where it's challenging to fly aircraft today for non-military uses, and I think that's somewhere you could see artificial intelligence being used in the future.

Nathan Lovett (CAA):

I think what we're seeing is that there's been a long journey in terms of automation over the last 50 years or so in the aviation sector, which has enabled things like autopilot. But now we're seeing this move into advanced AI and machine learning and what that can bring on top of more traditional automation. So there are some really exciting applications there, thank you. In terms of the infrastructure category that you mentioned earlier, we're certainly seeing lots of airports that we're engaging with using artificial intelligence and machine learning as a way to really realize value from the wealth of data that they gather, whether it's passengers passing through the airport, or delays, or perhaps what's causing operational bottlenecks. There's so much data that they're already capturing, and the machine learning side of AI is bringing all of that together into one place and helping them identify areas where they could improve efficiency. What are you seeing on the future flight side?

Vicki Murdie (UKRI):

Through some of the work we're doing with the Connected Places Catapult, where they have accelerator programs, one of the areas they look at is working with airports. So we are seeing that people, as you say, are looking at the fact that an airport is such a complex environment: so much is going on, so many moving pieces of the jigsaw puzzle. There's all the ground support equipment moving around to take people's bags to the right places and make sure that the right set of stairs is at the right aircraft. There are all of those vehicles moving around, all of the security side of things, passengers wanting to flow as efficiently as possible through the building, and the services that passengers actually receive in the airport at check-in and bag drop. There's so much going on in that environment, and with it being such a data-rich one, it's obvious, as you say, that people will look at how you can use AI to improve it, because, quite frankly, there's so much data that it's hard for the human brain to assimilate all of it and make decisions, whereas computers can manage that, artificial intelligence can do that, and it can learn, and it can better predict maintenance on those vehicles. We're seeing one of those small projects I was talking about looking at using AI to help with security services. Now, that always raises eyebrows when you say it in general public discussions, because obviously the one thing we've got to be careful of, whenever we're using AI around people's data or anything that impacts on safety, is to make sure that we're taking the technology on a journey of working with people, developing safe systems, and helping people understand how they're safe. So, as I say, we do see uses being developed for things like artificial intelligence in security at airports, because it can make the passenger journey so much easier and quicker and make things more efficient for the staff as well. It's always going to be a challenging area when you talk about it with the wider public, and being transparent about the fact that you're using these technologies to make people's lives quicker and easier, but that you're doing it in a trusted way, is, I think, one of the areas that's going to need to develop further.

James Bell (CAA):

Great. Thank you. Vicki, that's really interesting. So, we've had a little bit of a look at aircraft platform applications and then the infrastructure that you mentioned just now, the applications around that. The third area was around airspace applications. Is there anything that you can tell us about the work that's going on in this area, particularly how AI might start being used in that space?

Vicki Murdie (UKRI):

Yeah, so working within the Future Flight Challenge, it's obvious that we are seeing new forms of aviation that will be entering the airspace, and there are always challenges in doing that, because we already have people using the airspace today, and if you're going to suddenly have a lot more vehicles using the airspace in the future, you need to think about things differently. So AI has a huge opportunity there to sit alongside the systems that we have today and to modernize those systems, so that we can allow aircraft of all different types equal access to the airspace, but in efficient ways. Within the drone side of things, we have traffic management systems, often referred to as unified traffic management, or UTM, systems. That's looking at, if you've got lots of drones flying around the same area, how do you actually schedule all of that? How do you make sure they're following the most efficient flight path and staying separated from other aircraft? So UTM systems are being developed at the moment for that type of aircraft, where they're uncrewed, but you can see that the sorts of technologies being developed there could then start to sit alongside, and adapt to, crewed aircraft traffic management, the traditional airspace traffic management systems. So I think we're really going to start to see those technologies come together; there's a massive opportunity for ATM systems to modernize and develop and to use AI. If you imagine the disruption it causes when an event happens somewhere that means you've got to move aircraft out of an airspace, say a major thunderstorm over an airport, you've potentially got a lot of different flights to divert and lots of complex data, and you've maybe got the opportunity to use AI to enhance that decision-making process so that you can keep more aircraft flying and not have to cancel flights, because you can reroute things and move things around more efficiently than we can perhaps do today, as it's quite labor-intensive doing that process. You can see those sorts of opportunities in the future, once these systems are developed and shown to be trustworthy. And then there's the weather: in aviation we're always at the mercy of the weather, and let's face it, in the UK we have some interesting and varied weather. So being able to precisely understand weather data, understanding what's happening now and what's going to happen, looking at those patterns and better understanding the impact that may have on aviation, it's another really data-rich environment where AI has the potential to leave us much better informed and better able to make decisions.

Nathan Lovett (CAA):

There's definitely a theme emerging across those three categories highlighted there around, let's say, data richness and the ability of these artificial intelligence systems to learn from that data and extract insights that experts across the whole sector can use to make informed decisions.

Vicki Murdie (UKRI):

Yes, I think that's very much key. Any AI system is only going to be as good as what you're training it on in the first place, so you often need data-rich environments to actually fully train an AI system. And the fact that there is already so much data out there gives us the ideal opportunity to train systems well, and to use that to extract greater understanding from that data than we can through human processes today.

James Bell (CAA):

And we know that there's lots of data that exists already, and I've heard a lot that it's really important to make sure that the data going into an AI system is fair and unbiased. That's one of the many challenges: data that stems from perhaps decades of aviation, or even just the last few years, may not have been intended for this purpose. So I guess there's some cleaning up to do, perhaps, on that data?

Vicki Murdie (UKRI):

I think that's a really good point, because, yes, you can very easily introduce an unintended bias through which data sets you feed something. You might think, oh, this is broad and this is representative, but actually there is a challenge in checking that the data genuinely is, and that it doesn't introduce an unintended consequence or an unintended bias. That's why looking at how things learn and what decisions they're coming out with, keeping an eye out for any bias that creeps in, and, if you do start to see things, being able to say, okay, it obviously needs additional information from somewhere else in order to avoid a bias that's developing, is going to be very important as we go forward.
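As an illustration of the point about unrepresentative data, the following hypothetical sketch, not from the episode and assuming scikit-learn, shows how a model trained only on a narrow slice of conditions will still give confident-looking answers well outside that slice. The taxi-delay scenario and every number in it are invented.

```python
# Illustrative only: how an unrepresentative training set becomes a hidden
# weakness. All figures are invented; assumes scikit-learn is installed.

from sklearn.linear_model import LinearRegression

# Training data drawn only from calm-weather operations (wind 0-10 kt),
# relating wind speed to taxi-out delay in minutes.
X_train = [[0], [2], [4], [6], [8], [10]]
y_train = [5, 5, 6, 6, 7, 7]

model = LinearRegression().fit(X_train, y_train)

# The model is then asked about a stormy day it has never seen. It happily
# extrapolates a figure, but nothing in the training data supports it.
print(model.predict([[45]]))  # confident-looking, yet unsupported, estimate
```

Checking that the training data genuinely covers the conditions the system will meet in service, and monitoring the outputs for bias or drift, is exactly the kind of discipline Vicki describes.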

Nathan Lovett (CAA):

Vicki, what do you see as the broadest challenge relating to artificial intelligence?

Vicki Murdie (UKRI):

So, I think the biggest challenge in introducing AI technology into aviation is thinking about the human element. Aviation is a heavily regulated industry because, for good reason, we want to keep it safe. It's got a good safety history, and as we introduce new technologies into aviation they always have to be demonstrated to be safe. But I think when you start talking about technology like AI, it's not just, does the regulator think it's safe; it's also, does the public think it's safe? Do people understand what's being used, how it's being used, and how that is safe? Because people will have a range of opinions on this. AI gets talked about so much in the media, and it's very easy for us to feel that it's going to be everywhere and that it could be a very negative thing, when the reality is we're trying to introduce it for positive reasons. So I think we need to make sure that, as the technologies are being developed, we do that hand in hand with talking to public groups to understand their thoughts, their concerns, how we can address those and what we need to learn going forward. This isn't really a challenge just for aviation, because it's the same with all AI technologies being introduced into different areas as well. But I think with aviation, people have that particular desire to really understand the safety side and to know and feel reassured that what's happening is safe and continues to be so.

James Bell (CAA):

I guess you could reflect on the introduction of digital technologies in a very similar way to more physical technologies, thinking about the concept of a social licence. If we were introducing a new physical thing that was going to be in people's public spaces, for example around their neighborhoods, you'd want to engage with those people and that public really effectively and regularly to make sure that they understand it. People need to trust that a system is going to fit in with their lives and not interrupt too much. I guess it's very similar with a digital technology like AI. What do you think?

Vicki Murdie (UKRI):

It is, but I think you have an added challenge as well, because where something is physical infrastructure, or something physical, people can see it, and they can come to trust it through seeing it. But when we talk about AI, it's things that are behind the scenes; it's those digital technologies that we can't see. So I think there's that extra layer of needing to understand, because it's not tangibly in front of people to see and feel. It's all data, it's all digital, it's all computers, and therefore it just has that extra layer, I think, of a need for clarity for people.

Voiceover:

Bringing you the latest updates from our Innovation team, this is CAA On Air.

Nathan Lovett (CAA):

So, we're going to look at the work that the UK CAA is doing around AI. But before we get started on that, James, what do we actually mean by artificial intelligence? How should we define that term?

James Bell (CAA):

It's an age-old question, Nathan. The term artificial intelligence was coined back in the 1950s, and, as Florian has said, back then it was a research area. We're getting into the stage now where artificial intelligence is really coming into its own with what we might call real AI, and we've been talking about some of those examples already. If we look back at the aviation sector over the last few decades, we've seen lots of automation using very logical technology: things like autopilot, automated engine controls and flight management systems. If you think about stepping into a Concorde cockpit, the first good few feet of the cockpit is gauges and switches and dials before you even get to the seats where the pilots fly the aircraft, whereas in a modern aircraft you jump into the cockpit and there are screens and digital computers that have taken all of those systems, all of those controls, and automated them using modern technology. What we're seeing now, as Florian has described, is this move into, call it, real AI with machine learning, where we're getting completely new ways of automating tasks that were too complex to automate with traditional automation. So then, what do we, as the regulator, really mean by AI in this context? Well, we talked about automation. Automation is the application of technology to perform human tasks and operations in a way that reduces the need for a human to intervene. If there's trust in that automation, we can achieve autonomy: that's where a system is designed in such a way that a human doesn't need to control it, or even oversee it. So if the human, if you like, can step back from the system, then we achieve a level of autonomy. Automation and autonomy are two really key terms in this discussion, and artificial intelligence sits alongside them as just another technology that can be used to achieve automation, and therefore to give us a higher level of autonomy. That's a really important distinction to make between those three terms. We do see some organizations using them in slightly different ways, or interpreting those three terms differently, perhaps talking about artificial intelligence as the pinnacle of technological development, over and above automation and autonomy, on some kind of capability scale. But actually we see them differently: they're three interrelated terms, and it's really important to recognize that. We've published some guidance on this, so you can go to the CAA's website and look up CAP 2966, which is an introduction to the terminology around AI. It's hopefully set out in a really accessible format and gives a good introduction to this topic.

Nathan Lovett (CAA):

So we've heard from Vicki about some of the areas where AI is likely to be used across the aviation landscape. But what does that mean for us as a regulator and for you in your work? What are the challenges here in terms of regulation?

James Bell (CAA):

That's a really interesting one to delve into. We're not alone as an aviation regulator in terms of the challenges that we see from artificial intelligence, and we're quite thankful for that, because we can share some of the learning with other sectors and internationally. With guidance from the Office for AI in the Department for Science, Innovation and Technology, those regulatory challenges have helpfully been dissected into two key topics. The first one is autonomy. We've already talked about that a little bit with the definition of autonomy, but essentially that's the removal of humans from an aviation system or operation. And of course, if we're removing humans from what have typically been designed, over decades, as systems that rely on humans for control or oversight, or both, we've really got to think about, well, if we've got an autonomous system, who's accountable, who has the authority and who has the responsibility for that system if something goes wrong? What's the operator's role, and how does that change? And how do we, as a regulator, even certify those systems as being safe and secure? There are some questions that we have to ask ourselves, and, as I say, thankfully they're not questions isolated to the aviation sector, but certainly how we deal with those questions and how we go about answering them will be specific to the sector to some degree. Then we move on to the second category of challenge associated with artificial intelligence, and that's system adaptivity. That's the ability of a system, machine learning systems mostly, to adapt its own logic based on what it learns from its environment. Florian talked a little bit about this as well. In that case, we've got systems that on day one we might certify as safe and secure and to a certain standard, but over time, in service, being operated, the system is learning from what it observes and experiences, if you like, and it changes its logic and changes the way that it reaches an outcome. So by day one hundred of being used, how can we be sure that it's still safe and that its logic is still aligned to the requirements of the regulation? How do we make sure that adaptable systems stay safe and secure? Does the oversight role of the CAA need to change and adapt? What impact does it have on things like flight crew training, operator responsibilities and even just the CAA's oversight? There's lots to consider there, and what we're trying to do is pull those regulatory challenges together and map them across all of our regulatory responsibilities, so that we understand the different impacts across the sector and can prioritise where we start to really research and investigate how best to respond to them.
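To put the day-one versus day-one-hundred concern into a concrete, if deliberately toy, form, here is a hypothetical sketch of an adaptive component whose learned behaviour drifts as it keeps updating in service, alongside a crude monitoring check against the behaviour assessed at approval. All names, thresholds and figures are invented, and it does not describe any CAA or industry system.

```python
# Illustrative only: a toy "adaptive" component that keeps learning in service,
# plus a simple continuing-oversight check against the behaviour that was
# assessed at approval. Every value here is invented.

baseline_threshold = 300.0      # behaviour assessed and accepted on "day one"
tolerance = 50.0                # how far in-service adaptation may drift

threshold = baseline_threshold  # the value the system adapts over time

def update_from_experience(observed_deviation: float) -> None:
    """Nudge the learned threshold towards what the system observes."""
    global threshold
    threshold = 0.95 * threshold + 0.05 * observed_deviation

def within_approved_envelope() -> bool:
    """Has in-service adaptation drifted beyond what was approved?"""
    return abs(threshold - baseline_threshold) <= tolerance

# One hundred days of in-service observations that consistently skew high.
for day in range(1, 101):
    update_from_experience(600.0)
    if not within_approved_envelope():
        print(f"Day {day}: adapted behaviour is outside the approved envelope")
        break
```

The sketch is only meant to show why continuing oversight of adaptive systems looks different from a one-off certification: the check has to travel with the system rather than happen once at approval.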

Nathan Lovett (CAA):

So to help us understand a little bit more about the CAA response to the challenges of AI, we're joined by Ed Clay from the CAA's Future Safety and Innovation area. Ed, please, can you tell us about your role?

Ed Clay (CAA):

Yes, so I'm Head of Technical Strategy within Future Safety and Innovation, and I'm responsible for making sure that we have a strategy for each new technology: how we, the CAA, are going to change and build our capability to safely and effectively regulate new technology.

James Bell (CAA):

Fantastic, very important role. It's great to have you here to help us along this journey in the podcast. So what is the CAA's approach for enabling innovation to flourish?

Ed Clay (CAA):

So I think the first point to make is that we're a regulator, so we don't, obviously, do the innovation. That's not our job, but we are an important part of the ecosystem. I think the most important thing we can do is give industry, and those doing the innovation, visibility of what the regulation is going to be, because that allows them to know what the hurdle is and where they need to get to. That's really important in seeking investment, business planning and so on. So our approach is to give industry as early visibility as we can, and to make sure that, through things like this AI strategy, we're publishing our thinking and then keeping industry up to date, letting them know what is coming and what we're going to expect.

James Bell (CAA):

And I guess then it also helps us think about that internally as an organisation?

Ed Clay (CAA):

Exactly, yes. Particularly with very broad technologies like AI, which will impact a huge part of the CAA, probably every area, it's really important that we have that common view across the CAA about how we're going to approach it.

Nathan Lovett (CAA):

So Ed, thinking about AI more specifically, what impact would you expect this to have on the CAA's regulatory responsibilities?

Ed Clay (CAA):

I think it's going to vary a lot from use case to use case. AI is going to be used in all sorts of different things across aviation, and the impact on the CAA, or the CAA's response to AI, is going to be different depending on which area we're talking about. The way we assess that is, when we identify a potential use for a new technology, we assess it against a framework based on what are called the ICAO eight critical elements. That's ICAO's description of the eight areas where they expect a regulator to have capability, and those eight areas are what we get assessed on, what we are audited against, by ICAO. So for each new technology we look across those eight critical areas. For example, the first one is around legislation, and we'll ask: does today's legislation allow us to regulate this technology, or enable us to regulate it safely? Sometimes the answer is yes, and sometimes the answer is no and there will need to be a change to the legislation. Then we go through all the other areas as well: we consider our policy, our acceptable means of compliance, our skills within the organization, and so on. To come back to AI, there are some areas where we're going to need new legislation. We've started that process by working with the Law Commission to look at autonomy and how that's going to impact primary legislation. There are other areas where the current legislation will work, and then it's going to be about, well, do we as the CAA have the right skills? Are we doing oversight in the right way? Do we have all the acceptable means of compliance, and so on, that we need to safely regulate the sector?

James Bell (CAA):

That's a really good framework to use, especially because we were audited as a whole organization against those eight critical elements by ICAO not too long ago. So I think being able to align our AI approach to what was a whole-organization capability framework really helps us tie the specific elements of AI into those elements that contribute towards us being a competent authority.

Ed Clay (CAA):

Yeah, it's something we've seen with other technologies, and it's particularly going to be the case with AI, because it touches all parts of what the CAA does. When you consider something new, it's very easy to forget something and not think about how this is actually going to work when it's a sustainable industry operating at volume. So having that check, and that framework, is, we've found, a really useful way of making sure we're thinking about all the implications of the technology.

Nathan Lovett (CAA):

And what do you see as the benefits of providing that high level, strategic approach for regulation of AI?

Ed Clay (CAA):

So I think it comes back to what we talked about a moment ago, communicating with industry. The strategy that we're publishing is our first step on what we're going to do as the CAA to regulate AI. Like I said, we'll be making more decisions, and putting out more information, as we consider specific technologies, but this is the first step to give industry our thinking. It's also really important internally: it gives us that common view, that north star, of how we're going to approach this, and something we can come back to as we get new questions and new challenges in the future.

James Bell (CAA):

And I guess this is, after all, just one big change program, introducing AI, and communication is key, right? So having something that people can look to as a vision of the future is really important.

Ed Clay (CAA):

Yeah, that's very true. I think AI is a big part of it, but with all the changes in aviation and new technologies, I think some people would say the whole thing's a big change programme.

James Bell (CAA):

And I guess then providing that high-level framework also gives assurance to the likes of investors in the sector as well, because this is a highly regulated sector. So for encouraging inward investment into the UK, for example, being able to be public with that information could surely help.

Ed Clay (CAA):

Yes, definitely. I've spent time working in industry, and certainly when you're looking for investment, uncertainty is one thing that everyone is trying to minimize. Obviously, the best answer is: we can do this now. Clearly, that's not always going to be the case, so the next best thing is: here is the plan, and this is when we will be able to do that. I think everyone recognizes that these are difficult problems and plans will change, but that makes it all the more important to communicate and be open with industry about what we're planning and where we are.

Voiceover:

Stay up to date with SkyWise from the CAA by visiting skywise.caa.co.uk.

Nathan Lovett (CAA):

Now, the UK Government is asking regulators to take what's called a pro-innovation approach to this challenge of artificial intelligence. James, please can you talk us through what that means in practice and how regulators are responding?

James Bell (CAA):

Sure, yeah. So the Department for Science, Innovation and Technology, which previously had the Office for AI, published its thinking a couple of years ago on a principles-based approach to regulating AI. Rather than prescribing certain rules, it enables all of the sector regulators across the UK, from aviation to nuclear to environmental and all sorts of others, to use their own specialist expertise to understand how AI impacts their sectors, and then to use a common, principles-based framework to determine the best way to implement those principles so that they apply best to that sector. The pro-innovation element is there to enable the flexibility required for innovation to happen. AI is a rapidly evolving area of technology and will impact the different sectors across the UK in different ways, but there will also be common elements that we need to consider. So by providing a common framework, the government is allowing regulators to come together and share their learning against that framework, while also giving them the flexibility to apply the principles in a sector-specific fashion, if you like. And we're already seeing regulators coming together to share knowledge; I think they're really keen to horizon scan in this area and to understand what AI is doing next. We've seen loads of development, as we've already spoken about throughout the podcast, in terms of how AI has got to where it is, and it's been very rapid in the last eighteen months to two years, especially with the evolution of generative AI models like ChatGPT and generative media. The impact that has had on public perception has driven regulators to really think about what this means for their sector, for the public and for those who are impacted by the use of AI in the sector. There's a real need to generate trust. There's misinformation, and there are risks around AI that clearly come to the front of people's minds when they think about this. We've discussed how trust is absolutely fundamental to making this work, and the pro-innovation approach that the Department for Science, Innovation and Technology has driven is there to generate that trust.

Nathan Lovett (CAA):

How does this apply to us here at the UK CAA? What can you tell us about how that approach, directed by the UK Government, is influencing your work, and what it means as you take on this challenge of integrating new uses of artificial intelligence into the UK aviation system?

James Bell (CAA):

We're quite lucky as an aviation regulator, because we've had a principles-based approach for aviation safety and security for many years. So, in a sense, both as a regulator and across the industry that we regulate, the mindset of following outcomes-focused regulation is already there. We've found, speaking to other sectors, that where that mindset doesn't already exist there's a learning journey for them to go on to get to that stage. But, as I say, within aviation we're able to introduce these principles and they sit alongside existing outcome-focused approaches quite nicely. There are five principles that are fundamental to building trust in AI. These principles are aligned to the UK government's pro-innovation approach to regulating AI, and the CAA will follow and support the government's approach of continuously reviewing and learning how these principles support or hinder AI regulation in aviation, and we'll be providing feedback to the government to help with this. The first principle, safety, security and robustness, is absolutely fundamental: we want to make sure that there's no harm to people, things or the environment. We can think about an example here, something like a medical diagnosis tool. An AI that analyzes medical scans shouldn't lead to a wrong treatment, and that's the safety element. It should also be protected from data breaches that could reveal patient information, which is the security element. And finally, it should remain reliable even with imperfect scans, which is the robustness piece. So you can see there are some really key parts that we need to draw in to make sure that AI is being used in a way that is safe, secure and robust. We also need to make sure that people are aware of where AI is being used, because it has real implications for the outcomes people experience. So transparency and explainability is the next principle, and again it's really fundamental to understanding how AI works and why it decides what it decides. An example here is perhaps a loan application: if you go to get a new loan from a bank, you need to understand why that loan was approved or denied, not just get a generic rejected or approved message, but actually understand what contributed towards that decision, and the reason could be explained as, perhaps, insufficient income or negative credit history. That's the explainability element. But there's also a part where organizations using AI need to be able to explain where and why AI is being used, so that we've got the transparency element. The next one is fairness and avoidance of bias. You want to make sure that there's no unfair treatment based on who you are, and that AI is free from bias. We've understood that machine learning learns, so it has to learn from something, and we call that something training data. Training data normally needs to be quite substantial; it needs to be thousands and thousands of records that are fed into an AI system so that it can learn from them and adapt its logic, as we've talked about before, in order to provide an outcome. Aviation has existed for decades, so as we apply AI to aviation there's loads of data it can draw upon, but what if that data includes some sort of bias or inaccuracy?
We need to make sure that that training data does not result in some sort of unfair outcome for consumers. The next principle is accountability and governance: we need to make sure that somebody is always responsible for AI's actions. Let's take the example of drone delivery and say, unfortunately, the drone crashes somewhere or has some sort of failure. Well, if a drone delivering your package crashes, you should know who to hold to account for that, and the company operating that drone should have clear oversight of its AI systems so that it can understand what went wrong and identify areas for improvement. But then let's say all of those principles fail and something does go wrong, or a decision is taken by an AI system that the user or the consumer is not happy with. The last principle, then, is contestability and redress, which is the ability to challenge unfair AI decisions and to get help if someone or something is harmed in some way. Another example here is perhaps an automated parking ticket. Let's say you're parking in a city and you find yourself getting a ticket, in the post or on your email, a few weeks later, and you believe you shouldn't have got that ticket. If it was issued by an AI system, you should be able to appeal it if you believe it was wrongly issued; that's the contestability element. And if something unfortunate has happened, you should have the opportunity to explain your case and potentially get that ticket overturned, and that's the redress. So you can see, hopefully, there are five principles which cover a whole range of issues, and if an organization looking to implement AI can tick those five boxes, that starts it on the journey towards trust. That's not to say it will achieve trust, but it starts us on that journey. So it's really fundamental that we consider the five principles, both in the implementation of AI and in the regulation of AI.

Nathan Lovett (CAA):

Thanks, James. So how can people find out more about this? You just mentioned the principles there, where can people go to find more detail on those and the work that you're doing?

James Bell (CAA):

So we published a document in February this year called Building Trust in AI, and it introduces the five principles for AI and autonomy. That's CAP 2970, which is available on our website. It really introduces the concept of trust and how fundamental it is to the success of introducing AI in the aviation sector. It introduces those five principles and gives some good examples in a bit more detail, but it then also applies those five principles to four illustrative case studies, just to really tease out some of their potential application. So, thinking about the CAA as a regulator for aviation safety, security and consumer protection, it drills down to say, okay, what if we were to apply, let's say, the safety, security and robustness principle to a detect and avoid system for a drone, or to an automated air traffic management system? So there are a couple of examples there which are specific to the application of AI by the aviation sector. But if we're going to start looking at potentially requiring organizations to prove how they are meeting these five principles when they're introducing AI, and if we're using AI ourselves as a regulator to perhaps carry out some regulatory functions, then we should really be applying those principles to ourselves as well. The document also describes where we got these principles from and what their significance is in terms of our overall approach to regulating AI into the future.

Nathan Lovett (CAA):

Great, thank you. And what's coming up next for your work in terms of milestones?

James Bell (CAA):

So we've just published the strategy, and you can find that on the CAA's website. If you go to caa.co.uk/innovation, there's a section there on artificial intelligence, and that's going to be our central hub for information going forward. So, for any information or documentation, head straight there. Towards the end of the year, what we're looking to achieve, off the back of the strategy being published, which really sets that north star, as Ed described, is to determine what we need to do as an organization to set ourselves up to start delivering on it. So by the end of the year I'm hoping to have the business planning in place so that the CAA is in a position, in subsequent years, to really invest the resources we need to start identifying the regulatory challenges in more detail, engaging with the sector to make sure that we're heading in the right direction and prioritizing the right areas, and providing a bit more thought leadership going forward.

Nathan Lovett (CAA):

Thanks again to Florian Ostmann at the Turing Institute and Vicki Murdie from UKRI for speaking with us, and of course to James Bell and Ed Clay from the UK CAA. You'll find links in the episode notes to the resources that James mentioned, including the UK CAA Strategy for Artificial Intelligence. That's it for this episode, but if you have any questions about AI or suggestions for things that you'd like us to cover in future episodes, please get in touch by emailing onair@caa.co.uk. Thanks for listening.

Voiceover:

Thanks for listening. This is CAA On Air.

About the Podcast

CAA On Air
On Air brings you updates from the UK CAA's Innovation team, which works with organisations to support new aviation-related products and services.

You'll hear about the latest work in areas such as advanced air mobility, UAS traffic management (UTM) and operating beyond visual line of sight (BVLOS), along with interviews with the experts involved in these projects.