
Engineering the Future of Artificial Intelligence

About the episode

AI-powered programs like ChatGPT can do everything from planning an itinerary for your next holiday to helping you write a resume or business proposal. But as with all great technologies, there are risks and challenges.

AI expert and Chief Scientist at UNSW's AI Institute, Professor Toby Walsh, and Director of the National Artificial Intelligence Centre hosted by CSIRO's Data61, Stela Solar, join STEM journalist, Neil Martin, to take a deep dive into the impact of AI on society in areas including healthcare, transport, education, and defence.


Stela Solar

Stela Solar is the Director of Australia’s National AI Centre hosted by CSIRO, with the mission to accelerate positive AI adoption and innovation that benefits business and community.

She has previously held roles at Microsoft including Global Director of AI Solution Sales, as well as roles across IT distributors and technology startups. Over the past 15 years, Stela has led business model transformations which enable organisations to capture new revenue opportunities and build competitive advantage with emerging technologies such as Cloud, IoT and AI. With a background in sustainability and creative industries, Stela thrives in ambiguity, change and purpose-led organisations.

Outside of the technology arena, Stela enjoys the challenge of competitive sailing and tries to find time to play cello and piano.


Professor Toby Walsh

Toby Walsh is an ARC Laureate Fellow and Scientia Professor of AI at UNSW and CSIRO Data61. He is Chief Scientist of UNSW.AI, UNSW's new AI Institute. He is a strong advocate for limits to ensure AI is used to improve our lives, having spoken at the UN, and to heads of state, parliamentary bodies, company boards and many others on this topic. This advocacy has led to him being "banned indefinitely" from Russia. He is a Fellow of the Australian Academy of Science, and was named on the international "Who's Who in AI" list of influencers. He has written three books on AI for a general audience, the most recent being "Machines Behaving Badly: The Morality of AI".

    Voiceover:

    Welcome to the UNSW Engineering the Future podcast, a series where we'll speak to academics and industry leaders who are embracing cutting-edge ideas and pushing the boundaries of what is truly possible. In this episode, we'll take a deep dive into exciting developments regarding artificial intelligence, and discuss what impacts we can expect on society as a whole over the next two decades. We'll hear from leading experts in the field, Professor Toby Walsh and Stela Solar, as they explain why we should embrace and not fear AI replacing humans when it comes to doing jobs that are dull, dirty, dangerous or difficult. And they will also reveal how the use of artificial intelligence in the field of medicine is likely to help us enjoy longer, healthier and happier lives. So join us as we discover how world-changing action starts with fearless thinking in Engineering the Future of AI.

    Neil Martin:

    Hello and welcome to Engineering the Future of AI. My name is Neil Martin, and I'm a journalist and STEM communicator working in the Faculty of Engineering at UNSW. Joining me today to discuss the amazing impact we can expect from the ever-increasing utilisation of artificial intelligence across society is Professor Toby Walsh, Chief Scientist at UNSW's AI Institute, who has previously been named on the international "Who's Who in AI" list of influencers. Toby is a strong advocate for limits to ensure AI is used to improve our lives and not cause harm, especially with regards to its use in warfare, which led Russia to recently ban him indefinitely from entering the country. Welcome, Toby.

    Toby Walsh:

    Thank you, Neil.

    Neil Martin:

    Also with us is Stela Solar, who is the director of the National Artificial Intelligence Centre hosted by CSIRO's Data61. She has more than 15 years' experience in the field, and her role is to accelerate positive AI adoption and innovation that benefits both businesses and communities. Hello, Stela.

    Stela Solar:

    Hi, great to be here.

    Neil Martin:

    So from my point of view, artificial intelligence has really burst into the public consciousness over the last 12 to 18 months, although I believe it has been recognised as a field of research since as early as the 1950s. Obviously, the recent release and subsequent impact of ChatGPT has really sparked a lot of discussion about the way AI will potentially change education and workplaces and healthcare and transport and warfare and so on.

    Toby, I know you wrote a book around five years ago, I believe, titled ‘2062: The World that AI Made’, which projected about 40 years into the future to the date when machines could potentially be as intelligent as humans. Given how fast AI has grown in such a short period of time, are you tempted to think that might happen much sooner, and do you think AI is going to be even more impactful than you imagined?

    Toby Walsh:

    I'd certainly be tempted to say it might come a bit quicker. I should say that 2062 came from a survey of 300 of my colleagues, other experts around the world, on when machines would fully match humans in all of their abilities; that was the average answer they gave, but there was huge variability in their answers. It's something that's going to happen almost certainly in the lifetime of our children and, for most of us, perhaps if we're young enough, in our lifetimes. And so that is going to be pretty profound.

    There's still a lot we've got to do. ChatGPT, for all its splendour and wonder, is still remarkably stupid. I actually have an example where it fails to count up to two properly. There's lots of things that it fails at spectacularly. Equally, there's lots of things that it does that still amaze us. It can write computer code. It can write sonnets in the style of Shakespeare. It's amazing what it can do, but equally, it's also amazing how stupid and dumb it can be at times. So there's still a lot we've got to do.

    But what's interesting, and I think why it's exciting today, is that even if we stopped making any more technical progress in the field and just rolled out what we can do with the current technology to more of the businesses that could use it, there's still a huge amount that we could do. It's going to still have a huge impact on our lives. So it is going to be transformational. I think it is rightly being called the next industrial revolution. It is going to be as groundbreaking as the invention of the steam engine, as transformational as the electrification of our lives. It's hard to imagine a part of our lives it's not going to touch.

    Neil Martin:

    Have you been slightly surprised, Stela, by how it's really burst into public consciousness over the last 12 months?

    Stela Solar:

    Even six months. I think it's been spectacular just how many more people around the world have been able to get their hands on some of the generative AI tools to see what could be done, and I think they're seeing some real positive impacts and some real questions that we need to address. Obviously, generative AI is just one small area of AI, but what I find really intriguing is that in the same way AI technology is evolving and taking shape, so too is our society. We are evolving. We're asking ourselves what kind of skills and what kind of experiences we want to be developing amongst ourselves. What do we want to bring to our students, our children today? What kind of capabilities do we really want to nurture so that they're able to thrive in these environments? And so there is a holistic evolution across this entire ecosystem that includes technological evolution just as much as societal evolution.

    Toby Walsh:

    Stela's answer actually hinted at something I think is really important to understanding why it is having such an impact on our lives, which is the speed of adoption. It was no surprise that ChatGPT was the fastest-growing app ever: in the hands of a million people after the first week, a hundred million people at the end of the first month. And now, because it's been incorporated in Bing and in Snapchat, it's in the hands of over a billion people. We've never had technology before that could be put into the hands of so many people so quickly.

    Stela Solar:

    And it really speaks to the user interface, the ease with which users can engage with AI technology. That's one of the fundamental things that I've seen accelerate this current wave.

    Neil Martin:

    And I guess that also applies not just to the public but to businesses and industries and educators and healthcare providers. So we might try to cover some of those topics and discuss how these tools are potentially going to be used and what we might expect 20 years down the track. I think you've spoken about AI being utilised for all the D's - anything that's dull, dirty, dangerous or difficult. If I'm thinking of jobs, isn't that most jobs?

    Toby Walsh:

    Sadly, maybe you're correct, and hopefully we can change that. If someone says to me, "Oh, have you seen this latest example of some use of AI to automate some, typically not a whole job, but some aspect of someone's job?" I normally say, "Well, wait a second. We should celebrate that fact. We should be celebrating that humans no longer have to do those dull, repetitive things anymore." Indeed, we should never have got humans to do those in the first place. And now that humans are liberated, they can focus on the things that are actually rewarding to humans, the things that bring us value and pleasure.

    Stela Solar:

    And to expand on the dull, dirty, dangerous - the D structure - what's been fascinating to observe recently is how AI has this potential to support us with our own translation of ideas and thoughts into the real world. You can conceptualise the most out-there image you can imagine, put it into an AI tool, and potentially get that image generated within a couple of seconds. And this isn't just happening in these creative, fun use cases. We're seeing it start to be adopted in engineering and design, where you could put in a request for the kind of design you would like to see and the AI tool could create that output. And so it's almost enabling us to bring our thoughts and ideas to life much faster, potentially being a great tool for prototyping and rapid development.

    Toby Walsh:

    I hope one day that artificial intelligence stops being called artificial intelligence and AI becomes augmented intelligence. It's a tool that, like all the other tools we've invented, can actually extend what we can do. I may not have very good draughtsmanship, but now I've got a tool that allows me to draw much better than I ever could. I'm not very good at music, but now I've got tools that allow me to compose music.

    Neil Martin:

    Do you think the debate around people losing their jobs from AI is a valid one?

    Toby Walsh:

    I think we have to have this important conversation. There will for sure be some people whose jobs get replaced or automated, but lots of jobs will also be created. We don't know whether more jobs will be created than destroyed. The latest survey by the World Economic Forum suggests that 85 million jobs are going to be destroyed by artificial intelligence and 97 million created, so that would be a net positive. But it's hard to know. There are so many uncertainties.

    I think what is a hundred percent certain is that whatever new jobs get created will typically require different skills from the jobs that got destroyed. And so at the very least, we're going to have to think carefully about how we support people through that transition so that they have appropriate skills for those new jobs, and how we educate young people today with the right skills for those new jobs.

    Stela Solar:

    And the workforce, the job requirements, the roles - they are rapidly transforming, and that rate of change is potentially faster than it has been before, so very deliberate effort into continuous education and enabling different career paths becomes much more important. It's not something new for our society to consider. I recently found out that there used to be human alarm clocks - people who would walk down streets in cities and towns, throwing little pebbles at windows to make sure that people would wake up on time. And obviously, we still have the need to wake up on time. It's just that we're approaching it differently. There isn't necessarily a person throwing rocks at our windows anymore.

    And so there is a reshaping of our roles. There are new roles that will come forward. We don't fully know what those roles may be, but this continuous learning becomes critical, and proactively enabling people today to take advantage of these new opportunities is important. CSIRO did some research and found that 161,000 new AI professionals are needed in Australia alone by 2030. So those are new roles that we really need to fill, and we currently don't have a big enough pool of skilled individuals to fill them.

    Neil Martin:

    You mentioned learning and having to adapt there and, interestingly, AI can itself play a role in improving education. I've been doing some research, and there seems to be some talk about AI being used in personalised education, personalised learning. How do you see a classroom of children in 20 years' time being changed through the use of AI in schools, for example?

    Toby Walsh:

    I think on the surface, it's going to look actually quite similar to today. There's still going to be a teacher at the front of the class because teaching, and this is true for so many other jobs, is all about the personal interactions, the social intelligence, the emotional intelligence, understanding the child in front of you and what they need. And machines don't have emotional and social intelligence, and so there are large parts of teaching that are not going to be replaced. But equally, there's lots of things that teachers do that we could use machines to do better. I've yet to meet a teacher who likes marking, but these tools are actually pretty good at marking and giving people feedback. They're very good at creating questions. They can offer what we can't do at scale, which is what we need to do: offer personalisation.

    Already, ChatGPT is actually a very good personal tutor. You can sit there and you can ask it questions. You can say, "Tell me how I decline these verbs in German or explain to me the basics of Python programming." And it doesn't matter how silly or stupid or repetitive your questions are. It will sit there and patiently answer them. So it actually makes a really good tool to help the teacher do what they normally do.

    Stela Solar:

    And I really think that this customisation opportunity, where AI could help tailor the method of teaching to individual students, could be a great opportunity. There is a lot of neurodivergence that is often difficult to address in a classroom with many students all being taught by the same teacher, and so this potentially enables us to teach in new ways.

    I think one of the golden prompts I keep hearing from everyone is ‘explain X to me like a five-year-old’. Everyone is asking questions and wants to learn, but really needs almost a personalised tutor or approach to help them comprehend some of these concepts. And I could really see students leveraging that. So it could help students learn topics in creative and different ways that suit their differences.

    I also think, in terms of learning, the adult education space is also transforming. We could talk about education generally for students in this context, but the continual learning our technological world demands means that we're always going to need to keep up with technological advancements or gain new skills. And so if we think about AI augmenting our abilities, AI could be that partner technology that helps us learn new things, bring new experiences and knowledge into what we're doing, and level up our own understanding.

    Neil Martin:

    So in general, would you see entire populations becoming more intelligent, I guess for want of a better word, because of these tools?

    Stela Solar:

    I think it becomes important to actually embed critical thinking into our technological education today. I don't know if it means that the entire population becomes more intelligent, but knowledge potentially becomes more accessible in different ways. Being able to discern through that knowledge and use it in different ways means developing critical thinking in our education system alongside the technology topics.

    Most recently, at the Day of AI, which is a full-day, hands-on, creative experience for students to learn about what AI is and how they can use it and also create it, what we delivered to 85,000 students around Australia was a set of critical-thinking questions and exercises for students to ask as they're engaging with AI systems - so that they don't just blindly rely on what the AI systems might be saying but can challenge it: who has designed this? What is their motivation? And so that critical thinking becomes increasingly important as access to potential intelligence also increases.

    Toby Walsh:

    It's not a given that the technology is going to make us smarter; that depends on us using it well. There are also futures in which we dumb ourselves down. Possibly, we're the last generation that knows how to subtract because we still learn how to do that ourselves; now we outsource that to machines. I suspect no one remembers telephone numbers anymore. We outsource that to our phones. I suspect we may be the last generation that knows how to read a map, and we're outsourcing that to our devices. And so we could, if we're not careful, use the machines as crutches. It is important that we actually realise we've got to make choices to ensure that we don't dumb ourselves down.

    Neil Martin:

    You mentioned maps there and that leads me onto another interesting area of AI with regards to transport and driverless cars and automated transport. If we project 20 years into the future, how likely is it do you think that we will have nobody driving a vehicle at all?

    Toby Walsh:

    I think it's going to be like horse riding. If you go back 100, 200 years, we used to get around by riding on horses. And very quickly after we invented the automobile, we stopped riding horses. Horses didn't disappear completely. Horse riding is still a hobby of the rich and wealthy. And similarly, human-driven automobiles are not going to disappear completely. There'll still be petrol heads who love them, but they'll be a hobby for the rich who are willing to pay for the extra insurance premiums. But for most of us, we're going to be sitting in fully autonomous cars, and that's going to be a great benefit.

    Stela Solar:

    And in Sydney, there's already over 40 years of machine learning use in infrastructure and transport, in terms of scheduling and traffic and so forth. I particularly see that in highly urban environments with that infrastructure, there will be faster automation and embracing of autonomous vehicles. I think there will be a geographical factor, where there might be some locations where autonomous vehicles may not be as effective as a human driver. There might be certain terrains and so forth where it may not be economically feasible to implement autonomous vehicles.

    I think it's also interesting, Neil, the way you presented the question - not a single person driving and everything fully autonomous. I wouldn't ignore social preferences, individual preferences, hobbies. There are already clubs out there enjoying classic cars, and I imagine in the future it might still be something of interest or a hobby that people keep up, as well as there potentially being areas where it's not economically feasible to roll out automation.

    Neil Martin:

    But in terms of the technical side of things, at the moment, it seems there are still some quite big hurdles to overcome, and we see tests which don't quite work, and people get injured or accidents happen. But you would be relatively confident that those will be overcome?

    Stela Solar:

    It's actually fascinating to look at low-tech attacks on autonomous vehicles. I think we're yet to see how it shapes up in the future. And I imagine that through various pilots and testing environments, these probably are likely to be overcome as the autonomous vehicles become more intelligent to understand the world around them.

    Neil Martin:

    Is it quite hard, Toby, to teach a car to drive by itself in a very open environment?

    Toby Walsh:

    Well, there are two significant challenges. One is humans. It's actually much easier if we just had autonomous cars and there weren't any humans, because humans are unpredictable; autonomous cars can just speak to each other, vehicle to vehicle. So that's the first challenge: really, managing the transition until we get to mostly, or almost all, autonomous cars.

    And then the second is the edge cases. Indeed, today you can already go out and buy yourself a top-end Mercedes or a Tesla and, down the highway, it will drive completely reliably and autonomously. It's the urban environment, where you've got pedestrians and cyclists and all of the strange things that happen, that you have to deal with. And so we're going to see it rolled out, I suspect, first of all on the high-speed lane of the motorway, and maybe it's going to be in congestion free charging zones where you can calm the traffic and design the environment, as Stela was hinting at, to make it easy for autonomous cars.

    And it's going to bring great benefits. We forget that there are large parts of our community who are denied mobility. If you're very young or very old or you have a disability, you don't have the mobility that you or I, as able-bodied adults, take for granted, and that will be given to those people - and there'll be great economic benefits too.

    So, for example, one problem for Australia is the curse of distance. Everywhere is a long way from everywhere else. We've got trucks driving around, but trucks are going to be one of the first things to be fully automated because there are going to be great benefits. If you've got an autonomous truck, it can drive 24/7, so you've already got two or three times the utility out of that expensive vehicle.

    Neil Martin:

    Stela, is there also a social aspect to the change in transport and mobility with regards to maybe where people live, where they choose to live?

    Stela Solar:

    There will be certain urban environments which may be faster to adopt fully autonomous vehicles, just due to the infrastructure or the socioeconomic profile, but that does mean that some communities may not see them adopted as fast. In other places, the energy availability or the landscape may just not be as enabling for autonomous vehicles to really add value to the community. So, due to the nature of how technology is rolled out, there are instantly gaps in experiences, and that means we may not be enabling everyone with equity and diversity.

    Toby Walsh:

    I suspect it might be like subtraction. People are going to stop learning to drive and indeed, they already are. If you actually look at the statistics, the average age at which a person passes their driving test has been going up significantly. It used to be that as soon as you were 16 or 17, or whatever the driving age was in your state, you would learn to drive because that would give you mobility. Now it's in the 20s and 30s in most countries, and it's creeping up. People are discovering that owning a car is expensive. There are lots of car shares, and there'll be autonomous car shares coming along soon.

    Stela Solar:

    And I think what it also brings in is the question of energy availability to sustain this up-levelling of intelligence across our transport system. It also raises the question: if you have these intelligent transportation vehicles, does that also increase cybersecurity threats? And so there are increased considerations, both from a security standpoint and from an energy standpoint, that may actually create some of those resistance points in how quickly autonomy is adopted.

    Neil Martin:

    And all these things cross over and interweave with each other, and it's very difficult to extract one strand from the whole. Toby, you mentioned people's health there, which might bring us onto healthcare and AI. How do you see artificial intelligence potentially revolutionising healthcare and improving people's lives?

    Toby Walsh:

    I think this is one area where there is some of the greatest promise in the sense that it could help us live longer, healthier, and possibly even, if we think about our mental health, happier lives.

    Neil Martin:

    And can you explain exactly how?

    Toby Walsh:

    Yes, because medicine is still stuck in the 19th century. It's still amazing how primitive our medicine is. As a basic example, we still give men and women exactly the same diagnosis and the same medicine, despite the fact that we know their biology is completely different. But we now have the opportunity not just to properly treat male and female biology differently, but to completely personalise medicine. We can read DNA, and we can do that very cheaply. We can write DNA, so we can even go off and correct things that are wrong in your DNA - that's CRISPR technologies and the like.

    We've got this great informational problem, which is how do we understand the hundreds of thousands of SNPs, the genetic letters that make you, you? Artificial intelligence is the technology that's going to help us deal with all that information and work out how to personalise medicine, how to invent new medicines.

    As an example of the opportunity here, the latest antibiotic was discovered by a machine learning program. We're running out of antibiotics, and so we desperately need to discover new ones, and it's costing lots of money. It costs a couple of billion dollars to come up with a new drug these days, and so it's becoming a very expensive, long-winded, painful process. And so the possibility of AI helping us to do this is immense.

    So this new antibiotic has been discovered and patented by some researchers at MIT. It's in clinical trials as we speak and it's looking really promising. It works by disrupting the ability of the cell to access energy and therefore, the hope is that drug-resistant bacteria are not going to be resistant to this one, because it works in a completely different way from any antibiotic we've seen before.

    Stela Solar:

    And to echo something that Toby touched on right at the start of his comment, the difference in health treatments for men and for women: most recently, we saw headlines about the female experience of heart attacks, and that the symptoms that manifest are actually quite different from the male experience of heart attacks. So many women have not received the treatment that they've needed because the data around what typical symptoms look like is generally not representative of the female experience. And that's just a very binary, simplistic look at the complexity that AI could unlock. If you add to that the fact that we all have different family histories and DNA that might react to different medicines and to different potential illnesses and diseases in various ways, that level of complexity is fantastic ground for AI to unlock. AI ultimately is phenomenal at unlocking complexity, finding patterns in vast amounts of data that are beyond human comprehension, and the health space is primed for that.

    So I really see two areas: one, to echo Toby, is customisable care, but the other is availability of quality care. There are, firstly, not enough medical professionals to provide quality care for everyone around the world, but there are also a lot of specialisations within the medical profession. If you imagine a remote community where maybe there is one doctor or medical person providing services, AI models could augment that person with the patterns to detect specialised ailments, diseases and illnesses and provide care to that remote community. I really see availability of healthcare to more of our people around the world as one of the key values that AI could also provide.

    Neil Martin:

    I guess this area though is where you get some real pushback with regards to privacy of information versus the obvious benefits that could be provided. Do you think that's something that there needs to be a discussion about or some kind of increased legislation because people's health information is obviously quite personal, and they want to keep it private? Can you understand why people would be worried, Toby?

    Toby Walsh:

    Yeah, there's a bit of a boiling frog problem here. We're collecting more data, which is offering great opportunities. As an example, all of us are putting on smart watches that are collecting our heartbeats, now our blood pressure, and soon - this is going to be pretty amazing - we're going to be able to do real-time monitoring of your blood glucose levels non-invasively, so if you're diabetic or something, that matters.

    But also, and this goes to your privacy question, these have, until now, really been considered to be consumer devices, not medical devices. When you sign the terms and conditions for your Fitbit, Fitbit owns your heartbeat. You don't own your own heartbeat anymore. If Fitbit detects an abnormal rhythm in your heart, it's up to them whether they bother to tell you. That's not regulated the way that we regulate medical devices and yet, they are transforming in front of our eyes into medical devices.

    Neil Martin:

    So there needs to be some kind of change, I guess, in the way all of that is regulated?

    Toby Walsh:

    I'm not sure it actually needs to change. The health sector has already done a really good job at regulation. Drugs are very carefully monitored. I think we just have to apply those regulations to what have, until now, been thought of as consumer devices. This is the case with so many of the conversations I have about technology, digital technologies, artificial intelligence: there's actually existing law, but we have to realise it applies to AI as much as it applies to old-fashioned things.

    Neil Martin:

    I guess though, from an AI point of view, you want as much data as possible. That's where the benefit of the system comes from: crunching large amounts of information. So the more of that private information you can collect, the bigger the benefits. Is that correct?

    Toby Walsh:

    It is. That's the fundamental conundrum you have here. You will actually want to get data from everyone and anyone and bring that together. And then how do we do that in a way that is appropriate? I think first and foremost, you've got to make sure that the benefits flow back to the people who provided the data. As long as we feel that we're getting better healthcare back because we've shared our data, there's got to be value back to us.

    Stela Solar:

    And Neil, I think you're really alluding here to the frontier of where the technological innovation is and where the governance conversation is, because it is a really fine balance between ensuring that the data is protected while also being able to aggregate signals, create value and create those predictive models. And so there are technologies being built to address these challenges. I'm really hopeful about some of the privacy-protecting technologies that might inject some white noise into models to break down that identifiable chain, so that while there might be some sensitive data in there, it is not connected to any individuals.

    And so I think it's a call to everyone who is listening that this is actually the frontier of AI innovation and development: developing the tools that will ensure AI systems are governed in a safe, responsible way that protects our privacy while also creating the value that we want to create for our communities.

    Neil Martin:

    You talk about regulations there, Toby. I wanted to ask you about AI in warfare because I believe that's one of the things that you're really passionate about and the increased regulations that maybe need to be implemented. I mentioned at the top there that you had been banned from Russia. I believe that was to do with a comment that you made with regards to their use of anti-personnel mines that they claimed were using artificial intelligence. Can you just talk a little bit about the ways that you see AI possibly being used in warfare and why that might cause some problems?

    Toby Walsh:

    AI is transforming all aspects of our life, and so, not surprisingly, it's starting to transform warfare. I'm pretty sure historians of conflict will look back at the Ukrainian conflict we see happening today in the way we look back at the First World War, where the introduction of tanks and machine guns transformed how we fought, and the Second World War, where the introduction of long-range bombers and the nuclear weapons they carried transformed the way we could fight. We will look back and see how drones, autonomy and AI started to transform the way we fought war, and not in a good way.

    It's not that we're going to remove humans from the battlefield. It's a nice idea, but terribly naive, that we could just have our robots fight against your robots and it would be settled that way. Unfortunately, we don't fight those sorts of conflicts. In the asymmetric conflicts that we end up in, the other side is not going to sign up to rules about my robot against your robot. If it were that simple, we wouldn't even need to fight; we could decide it with a game of chess or a game of tiddlywinks.

    Neil Martin:

    What is it specifically about the AI implications of those weapons that troubles you?

    Toby Walsh:

    Well, there are a host of challenges: moral challenges, legal challenges and technical challenges. Technically, you're putting a robot into the worst possible environment - a contested, adversarial setting like a battlefield, where people are trying to fool you and you have no control over a lot of what is going on. So it's the worst possible place to put a robot.

    From a legal perspective, it violates International Humanitarian Law, and in particular various principles of IHL, things like distinction and proportionality. We can't build machines that can make those sorts of subtle distinctions. And there's a fundamental challenge: law is about holding people accountable. You'll notice I said the word people. Only people can be held accountable; you can't hold machines accountable.

    And then finally, from a moral perspective, and I think this is actually perhaps the most important and strongest argument, it will transform the way that we go about this terrible thing called warfare. It's terrible that we fight war, but it does seem somewhat inevitable, sadly. War is sanctioned because it's one person's life against another: the other person may show empathy to you, and there is some dignity between soldiers. Those features do not exist when you hand over the killing to machines that don't have empathy, don't have consciousness and can't be held accountable for their decisions.

    And so I'm very concerned that we will completely change the character of war if we hand over the killing to machines. I'm quite hopeful that, at some point, we'll decide that autonomous weapons should be added to the list of terrible ways to fight war, like chemical weapons and biological weapons. What worries me is that in most cases, we've only regulated various technologies for fighting after we've seen the horrors of them being used and misused in battle.

    Stela Solar:

    While I'm not an expert in AI in warfare, it definitely touches on responsible AI and the need for us to really understand that AI is a tool, and we decide how we use it. So whether that's for positive impact or for negative consequences, it is very much about human accountability for how we use the technology. AI is only as good as how we lead it, and that is why the area of responsible AI is so important right now.

    Similarly, there is a need for governance of AI systems that we're just discovering. AI systems generally are potentially more agile. They are continually updated, continually changing. And so we're just discovering what those governance models look like in order to ensure responsible use of AI tools and technologies. It's also one of the reasons why we've established the Responsible AI Network, to help more of Australia's industry take on some of those best practices for implementing AI responsibly.

    Neil Martin:

    I think that's a really good roundup. We're coming to the end of our time, so I just wanted to ask one final question of you both: if you were a 16 or 17-year-old right now, thinking of going into a career in AI, what would you be most excited about?

    Stela Solar:

    The thing that I would be most excited about is the opportunity to create and shape the world around us. Obviously, we're all already using many AI-powered services that we may not even think about. And I think for today's young people, they absolutely have the opportunity to also design and develop these services, so not only to be users but also creators of new technology. And especially as AI is still shaping up, taking form, we're discovering new things it can do and how we can develop it, this is a prime moment to lean in and steer how it is developed and how it's used rather than step away. And so I invite everyone to steer it, not fear it and be part of this creativity opportunity.

    Neil Martin:

    And Toby?

    Toby Walsh:

    I'm going to mirror those very same sentiments that this is a technology that is going to completely transform our lives in the next 50-odd years, and everything is to play for. And if you come and start working in the field today, you can actually help shape that very future, the very future of how we live, how we work, how we play, how we fight war. All those things are going to be completely transformed by this technology and completely transformed by the people who come and build it.

    Neil Martin:

    It definitely sounds very exciting. Professor Toby Walsh and Stela Solar, thank you very much for a great discussion about the future of AI. I think it's clear artificial intelligence really is going to be a huge influence on every area of our lives in the future, if it isn't already.

    Toby, thanks for being with us and thanks for your time.

    Toby Walsh:

    It's been a pleasure to talk to you, Neil.

    Neil Martin:

    And Stela, thanks for joining us.

    Stela Solar:

    This was a great chat. Thank you.

    Neil Martin:

    Thank you for listening. I've been Neil Martin, and I hope you join me again soon for the next episode in our Engineering the Future Series.

    Voiceover:

    You've been listening to the UNSW Engineering the Future podcast. Don't forget to subscribe to our series to stay updated on upcoming episodes. Check out our show notes for details of in-person events, panel discussions and more fascinating insights into the future of engineering.
