
Media contact

Neil Martin
n.martin@unsw.edu.au

Lethal autonomous weapons need to be added to the UN's Convention on Certain Conventional Weapons, the open-ended treaty regulating new forms of weaponry.

That is the view of Scientia Professor Toby Walsh, chief scientist at UNSW's AI Institute, in discussion as part of UNSW's 'Engineering the Future' podcast series.

The rules of war, widely accepted under the Geneva Convention that was first established in 1864, dictate what can and cannot be done during armed conflicts and aim to curb the most brutal aspects of war by setting limits on weapons and tactics that can be employed.


Chemical and biological weapons have been banned for use in conflict since 1925, following the horrors of the First World War, and Prof. Walsh says AI-powered autonomous weapons should now also be prohibited.

The UNSW academic is banned from Russia for questioning claims that an AI-powered anti-personnel landmine being developed there was more humanitarian.

In addition to his concerns about the morality of such weapons, Prof. Walsh says other autonomous weapons that are starting to be used in the Ukraine conflict should be banned.

"AI is transforming all aspects of our life and so, not surprisingly, it's starting to transform warfare. I'm pretty sure historians will look back at the Ukrainian conflict and say how drones and autonomy and AI started to transform the way we fought war, and not in a good way," he says.

"I'm very concerned that we will completely change the character of war if we hand over the killing to machines.

"From a legal perspective, it violates international humanitarian law, in particular various principles like distinction and proportionality. We can't build machines that can make those sorts of subtle distinctions.

"Law is about holding people accountable. But you notice I said the word 'people'. Only people are held accountable. You can't hold machines accountable."


Scientia Professor Toby Walsh, chief scientist at UNSW's Artificial Intelligence Institute. Image from UNSW

Prof. Walsh says that in the fog of war, the use of non-human-controlled weaponry is far from ideal.

"The battlefield is a contested, adversarial setting where people are trying to fool you and you have no control over a lot of things that are going on. So it's the worst possible place to put a robot," he says.

"And then the moral perspective is actually perhaps the most important and strongest argument against AI in warfare.

"War is sanctioned because it's one person's life against another. The fact that the other person may show empathy to you, that there is some dignity between soldiers: those features do not exist when you hand over the killing to machines that don't have empathy, don't have consciousness, can't be held accountable for their decisions.

"I'm quite hopeful that we will, at some point, decide that autonomous weapons be added to the list of terrible ways to fight war, like chemical weapons, like biological weapons. What worries me is that in most cases, we've only regulated various technologies for fighting after we've seen the horrors of them being misused in battle."

Responsible AI

Joining Prof. Walsh on the 'Engineering the Future of AI' podcast was Stela Solar, director of the National Artificial Intelligence Centre hosted by CSIRO's Data61, as they discussed the fascinating potential uses of AI in a wide variety of areas such as education, health and transportation.

Solar is involved in the Responsible AI Network, a world-first cross-ecosystem collaboration aimed at uplifting the practice of responsible AI across Australia's commercial sector.


Stela Solar, director of Australia's National AI Centre hosted by CSIRO. Image from Stela Solar

And she agrees it is important that the ever-increasing development of AI is done in the right way.

"There is a need for us to really understand that AI is a tool that we're deciding how we use. So whether that's for positive impact or for negative consequences, it is very much about the human accountability of how we use the technology," she says.

"AI is only as good as we lead it, and that is why the area of responsible AI is so important right now.

"There is a need for governance of AI systems that we're just discovering. AI systems generally are potentially more agile. They are continually updated, continually changing. And so we're just discovering what those governance models look like in order to ensure responsible use of AI tools and technologies.

"It's also one of the reasons why we've established the Responsible AI Network, to help more of Australia's industry take on some of those best practices for implementing AI responsibly."

* Professor Toby Walsh and Stela Solar were in conversation as part of the 'Engineering the Future' podcast series.