
About the episode

Artificial intelligence becomes more powerful by the day – which means the dangers that come with it are growing too. If you're using AI tools in your business, you need to know how to use them safely and responsibly.

So where do you start? Dr Catriona Wallace, a world-leading AI expert and Adjunct Professor at AGSM, shares a framework for using AI ethically, and explains why that responsibility falls on the shoulders of leaders and not just tech teams.

Stela Solar, Director of CSIRO's National Artificial Intelligence Centre, thinks leaders using AI 'the right way' are already experiencing a competitive advantage. That includes Dimitry Tran, who owns three healthcare businesses powered by AI.

In this episode of The Business Of you'll learn how to use AI to get ahead without compromising on safety. If you want to dive deeper into using AI for business, listen to previous episodes of The Business Of.

Want to know more?

For the latest news and research from UNSW Business School and AGSM @ UNSW Business School, subscribe to our industry stories and follow us on LinkedIn.

    Stela Solar 00:07

    The more organisations use AI, the more benefits they experience, and the kind of benefits include higher customer satisfaction, faster decision making, more innovative products and services and so on. My hope is that AI can help us become better for the planet.

    Dr Juliet Bourke 00:27

    Stela Solar from CSIRO's National Artificial Intelligence Centre says AI has incredible potential for business leaders. But are we ready for it?

    Catriona Wallace 00:37

    My experience is that the higher we go up in an organisation, the less they know about this, but it's definitely something that should sit at the board and executive team level.

    Dr Juliet Bourke 00:47

    That's Dr Catriona Wallace... she's an AI and metaverse specialist, and an adjunct professor at AGSM. She's worried that even though businesses are keen to implement AI as quickly as possible, the people in charge don't quite understand the technology.

    Catriona Wallace 01:03

    In speaking to some of the engineers, they tell us that often the responsibility for doing ethical AI is pushed way down to them, and that they are required to know how to code ethically, or to make sure that the data sets have no bias. And they don't believe that senior management really has any idea about this.

    Dr Juliet Bourke

    Catriona says it's a dangerous situation...

    Catriona Wallace

    ...to be delegating your ethics and responsibility to your engineers, who are very well intended, I'm sure, but who are also under huge pressure to be finishing code, to be shipping product, to be doing things efficiently, to be working under their agile planning frameworks, and so on. It may not be the place where they have the time to think about how they will do it ethically.

    Dr Juliet Bourke 01:54

    This is The Business Of, a podcast from the Business School at the University of New South Wales. I'm Dr Juliet Bourke, a Professor of Practice in the School of Management and Governance.

    AI has massive potential in the business world, but that potential comes with risk – and I mean serious, existential risk. So before we learn how to operationalise AI, we need to learn how to use it ethically.

    Catriona Wallace 02:23

    So, I believe, and I think most of the big AI thinkers in the world share the same view, that there will be a very light side to AI, but it will have an equal dark side. And the dark side is largely because this type of technology is very difficult to understand, to explain, and also to control. But there's also a much bigger risk that is playing out at the moment. There's been a very, very good book by Toby Ord, an Australian who is at Oxford University, called The Precipice, and it talks about existential risk. In this book, Toby Ord identifies around six core existential risks, an existential risk being: will something destroy humanity, kill everyone by the end of the century, or severely reduce humanity's potential? The existential risks are nuclear war, climate change, an asteroid colliding with the earth, pandemic, bio-engineered disease, and artificial intelligence.

    Now, the first five have a risk factor, according to the academics in this field, of about a one in a thousand to a one in a hundred thousand chance that any of them, including climate change, will destroy humanity by the end of the century. Artificial intelligence, however, is not a one in a thousand chance. It is a one in six chance that AI will cause, or go near to causing, the destruction of humanity by the end of the century. So, for me, there's a bigger call here. We absolutely need to start regulating and monitoring this technology, because it's not just that our businesses are at risk, or that not getting a credit card is at risk; there are far greater stakes. AI is now regarded as one of the most serious threats to humanity unless it is controlled. And then, where's the leadership? It's not coming from the tech giants.

    It comes from the business schools, organisations such as mine. It comes from your students. It comes from business leaders who need to step into this ethical leadership, start to learn about this, and understand both the benefits and the risks that this technology brings.

    Dr Juliet Bourke 04:41

    So the risks are real. And the responsibility to mitigate them falls on the shoulders of leaders like you. But where do you even start? How do you actually use AI responsibly?

    Catriona Wallace 04:53

    There are good guidelines available now for enterprises or tech developers to start to look at the core principles of doing AI ethically and responsibly. The purpose of this framework is to help organisations avoid unintended harms. So the first principle is that AI must be built with humans, society and the environment in mind; it must not come at a cost to those three groups. The second principle is that AI must be built with human-centred values in mind. Third, the AI must be fair; it must not discriminate. Fourth, the AI must be reliable and safe. Fifth, it must adhere to privacy and security requirements.

    Dr Juliet Bourke 05:43

    Now, the next principle is about contestability. Catriona says someone must be able to challenge a decision AI has made against a person or a group. She uses the example of the Apple credit card launched in 2019, which in part used algorithms to determine an applicant's credit limit. The problem: the algorithm was much more generous with credit for its male applicants. It caught the attention of Apple co-founder and millionaire Steve Wozniak and his wife, who, despite the couple having joint bank accounts, was issued far less credit.

    Catriona Wallace 06:20

    She's got 10 times less credit than her husband, and she's pretty annoyed. So, Mrs. Wozniak goes, "Hey, I'm really unhappy about that. I'm going to contest this because I think I've been unfairly or unjustly treated." So, contestability. Enterprises must have a contestability path for, say, consumers in this case who have been unfairly treated.

    Now, if you think about this, that's one Mrs. Wozniak, and that application went global. Scale that up to all of the other women in this case who felt unfairly treated, turning up to Goldman Sachs and Apple saying, "Right, we need to contest this. We're unfairly treated. What is the process?" The organisation is going to have to handle that. So, that's contestability, number six.

    Then it gets more tricky. Number seven is that the AI must be transparent and explainable. It's hard for the programmers to do that. Traditional AI is what we call black box AI, which is essentially unexplainable AI, and what we're looking for in the future is organisations building white box AI. So, you can take the lid off, look in, and actually see how the algorithms are working. And so, transparency.

    And then explainability. So, not only do they have to show it, the company would have to be able to explain to Mrs. Wozniak what happened: this is how it made its decisions. Now, again, anyone who knows anything about machine learning knows that that's enormously difficult, because as these machines learn and adapt with each task, they take a bit of a path of their own, and sometimes it's enormously difficult for an organisation's data scientists to explain what their algorithm has done.

    And then the last one is accountability. So, if that organisation has caused some damage or unfairness to Mrs. Wozniak, then they need to be accountable for that, and also the vendor who provided the technology that did the harm needs to be accountable, and likely there needs to be some reparation.
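    To make the white box idea concrete, here is a toy sketch in Python of a credit-limit model simple enough to lift the lid on: every input's contribution to the decision can be read off directly, which is what a contestability path like Mrs. Wozniak's would need. The features, weights and dollar figures are invented for illustration and bear no relation to the actual Apple Card algorithm.

        # Toy "white box" credit-limit model: the decision is a transparent sum of
        # per-feature contributions, so it can be shown and explained to a customer.
        # All features, weights and dollar figures are made up for illustration.

        WEIGHTS = {
            "income": 0.004,                   # dollars of limit per dollar of income
            "years_of_credit_history": 120.0,  # dollars of limit per year of history
            "missed_payments": -800.0,         # penalty per missed payment
        }
        BASE_LIMIT = 1000.0

        def credit_limit(applicant: dict) -> tuple:
            """Return the limit and a per-feature breakdown of how it was reached."""
            contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
            return BASE_LIMIT + sum(contributions.values()), contributions

        limit, explanation = credit_limit(
            {"income": 90_000, "years_of_credit_history": 12, "missed_payments": 1}
        )
        print(f"Credit limit: ${limit:,.0f}")
        for feature, amount in explanation.items():  # the part a customer could contest
            print(f"  {feature}: {amount:+,.0f}")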

    Dr Juliet Bourke 08:26

    If AI poses an existential threat, as Catriona Wallace suggests, and it's difficult to use responsibly, why are so many businesses jumping on board with such enthusiasm?

    Stela Solar 08:37

    For companies, it has already been found that the more organisations use AI, the more benefits they experience, and those benefits include higher customer satisfaction, faster decision making, more innovative products and services, and so on. So suddenly what we realised is that it was never about the AI itself. It was about the business outcomes it creates for organisations, and the benefit that AI can bring to organisations is competitive advantage in the market.

    Dr Juliet Bourke 09:09

    Stela Solar from CSIRO says she's seen businesses streak ahead of the competition when using AI the right way. She hopes we'll see the same outcomes on a global scale.

    Stela Solar 09:21

    And then there's also one final lens, which is the holistic lens of how AI can really benefit our society, our planet. My hope is that AI can help us become better for the planet. In particular, I see AI helping us do things a little bit differently, more efficiently.

    For example, I just saw a use case in the agricultural sector where computer vision on agricultural machinery is able to do more precise pesticide treatments, rather than having to spray the entire field with much greater consequences. It has optimised the design in a way that creates efficiencies we just could not have thought of before. And so I really hope that AI helps us do better for the planet and for our society.

    You know, I truly believe AI is only as good as we lead it, and that's what we're seeing across industry: when organisations adopt a mindful leadership approach to the way they design systems and develop AI technology, better outcomes are had – outcomes that create futures with less of the bias represented in the data. So right now it's imperative for leaders to actually step into the role of leading and shaping how AI is used across their organisation.

    Dr Juliet Bourke 10:46

    Operationalising AI requires leaders to continually stop and ask themselves, is this ethical? And what are the implications for our business, our customers and clients?

    Stela Solar 10:57

    What I believe is needed is more tangible, practical examples of how to actually implement this, and that's where there is a gap right now: it's in the how. How do you implement things responsibly? What is the checklist that people go through? What are the questions to ask? That is where the gap is. I do believe most folks have a positive intention in using AI, and unfortunately, most of the challenges with AI are inadvertent.

    And there's one example in particular that just came to mind, which is the checklist approach: the use of checklists in surgical procedures, and how much improvement that created in terms of reducing error rates and infections and so on. So when an organisation goes down this AI path, it is important to create structured approaches to AI across the organisation. Some kind of vetting and standardisation of the platform you decide on for your organisation, and a centralised governance approach to AI, go part of the way to creating this checklist approach that's going to lead to better AI outcomes.
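    To show what that checklist approach could look like in practice, here is a minimal Python sketch of a central register where every proposed AI use case answers the same vetting questions before approval. The questions, class and field names are illustrative assumptions, not a published standard.

        # Minimal sketch of a centralised AI governance gate: one register, one
        # checklist, applied to every proposed use case. Questions are illustrative.

        from dataclasses import dataclass, field

        CHECKLIST = [
            "Documented use case with an accountable owner?",
            "Platform on the organisation's vetted, standardised list?",
            "Training data reviewed for bias?",
            "Contestability path for people affected by decisions?",
            "Plan to monitor performance after deployment?",
        ]

        @dataclass
        class UseCase:
            name: str
            answers: dict = field(default_factory=dict)  # question -> True/False

            def gaps(self) -> list:
                """Return the checklist items not yet satisfied."""
                return [q for q in CHECKLIST if not self.answers.get(q, False)]

        # Example: a pilot that has answered everything except monitoring.
        case = UseCase("radiology triage pilot", {q: True for q in CHECKLIST[:4]})
        print(case.name, "approved" if not case.gaps() else f"blocked: {case.gaps()}")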

    Dr Juliet Bourke 12:11

    Stela says the key for business is to...

    Stela Solar 12:15

    ...steer it, don't fear it. And so I would encourage everyone to really engage with AI technologies today, and engage in a conversation about how AI systems might be designed across your organisations. Because we are empowered to shape how this technology evolves, and to shape how we decide to use it. AI is not something that's just landed on this planet. It's a tool that we are deciding to use and build and shape in a meaningful way for us. So please really lean into the AI technology, learn what it is, and contribute to how it is designed and used across your organisation, so that it can create that value for business and for community.

    Dr Juliet Bourke 12:55

    AI will be one of the most powerful influences on our planet. And we need to make sure that that influence is a good one. As Catriona Wallace demonstrated, there are already advisory bodies compiling clear, practical, and actionable principles for ethical AI use. But what does ethical AI use look like… right now, in the real world?

    Dimitry Tran 13:18

    What we do is provide a co-pilot that can detect findings alongside the doctors. For example, signs of pneumonia on a chest X-ray, or signs of stroke on a CT brain scan, and that helps the clinician make a more accurate diagnosis in a timelier manner.

    Dr Juliet Bourke 13:33

    This is Dimitry Tran. He runs three healthcare technology companies that use AI. In a previous episode of The Business Of, Dimitry shared how AI tools are improving workflows for healthcare professionals today.

    Dimitry Tran 13:45

    I was recently talking with a clinician in Australia, a radiologist, and she told me that every day she starts her day staring down a list of 500 cases that has been backlogged since yesterday and overnight. She said she loses the will to work, because she knows she can try very hard and bring it down to 200, but tomorrow she'll start her day again with 500. Each of those cases is someone's mother, someone's father, someone's loved one who requires the best of care, yet our resources are so stretched that we have people dealing with hundreds of cases every day. I think that is where AI plays such an important role.

    Our AI can process those 500 cases in a few seconds, and then we allow the clinician to sort their worklist like an Excel worksheet – to sort and say, "Which cases first?" Maybe case 399 contains a critical finding: the stroke patient that needs immediate care in the next minute.
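    As a rough illustration of that worklist triage, here is a short Python sketch: the model assigns each study an urgency score, and the list is re-sorted so likely-critical cases surface first. The case records, field names and scores are invented for illustration, not output from a real system.

        # Sketch of AI-assisted worklist triage: sort pending studies so the
        # likely-critical ones rise to the top. All data below is invented.

        worklist = [
            {"case_id": 398, "study": "chest x-ray", "ai_finding": "pneumonia", "urgency": 0.62},
            {"case_id": 399, "study": "CT brain",    "ai_finding": "stroke",    "urgency": 0.97},
            {"case_id": 400, "study": "chest x-ray", "ai_finding": None,        "urgency": 0.05},
        ]

        # Like sorting a spreadsheet column: most urgent case first.
        for case in sorted(worklist, key=lambda c: c["urgency"], reverse=True):
            print(case["case_id"], case["study"], case["ai_finding"], case["urgency"])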

    Dr Juliet Bourke 14:46

    In an industry like healthcare, there are regulatory boards that assess the technology to make sure it's fit for the sensitive work it's doing. But even once it gets past approvals, the technology, like all technology, keeps evolving.

    Dimitry Tran 15:00

    I think when it comes to clinical care, the first hurdle to pass is regulatory. And the regulators are very sophisticated around AI. They have amazing data scientists, they have, you know, a panel of PhDs, so they ask very deep questions. Recently, we've actually seen a lot of AI that passes regulatory approval, gets into users' hands, and then stops performing; it has what we call performance drift over time, because the users use it in a different way from the population the data was trained on. So I think one of the things we're still coming to grips with globally, as the healthcare industry, is how to monitor AI performance in real time. This is not a one-off check where you're safe and, from that point on, you can sell to whoever and wherever you want. The AI technology needs to be monitored in real time. That is the feedback we need, to go back and collect more data to keep training the AI for the better. So this continuous learning, I think, is a key element of any AI business model.
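    To make the idea of real-time performance monitoring concrete, here is a minimal Python sketch: it compares the model's flag rate on a rolling window of live cases against a baseline measured during validation, and raises an alert when the two diverge. The baseline rate, tolerance and window size are illustrative assumptions, not values from any regulator or vendor.

        # Sketch of post-deployment drift monitoring: alert when the live flag rate
        # drifts away from the validated baseline. Thresholds are illustrative.

        from collections import deque

        BASELINE_FLAG_RATE = 0.12  # fraction of studies flagged during validation
        TOLERANCE = 0.05           # acceptable drift before a human investigates

        class DriftMonitor:
            def __init__(self, window: int = 500):
                self.recent = deque(maxlen=window)  # rolling window of live predictions

            def record(self, flagged: bool) -> None:
                self.recent.append(flagged)

            def drifted(self) -> bool:
                if len(self.recent) < self.recent.maxlen:
                    return False  # not enough live data yet
                live_rate = sum(self.recent) / len(self.recent)
                return abs(live_rate - BASELINE_FLAG_RATE) > TOLERANCE

        monitor = DriftMonitor()
        for flagged in [True] * 120 + [False] * 380:  # one simulated day of live cases
            monitor.record(flagged)
        if monitor.drifted():
            print("Performance drift suspected: review and retrain before continuing.")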

    Dr Juliet Bourke 15:55

    And then it's about moving that continuous learning into the C-suite.

    Dimitry Tran 15:59

    So the whole executive team does not need to become AI experts. We can all learn a little bit about AI, but I think having someone with deep enough knowledge of how to build AI at the decision table is important. AI has been around long enough that there are people who have been through one or two deployments, or developments, of AI and can bring great insight to a discussion, because it's so easy to generalise AI into "Oh, it's going to change everything" or "Oh, it's going to be so risky." Having someone with that experience around the table, I think, would be very helpful in shaping the conversation, the strategy, or any decision the organisation is going to make in this field.

    Dr Juliet Bourke 16:36

    AI will be the defining technology of the next era in business – but if we're not responsible, the consequences could be catastrophic. AI tools are already helping leaders achieve incredible outcomes, but that success will be short-lived – and ultimately for nothing – if we're not careful.

    You can hear more from Catriona, Stela and Dimitry on previous episodes of The Business Of… scroll back through your podcast app if you want to dive even deeper into the world of leading through the AI revolution. The Business Of podcast is brought to you by the University of New South Wales Business School, produced with Deadset Studios. To stay up to date with our latest podcasts, as well as the latest insights and thought leadership from the Business School, subscribe to BusinessThink at businessthink.unsw.edu.au.
