
About the episode

The Business of AI Episode 1: AI and machine learning are already reshaping the way we live and work. How can leaders use them ethically?

Guests:

  • Lamont Tang, Director of Industry Projects, AGSM @ UNSW Business School
  • Professor Toby Walsh, UNSW Scientia Professor of Artificial Intelligence, Chief Scientist for the UNSW AI Institute
  • Stela Solar, Director, National Artificial Intelligence Centre, CSIRO's Data61

Find out more about Lamont Tang and Professor Toby Walsh's work below:

  • Visit the Business AI Lab
  • Visit UNSW Founders
  • Visit the National AI Centre
Narration:

Chances are you've heard about the concept of ethical artificial intelligence. But should we first be thinking about ethical human intelligence?

AI and machine learning are already reshaping the way we live and work. But while these disruptive technologies can be taught to take the best of human thinking and amplify it, they can also do the same with our worst biases.

And when prejudice on such a large scale is a real possibility, business leaders can't afford to simply 'move fast and break things', teaching the machines to behave badly along the way.

    Getting AI right is a scary prospect, but an exciting one too. How can leaders use AI in innovative and positive ways, recognise the inherent biases, and minimise harm?

In this episode, guest host Lamont Tang, Director of Industry Projects and Entrepreneur-in-Residence at AGSM @ UNSW Business School, is joined by Stela Solar, Director of the National Artificial Intelligence Centre at CSIRO's Data61. He's also joined by UNSW's Toby Walsh, Scientia Professor of Artificial Intelligence.

    They discuss how organisations can work to get the best results from AI, while avoiding the worst ethical pitfalls.

    [START]

    ----

    Lamont Tang:

Thank you for joining us today for this episode of the AGSM Business of AI podcast, where we'll explore some of the intersections between the business of AI, innovation and ethics. My name is Dr. Lamont Tang, and I'm on the faculty of UNSW, where I serve as the Director of Industry Projects at the Australian Graduate School of Management at UNSW Business School. In this episode, we'll be discussing the business of AI and some of the natural tensions coming to the forefront between the increasing pace of innovation in the AI space and the guardrails and ethics of AI as we build towards a better future. We'll look at some of the latest trends in AI and the steps you can take as a leader to ensure you're deploying AI in an ethical manner.

Today, I'm honoured to have two amazing guests. First, we have Scientia Professor Toby Walsh.

Toby is one of the world's leading experts in artificial intelligence, and while his research in AI is both broad and deep, he's particularly known for advancing responsible and safe AI. We're also joined by Stela Solar, a proud graduate of UNSW and currently the Director of CSIRO's National Artificial Intelligence Centre. Prior to this, Stela was at Microsoft for over a decade in a variety of roles, most recently as Global Director of Artificial Intelligence Solution Sales and Strategy. So with that, let's start with each of you telling us a little bit about yourself and your role.

    Stela Solar:

Sure, and it's great to be here. Hello, everyone. As Lamont mentioned, I lead the National AI Centre, and our mission is to activate positive AI adoption and innovation that benefits business and community. What's really important for us is helping more Australian SMEs get started with AI; we're hearing, by and large, that SMEs really need that base-level first step: what is the value of AI, which use cases are relevant for them, and how to get started. We also want to help SMEs connect into the broader AI ecosystem across Australia, and our ecosystem discoverability portal is one way to do that. The third focus is uplifting the practice of AI across Australia, really focusing on responsible AI. So that's what we're doing: helping organisations get started, get connected, and then uplift their practice.

    Lamont Tang:

    Thank you, Stela. Toby, over to you.

    Toby Walsh:

Sure. Well, it's wonderful to be back with AGSM. I have the honour of being the Chief Scientist at UNSW's AI Institute. This is a newly formed institute that brings together about 300 academics across all of the faculties at UNSW, and about 50 different research groups with an interest in researching and deploying AI. The primary objectives of the institute are to develop AI in a responsible way and to amplify all of the AI activity going on in the university. What was exciting when we set up the institute was to discover that there's actually far more AI going on outside the Computer Science School than inside it. AI has left the laboratory: it's in every faculty, it's in the graduate school, it's in Materials Science, it's in Medicine, of course, it's in the Art School, it's in the Law School. It's everywhere now, and we are trying to provide a front door to it. So one of our other important aims is to be the place people come to when someone in an external organisation, be that business or government, has a problem and thinks AI might be part of the solution. Where in a big university like UNSW do they go? The AI Institute is that front door.

    Lamont Tang:

Thank you, Toby, and thank you, Stela. So why don't we start off by broadly introducing to the audience: what is AI? Where do you find value? And particularly for SMEs and the layperson, what are some specific case studies you might be able to share to help them understand where the value is?

    Stela Solar:

Maybe I can get started with a perspective, which is that AI is just as much theoretical, creative and philosophical as it is technological. My perspective comes from a more industry-centric angle, which is about what AI can do today. That is generally the field called narrow AI, the only thing possible today: AI can be trained to do a specific thing, and that specific ability can then be used in processes and systems to help create some kind of experience, outcome or efficiency. I see a lot of benefits that AI could bring, and maybe we can talk about them at different altitudes. For individuals, there are many benefits; one of the ones I'm most excited about is that AI could help with accessibility of services, benefits and experts.

You might have heard of CSIRO's seven megatrends that will shape our world over the coming decade. I think about one of them in particular, the escalating health imperative: there are physically not enough medical experts to provide quality care for every person around the world who needs it. And I'm incredibly excited when I see outcomes such as AI being able to detect anomalies in X-ray scans more accurately than a person might. Similarly, I'm very excited by some of the more experimental use cases and the promise they hold. We heard about the AI Lawyer, and while the AI Lawyer hasn't gone ahead, what is intriguing about that promise is that this expert skill could become more accessible, so that those in a different socioeconomic situation can still get the benefits of some of this expert knowledge.

That's where I see some of the benefits for individuals. For companies, it has already been found that the more organisations use AI, the more benefits they experience, and those benefits include higher customer satisfaction, faster decision making, more innovative products and services and so on. So what we realise is that it was never about the AI itself; it was about the business outcomes it creates, and the benefit AI can bring to organisations is competitive advantage in the market. If an organisation is able to provide superior customer service or more innovative products and services, that will put it ahead of others. And then there's one final lens, the holistic lens of how AI can benefit our society and our planet. My hope is that AI can help us become better for the planet. In particular, I see AI helping us do things a little differently, more efficiently.

For example, I just saw a use case in the agricultural sector where computer vision on agricultural machinery is able to do more precise pesticide treatments, rather than spraying the entire field with much greater consequences. I'm also fascinated by some of the designs that generative AI comes up with, designs that look strange to us but that the AI has optimised in ways that create efficiencies we just could not have thought of before. So I really hope that AI helps us do better for the planet and for our society.

    Lamont Tang:

    Thank you, Stela. And Toby, do you have any comments or thoughts on this subject?

    Toby Walsh:

Sure. Well, Stela has given a fantastic introduction, and I'll try to ground it in something very concrete by saying that AI is there to do the four Ds: the dirty, the dull, the difficult or the dangerous. Dirty: things like working with Sydney Water to try and find where the pipes are leaking. Dull: doing lots of repetitive things, which is where RPA, robotic process automation, comes in; if you're doing something that looks pretty repetitive, you can get a machine to do it for you. Difficult: I'll mention something from my own work, where we've been optimising supply chains.

We look at companies that have difficult logistics problems and work out what we would call travelling salesman problems: where to route the trucks most efficiently, saving them not just lots of money but lots of fuel and ultimately lots of CO2, because most of the saving is in carbon they never burn. And then dangerous: I mentioned colleagues at Data61 are building a mine-clearing robot. A perfect job for a robot; you give it some smarts so it can go in and identify and defuse mines.
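To make the routing example concrete, here is a toy sketch of the greedy nearest-neighbour heuristic often used as a baseline for travelling salesman problems of the kind Toby describes. The stops and coordinates are invented for illustration; this is not the solver his team uses, and production systems rely on far more sophisticated optimisation.

```python
# Toy nearest-neighbour heuristic for a truck-routing (travelling
# salesman) problem. Stops and coordinates are invented.
import math

stops = {"depot": (0, 0), "A": (2, 3), "B": (5, 1), "C": (6, 4), "D": (1, 6)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_route(start="depot"):
    """Greedy route: always drive to the closest unvisited stop."""
    unvisited = set(stops) - {start}
    route, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda s: dist(stops[current], stops[s]))
        route.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    route.append(start)  # return to the depot
    total = sum(dist(stops[a], stops[b]) for a, b in zip(route, route[1:]))
    return route, total

route, total = nearest_neighbour_route()
print(route, round(total, 2))
```

Even this crude heuristic illustrates where the fuel and CO2 savings come from: shorter total distance means carbon that is never burned.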

    Lamont Tang:

Thank you, Toby. Some of the broader themes we want to talk about today are the ethics of AI. You've given some of the benefits and some of the broad use cases. From an industry perspective, Stela, where do you see some of the pitfalls starting to surface? And then maybe Toby, you can go a little deeper into the systems you're building for safe AI.

    Stela Solar:

Yeah. And I will share just a couple, because there are quite a number of areas to explore here. One of the things that's important to connect is data and AI. AI systems are really reliant on the data they're trained on, so one of the major areas to mindfully navigate is the data that AI systems are built upon. What is the quality of the data? Are there biases latent in the data? Is there under-representation of certain groups in the data? This is an area to navigate very mindfully, where we need to do a lot better before we can have complete trust in the robustness of data sets. So data is one. Another one, connected to this data notion: AI models are built on data, and data by default is historical, but our history has not been equitable or fair. So straight away, if the models are built on this non-equitable data, we risk propagating biases into the future.

And so one of the ways we can actually intercept the propagation of biases is by ensuring there are diverse teams co-designing and developing both the usage and the technology of AI. I see diversity not only as a thing we should do because everyone should thrive in this world and have opportunities; it's also a fundamental business strategy to intercept biases and stay out of the headlines, because with diverse teams there are more eyes looking out for the risks and the opportunities, and more ability to navigate that ambiguous space. So those are the two: data, and the need for diverse teams to navigate some of those risks of propagating bias.
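Stela's data-audit point can be made concrete in a few lines. The sketch below, on an invented toy table with assumed column names ("gender", "approved"), checks the two questions she raises: is a group under-represented in the training data, and do historical outcomes already differ by group?

```python
# Minimal data audit sketch: representation and historical outcome
# rates by group. The table and column names are invented.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "M", "F", "M", "M", "M", "F"],
    "approved": [0,    1,   1,   0,   1,   0,   1,   1],
})

# Representation: is any group a small share of the training data?
print(df["gender"].value_counts(normalize=True))

# Historical outcomes: do approval rates already differ by group?
print(df.groupby("gender")["approved"].mean())
```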

    Toby Walsh:

Diversity is a fantastic thing to focus on, and it's worth pointing out that we really struggle with diversity in the field. Despite all the effort that gets put into improving diversity, we still only have about one in five people in the field who are female, and many minority groups, people of colour and other groups are poorly represented. So it's something you've really got to put a lot of effort into: if you're starting a project, put the effort in right at the beginning to find that diverse team. It's not easy, but it's worth the payback. The other thing I want to emphasise is that the technology is not magic; there's nothing really particularly special about it.

And the questions you ask are probably the questions you should have asked about any technology, whether it be 5G, mobile, the web or whatever it was; you should be asking exactly the same questions. AI does change things, because it may change the speed, scale or cost at which you can do things, and once you can do things at a greater scale, the harms may be greater as a consequence. But the fundamental questions about fairness, equity, privacy and so on are the questions that should have troubled you whenever you introduce a new technology.

    Lamont Tang:

Thank you, Toby. And along those lines, I believe in your book Machines Behaving Badly you discuss five things to keep in mind when introducing a new technology. Maybe you can talk a little more about that?

    Toby Walsh:

There's a fantastic essay by Neil Postman, who was a wonderful humanist, in which he gave five lessons about technological change. He gave it back in the '80s, and he wasn't actually talking about artificial intelligence; he was talking about computing, I think, because AI wasn't leaving the laboratory back then. He talked about the fact that we tend to idolise technology, and that there are very common lessons to be learned from technological change. One of the lessons is that you should always ask who the winners and losers are: there are always winners and there are always losers, and the benefits and the costs may not be equally shared. It's an essay you can find on the internet; there are lots of transcripts of it, and I strongly encourage our listeners to just type in 'Neil Postman, five lessons'. I'm sure it will be the first hit on whatever search engine you're using.

    Lamont Tang:

Thank you, Toby. And I'd love to delve deeper into the costs. I think there's an academic notion called affordances: with each technology there's a trade-off. What are some of the benefits you're seeing? What are some of the costs, or the trade-offs, that you think leaders should be considering at this moment in time?

    Stela Solar:

I truly believe AI is only as good as we lead it, and that's what we're seeing across industry: when organisations adopt a mindful leadership approach to the way they design systems and develop AI technology, better outcomes follow, outcomes that create futures with less of the bias represented in the data, or services that deliver more positive outcomes to customers and communities. So right now it's imperative for leaders to step into the role of leading and shaping how AI is used across their organisation.

Quite often, what we've seen is that AI technology is handed to the technology department to lead, or to research and development, but actually it is business leaders in particular who will want to lean into this. The accountability sits with the organisation, and business leaders have this tremendous ability to look end to end: at the infrastructure that's available, at the customer outcome and experience, at maintaining market reputation while also innovating and creating that competitive dynamic. I would say that's one of the really important factors.

    Lamont Tang:

What are some key trade-offs or frameworks that leaders can take away from this conversation on how they might think about making decisions for their organisations?

    Stela Solar:

Yeah. This is a really rich topic, and I can't point to one framework or one set of considerations, but how leaders do AI mindfully and responsibly is an area we globally need to unpack very, very rapidly as the AI wave builds. One of the ways we're approaching this at the National AI Centre is the Responsible AI Network we have launched, which demystifies what responsible AI means in five very tangible, concrete areas. One of them is the law, because all responsible AI starts with the law, which is very much ingrained in our social and commercial fabric.

The second is standards, and there is a wave of standards coming that will really help the commercial sector level up its AI practices. The third is principles: when to use ethics principles to navigate potentially unknown areas. The fourth is governance: exploring exactly what kinds of governance models are meaningful for organisations to adopt, because right now there's a vast variety of approaches. And the fifth is leadership: what kinds of questions and approaches leaders need to take on in order to lead AI mindfully. This area is ripe for exploration right now, and we hope the Responsible AI Network will help us get tangible answers faster.

    Toby Walsh:

I think it's worth adding that responsibility can actually be seen as a business advantage. I'm convinced there will be companies in the future that win more business because they have taken a more responsible stance than their competitors, and you already see this unfolding: there are certain tech companies that treat your privacy with much greater respect than others, and that is a business advantage. Your customers, I think, are increasingly going to vote with their feet and choose the businesses that behave more responsibly with technology than others.

    Stela Solar:

And what an area in which to embrace competition. Imagine organisations all around the world competing on who can do things more ethically, more responsibly, establishing greater trust. This is where we want competition: to develop the ways, methods and frameworks for doing responsible AI well.

    Toby Walsh:

And it's worth pointing out that humans, the competitors to the machines, are terrible decision makers. We are full of subconscious and conscious biases, and behavioural psychology is a catalogue of the ways we make irrational and poor decisions. So there is the possibility of making much more evidence-based, much more data-driven decisions with the aid of machines, which can look at data sets far larger than humans possibly can. There is the potential, ultimately, to make much better decisions than humans can make.

    Lamont Tang:

Thank you. And are there specific case studies you can point to? I know, for example, Stanford has done some research on bias in the outputs of AI algorithms. What are some standards that have been coming out, or some case studies you can speak to?

    Stela Solar:

I have two examples that I think would be really meaningful. One of them I'll talk about generally, so we don't reveal any sensitivities. There is a large financial and professional services organisation that provides a loan-approval solution in the financial sector. It was built off legacy data, and that data contributed to the algorithm that would decide whether someone gets a loan or not. This large global organisation decided to look into the data set informing the loan-approval model and identify whether there were biases influencing the outcomes, so they leveraged one of the toolkits available in the market today, the open-source Fairlearn. They used it to examine the data set informing the model, and they did find a bias that was disadvantaging women in the loan-approval process.

And then this same toolkit was also able to counter that bias; I think they reduced it from a 7% bias to a 0.3% bias, so it's much more equitable in terms of loan approvals. This kind of deliberate attention from leadership to how the system is designed, what data it uses and what outcomes it creates is incredibly needed at this point in time, and there are already tools out there to help organisations find those biases in the data.
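For readers who want to try this pattern themselves, here is a minimal sketch using the open-source Fairlearn toolkit Stela names. The data is entirely synthetic and the numbers will not match the 7% and 0.3% figures from the real engagement; it simply shows the measure-then-mitigate workflow she describes.

```python
# Measure-then-mitigate sketch with Fairlearn. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import demographic_parity_difference
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # synthetic applicant features
gender = rng.choice(["F", "M"], size=500)
# Synthetic "legacy" approval labels with a built-in group disadvantage
y = ((X[:, 0] + 0.8 * (gender == "M") + rng.normal(size=500)) > 0.5).astype(int)

# 1. Measure the bias of a model trained on the legacy data
model = LogisticRegression().fit(X, y)
print("bias before:", demographic_parity_difference(
    y, model.predict(X), sensitive_features=gender))

# 2. Mitigate: retrain under a demographic-parity constraint
mitigated = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigated.fit(X, y, sensitive_features=gender)
print("bias after:", demographic_parity_difference(
    y, mitigated.predict(X), sensitive_features=gender))
```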

Another example, with a slightly different approach: HIVE is a project running at the Royal Perth Hospital. It provides remote patient care, so that expert care and attention can be given to more patients, rather than potentially not having enough doctors where the patient is and not enough attention on their health.

So what they actually implemented was an advisory process with the community that the service serves: a community advisory board convenes to shape how the HIVE technology is used. What was even more impressive is how they connected the convening of the community advisory board to their machine learning lifecycle, which helped shape it in a more inclusive, fair and transparent way, countering some of those biases and adding transparency. To make it very tangible, here's one example. Data drift basically means that the data informing the model has changed and is suggesting something else; whenever there was data drift, that would be a trigger to convene the community advisory board. That's impressive, and that's how one organisation approached this.
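The drift-triggered escalation Stela describes might look something like the sketch below: a two-sample statistical test compares live data against the training-time reference and, rather than silently retraining, flags that the advisory board should convene. The test, threshold and data are illustrative assumptions, not details of the HIVE project.

```python
# Illustrative drift trigger: detect a distribution shift and escalate
# to a human review body instead of silently retraining.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference: np.ndarray, live: np.ndarray,
                   alpha: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test between reference and live data."""
    stat, p_value = ks_2samp(reference, live)
    return p_value < alpha  # True = distributions differ significantly

reference = np.random.default_rng(1).normal(0.0, 1.0, 5000)  # training data
live = np.random.default_rng(2).normal(0.6, 1.0, 5000)       # shifted live data

if drift_detected(reference, live):
    # The HIVE-style response: pause and convene the community
    # advisory board before the model's behaviour is allowed to change.
    print("Data drift detected: convene the community advisory board.")
```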

    Toby Walsh:

These examples, I think, illustrate that these go beyond simple mathematical questions; they hide much deeper, fundamental societal and philosophical questions about equity. The only truly fair way to decide whether someone gets a loan is to toss a coin. Let me illustrate with an example. Suppose you are a company, and you are receiving so many CVs every day that you can't possibly read all the CVs of people applying for jobs.

So of course, you decide to give this to some machine learning: get it to read all the CVs. You can train it on data, and of course you've got historical data to train it on, but you've got to be very careful, because you know that historical data is biased; it's sexist. You've tended to hire more men than women. You've tended to promote more men than women. So looking at the ranks of the people employed and promoted in your organisation will teach the algorithm to be sexist, and so you are very careful not to propagate that sexism from the past into the decisions you make in the future. You don't include gender as one of the inputs, so the system can't directly learn those historical biases.

But very quickly you discover that the system is very good at picking up other correlations, and so it does: something like playing hockey, which, perhaps, more girls do than boys, suddenly gets picked up, and the system learns a proxy for gender. And so, despite all your best efforts, after several years of effort and millions of dollars invested, you decide the system is irretrievably sexist and you throw it away. These are very difficult questions. Ultimately, you're only going to employ a small number of people; you're not going to employ everyone who applies. You want to employ the best people, but what does 'best' mean? That's not an easy mathematical problem. It's a question that goes to the heart of what an equitable society is. We've decided, rightly so, that it shouldn't be based upon gender, that gender should not have any say in it.

Should it be based, then, perhaps, on your school grades? Well, they're some measure of your ability to do the job, but we know that school grades are also socioeconomic: people who went to better schools, who come from wealthier backgrounds, may have gone to private schools and got better grades, so we're only reflecting the socioeconomic biases within our society. We're never going to know for certain who is the best person for the job, so we have to decide on criteria that align with the standards of our society: not based upon gender, not, perhaps, based upon age, reflecting the sorts of criteria we decide are acceptable from which to choose that small subset of people. And that is a bias. Indeed, the old-fashioned name for what machine learning does is inductive bias: you're predicting on new data that you haven't seen, and that's what we call, in mathematics, an inductive bias.
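Toby's proxy argument is easy to reproduce on synthetic data. In the sketch below, gender is deliberately excluded from the model's inputs, yet an invented correlated feature (playing hockey, as in his example) lets the model rediscover and perpetuate the historical bias; every variable and number here is made up for illustration.

```python
# Toy reconstruction of the proxy problem: gender is excluded from the
# inputs, but a correlated feature lets the model rediscover it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.choice([0, 1], size=n)            # 1 = female (hidden from model)
# Invented proxy: hockey is more common among women in this toy world
plays_hockey = (rng.random(n) < np.where(gender == 1, 0.7, 0.2)).astype(float)
skill = rng.normal(size=n)

# Biased historical hiring: skill matters, but women were hired less often
hired = ((skill - 0.9 * gender + rng.normal(scale=0.5, size=n)) > 0).astype(int)

# Train WITHOUT gender as an input
X = np.column_stack([skill, plays_hockey])
model = LogisticRegression().fit(X, hired)

# The model still scores women lower, via the hockey proxy
scores = model.predict_proba(X)[:, 1]
print("mean score, men:  ", scores[gender == 0].mean())
print("mean score, women:", scores[gender == 1].mean())
```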

    Stela Solar:

    It almost brings the question forward of, is it really responsible AI that we're talking about here? Or is it responsible human? Because so much of it is dependent-

    Toby Walsh:

    Yes.

    Stela Solar:

    ... on what we choose, what we decide, what values, what decisions we put in the system.

    Toby Walsh:

It is. But what managers listening to this programme have to realise is that when you hand these decisions to machines, or at least take advice from machines, the machines do things exactly as you tell them to. So they put these questions on steroids; they make them very precise. What are the biases you are prepared to put up with? What are the biases you're trying to eliminate? You have to be very precise in programming the machine, because it will do only and exactly what you tell it to do.

    Lamont Tang:

Yeah, this is a fascinating topic. Just to build on this from a practical point of view: for leaders in charge of data-driven organisations, should they take a Hippocratic Oath? Who's responsible for the education of the people in these organisations, whether it's the engineers, the designers, everybody? And what does that look like when you implement it at scale?

    Stela Solar:

I've seen the Hippocratic Oath discussion already. A lot of folks believe this should be done; there's even a draft of an AI Hippocratic Oath out there that folks could read. And we already see these kinds of precedents, obviously in the medical profession, but also in various societies: the Australian Computer Society, for example, has a code of conduct for its members. What I believe is needed is more tangible, practical examples of how to actually implement the thing you have taken an oath to do, and that's where there is a gap right now: in the how. How do you implement things responsibly? What is the checklist people go through? What are the questions to ask? That is where the gap is. I do believe most folks have a positive intention in using AI, and unfortunately, most of the challenges with AI are inadvertent.

They're unintended consequences that people arrive at while designing AI systems, because we have not had applied, prescriptive guidance for doing so. So I think a lot more work is needed on the practical side: how to apply AI in a responsible, ethical way. The Hippocratic Oath is a good symbolic gesture that makes us think first of responsible AI, but there's still this gap of how to actually do it, and that's where the standards that are coming will, I think, be helpful. We also need a lot more work across industry, research and academia on developing approaches to applying AI responsibly.

    Lamont Tang:

    Toby, do you have anything you'd like to add?

    Toby Walsh:

Yeah, Stela put her finger, I think, on two really important things. One is unintended consequences: so often people had good intent, but things work out in unexpected ways. And one of the real challenges is that we're talking about technologies you can field very rapidly. We've never previously had technologies where you could touch a billion people in a matter of weeks. Now, if you are a company like TikTok or Facebook and you roll something out, it's soon in the hands of billions of people, so even small, very minor effects can have very large consequences, and we have to be very aware of that. And then there are the knock-on effects. Take, for example, autonomous cars.

We're going to have self-driving cars very soon. The obvious consequence is easy to anticipate: we're going to spend less time driving, which I think is fantastic because I hate driving; I could spend more time doing things that are useful. The unintended consequences are the ones we have to start thinking about. The car insurance business is, perhaps, going to take a bit of a battering, I suspect, because cars aren't going to have all those accidents; it's human drivers that cause them. There are lots of unintended consequences, lots of ramifications of these technologies, that we have to think through carefully and that will often surprise us.

    Stela Solar:

And there's one example in particular that just came to mind, which is the checklist approach. I'm not sure how many folks listening might be aware of The Checklist Manifesto, which talks about the use of checklists in surgical procedures and how much improvement that created in reducing error rates, infections and so on. That same approach is needed with AI. So when an organisation goes down the AI path, it's important to create structured approaches to AI across the organisation: some kind of vetting and standardisation of the platform you choose for your organisation, a packaging up of low-code and no-code tools to enable business users to leverage AI, as well as fully custom tools. Having a structured, centrally governed approach to AI goes part of the way towards the checklist approach that leads to better AI outcomes.
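As one hypothetical illustration of Stela's checklist approach, a deployment gate can refuse to ship a model until every governance item is signed off. The items below are invented examples, not a standard or the National AI Centre's guidance.

```python
# Hypothetical "checklist manifesto" deployment gate. Items are
# invented examples of governance sign-offs, not a standard.
from dataclasses import dataclass, field

@dataclass
class DeploymentChecklist:
    items: dict = field(default_factory=lambda: {
        "training data audited for bias": False,
        "fairness metrics within agreed thresholds": False,
        "model owner and accountable executive named": False,
        "drift monitoring and escalation path configured": False,
        "affected-community consultation completed": False,
    })

    def sign_off(self, item: str) -> None:
        self.items[item] = True

    def ready_to_deploy(self) -> bool:
        return all(self.items.values())

checklist = DeploymentChecklist()
checklist.sign_off("training data audited for bias")
if not checklist.ready_to_deploy():
    pending = [k for k, done in checklist.items.items() if not done]
    print("Deployment blocked. Outstanding items:", pending)
```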

    Lamont Tang:

We've covered a lot of ground here. I'd like to wrap up with one or two last thoughts you might have. I know, Stela, you've got a great mantra. Maybe you can tell us a little bit about it?

    Stela Solar:

Yeah: 'steer it, don't fear it'. I think all of us will want to be more informed about AI and to try new AI technologies, such as through the great wave of engagement that chatbots have brought about. Obviously, they're not an incredibly mature technology yet, but they have really sparked our imagination of what could happen with AI. So I would encourage everyone to really engage with AI technologies today, and to engage in the conversation about how AI systems are designed across your organisations, because we are empowered to shape how this technology evolves and how we decide to use it. AI is not some being that's just landed on this planet. It's a tool that we are deciding to use, build and shape in a meaningful way for us, so please lean into AI technology, learn what it is, and contribute to how it is designed and used across your organisation so that it can create value for business and for community.

    Toby Walsh:

I want to concur with those fine messages: technology is not destiny. Technology shapes society, and it shapes it in quite profound ways; AI seems to be one of those technologies that will have a very profound impact on our society. But equally, society gets to shape technology. We get to choose where we introduce technology into our lives and where we don't, the how and the how not. These are really important conversations we should all be having. Unfortunately, in the past it has too often been white men in rooms in Silicon Valley, like myself, making many of the decisions. Since this technology is going to touch all of our lives, all of us should be involved in the conversations, so there is a lot to talk about, and I encourage everyone to join those conversations.

    Lamont Tang:

Thank you so very much. Maybe the last thing that would be useful for our audience: how do you both keep abreast of all the fast-paced innovation happening in AI? What are some books or resources you can point our audience to?

    Stela Solar:

I follow Toby. He shares some fantastic articles on Twitter and LinkedIn. The space is moving a lot, so I personally have alerts set up for every time something is published with 'AI' or 'artificial intelligence', because there is just such high volume and it's very decentralised; it's coming from formal, peer-reviewed publications as well as industry publications. So, for me, I've set up an alert on the hashtag.

    Lamont Tang:

    And how about you, Toby?

    Toby Walsh:

Well, I'm unashamedly going to give a plug for my new book, Machines Behaving Badly. We've only touched on some of the topics today; there's a very broad spectrum of things you should be thinking about, and my book, perhaps, provides a broader introduction to those topics. But as Stela says, the field is incredibly fast-moving, and there are stories of interesting new developments breaking every day. I have to say, I've worked in the field for 40 years now, ever since I was a young boy reading too much science fiction, and this is the most exciting time to be in it. We are making really significant progress, and the technology is, as I said at the start, leaving the laboratory and being picked up and used by so many different businesses, so many different parts of government, and so many different people in their everyday lives. So it's a really exciting time to see how the technology is changing our lives.

    Lamont Tang:

Thank you, Stela and Toby. We've discussed many of the ways AI is transforming our industries and businesses, and the importance of responsible and ethical AI. Let's look towards building a better future together.

    Stela Solar:

    Lovely to be here. Thanks for having me.

    Toby Walsh:

    It's been a great pleasure.

    Lamont Tang:

    You can connect with Stela and Toby on LinkedIn or Twitter.

And you can follow the National AI Centre on LinkedIn for the latest updates from Australia's AI ecosystem programs and events.


    Narration:

Thank you for joining us for AGSM's The Business of AI: AI and Ethics.

    Want to learn more about our research and work in the area? Check out the show notes on our website to get more information on the research discussed on today’s podcast. Just search the-business-of-podcast online.

    Or, drop us a line if you have feedback at brand@agsm.edu.au

    New to the podcast? There’s a whole catalogue for you to explore. From mental health and AI to the use of AI in finance and banking, you can check them all out today.

Please share, rate, review and subscribe to AGSM's leadership podcast on your favourite podcast platform, and look out for future episodes.

In the meantime, follow AGSM at UNSW Business School on LinkedIn and Facebook for more industry insights for an accelerating world.
