How will we identify people in need of protection?
Panel Session 2/Scenario 2
Will machines decide asylum claims? Will we still consider refugee applications on an individual case-by-case basis, and if not, what new approaches could we take? How can we respond effectively to mass movements and ensure fairness for people seeking asylum?
Speakers:
Cathryn Costello, Professor of Global Refugee & Migration Law, University College Dublin
Niamh Kinchin, Acting Dean of Law, University of Wollongong
Edward Santow, Director, Policy & Governance, Human Technology Institute, University of Technology Sydney
Shahyar Roushan, Senior Member, Administrative Appeals Tribunal, Migration & Refugee Division
Chair: Daniel Ghezelbash, Deputy Director, Kaldor Centre for International Refugee Law
-
Cathryn Costello
Full Professor of Law, Sutherland School of Law, University College Dublin
Cathryn Costello is Full Professor of Law at University College Dublin’s Sutherland School of Law. She was previously Professor of Fundamental Rights and Co-Director of the Centre for Fundamental Rights at the Hertie School, Berlin, and Andrew W Mellon Professor of International Refugee and Migration Law at the Refugee Studies Centre, University of Oxford. She is a visiting professor at the University of Oslo, and has held fellowships at the University of Melbourne and New York University. She is a leading scholar of international refugee and migration law, and has pioneered the study of the intersection of labour and migration law. She is co-editor with Michelle Foster and Jane McAdam of the Oxford Handbook of International Refugee Law (OUP 2021). She is Principal Investigator of the RefMig project, examining mobility, status and rights in the global refugee and migration regimes, and is the lead principal investigator of a Volkswagen European challenges project on automated decision-making in asylum and migration, AFAR.
Shahyar Roushan
Senior Member, Administrative Appeals Tribunal, Migration & Refugee Division
Shahyar Roushan is a lawyer with a background in human rights and administrative law. He was appointed as a Member of the former Refugee Review Tribunal in 2001 and subsequently as a Senior Member of the amalgamated Refugee and Migration Review Tribunals. He is currently a Senior Member and the National Practice Leader (Protection) in the Migration and Refugee Division of the Administrative Appeals Tribunal. He is also a Senior Member of the Guardianship Division of the NSW Civil and Administrative Tribunal. Before his tribunal appointments, he worked as a Government Lawyer in Canberra and a Lawyer with the South African Human Rights Commission in Cape Town. He is an Advisory Board Member of ANU’s Centre for Arab & Islamic Studies. He is multilingual and has a keen interest in sociolinguistics.
Niamh Kinchin
Acting Dean and Associate Professor, School of Law, University of Wollongong
Niamh Kinchin is an Associate Professor at the School of Law, University of Wollongong, where she has been Acting Dean since 2022. She teaches and coordinates refugee law, administrative law and constitutional law. She was admitted as a legal practitioner to the Supreme Court of New South Wales in 2002. She holds a Bachelor of Social Science from the University of Newcastle, a Bachelor of Laws (Hons Class 1) from Western Sydney University, a Master of Administrative Law and Policy from the University of Sydney and a PhD from UNSW. Her PhD examined the topic, ‘Accountability in the Global Space: Plurality, Complexity and United Nations High Commissioner for Refugees’. Her primary research interests are in refugee law, the impact of technology on forced displacement and migration, global accountability, administrative justice and public law within the international and Australian contexts. Her current and primary research focuses on the digital transformation of the Australian migration and refugee system through a law and justice lens. Other projects include the interpretation of the constitutions of international organizations and the evolution of Australian constitutional principles. She is the author of Administrative Justice in the UN: Procedural Protections, Gaps and Proposals for Reform (Elgar, 2018).
Edward Santow
Industry Professor and Director, Policy and Governance, Human Technology Institute, University of Technology, Sydney
Edward Santow is Industry Professor - Responsible Technology at the University of Technology Sydney, and the Director - Policy & Governance at the Human Technology Institute, which he co-founded and leads with Prof Nicholas Davis and Prof Sally Cripps. He leads a number of major initiatives to promote human-centred artificial intelligence, including in the areas of digital government, the future of AI regulation, and facial recognition and digital identity. His areas of expertise include human rights, technology and regulation, public law and discrimination law. From 2016-2021, he was Australia's Human Rights Commissioner, where he led the Australian Human Rights Commission's work on AI & new technology; refugees and migration; human rights issues affecting LGBTIQ+ people; national security; and implementing the Optional Protocol to the Convention Against Torture. He is a Fellow of the Australian Academy of Law, a member of the World Economic Forum’s Global Future Council on Human Rights and the Fourth Industrial Revolution, and of the NSW Government AI Review Committee. He is a patron of the Refugee Advice + Casework Service, and serves on a number of boards and committees. In 2009, he was presented with an Australian Leadership Award, and in 2017 he was recognised as a Young Global Leader by the World Economic Forum. He previously served as chief executive of the Public Interest Advocacy Centre and was a Senior Lecturer at UNSW Law School, a research director at the Gilbert + Tobin Centre of Public Law and a solicitor in private practice.
Chair: Daniel Ghezelbash
Deputy Director, Kaldor Centre for International Refugee Law
Daniel Ghezelbash is Associate Professor and Deputy Director of the Kaldor Centre for International Refugee Law and an Australian Research Council DECRA Fellow. He is the author of Refuge Lost: Asylum Law in an Interdependent World (Cambridge University Press, 2018). He has been a Visiting Fellow at the Refugee Studies Centre, Oxford University, and a Visiting Scholar at Harvard Law School, Brooklyn Law School and New York Law School. Daniel is passionate about using technology to increase access to justice and to counter systemic discrimination and bias in the legal system, and established the Kaldor Centre’s Data Lab. As a practising refugee lawyer, he is Special Counsel at the National Justice Project, and sits on the boards of a number of not-for-profit legal centres, including Refugee Advice and Casework Services and Wallumatta Legal. Daniel regularly features and publishes in domestic and international media outlets on refugee, migration, access to justice and legal technology issues. In 2021, he was selected for the ABC Top 5 Humanities Media Residency.
-
2023 Conference Panel 2 Scenario, 'How will we identify people in need of protection?'
Panel 2 discussed the following scenario:
It is 2033 and the factors driving people from their homes are becoming increasingly complex and interconnected. Persecution, armed conflict, the impacts of climate change and disasters, growing food insecurity and human rights violations intersect to create mixed movements of people in search of international protection, livelihood opportunities and a more dignified existence.
In this context, the task of distinguishing those eligible for international protection from those who are not has become increasingly challenging.
Across both the Global North and South, access to robust individualised refugee status determination (RSD) procedures is the exception rather than the rule, with governments adopting a variety of measures aimed at increasing ‘efficiency’ in the face of increased protection claims and extended backlogs.
This includes an increased reliance on group determination, where applicants from certain countries with specific characteristics are granted protection, while other groups are automatically denied.
Faster decision-making is also supported by widespread use of AI and other technologies. Machine learning algorithms are used to stream and allocate applications to different procedures based on an assessment of a case’s complexity and the likelihood of success. Biometrics are used to establish identity and AI tools analysing facial micro-expressions, body language, eye movements, and voice are employed to assess the credibility of an applicant. Additional data collected through digital forensics, such as social media analysis, are factored into the decision-making process. AI systems synthesise all collected data to provide a recommendation as to whether the applicant meets the legal criteria for protection. Final decisions, however, still involve human oversight.
Meanwhile, new actors have become involved in determining protection claims. Private corporations are involved in designing and implementing RSD procedures and assistive technologies. At the same time, there has been a significant increase in the number of states with functioning asylum systems. These are the result of successful efforts to incentivise states in the Global South to establish new national asylum systems, which have been supported through the transfer of resources and technology. This has reduced the role UNHCR plays in processing asylum claims, allowing it to hand over responsibility to national governments.
-
Kaldor Centre Conference 2023
Panel Session 2, 'How will we identify people in need of protection?'
DANIEL GHEZELBASH (DG): Welcome back to our second panel for the day, where we'll be examining what the future may hold in terms of how we recognise refugees. I'm Daniel Ghezelbash. I'm the Deputy Director of the Kaldor Centre, and it's my great privilege to moderate this discussion alongside this very distinguished panel.
Joining me today we have online on the screen from Berlin, in the very early hours of the morning, Cathryn Costello, who is a Professor of Law at University College Dublin and one of the world's pre-eminent scholars of international refugee law and migration law. And Cathryn’s leading not one but two big international research projects that are very closely related to the topic of the discussion today. The first one being RefMig, which looks at the different approaches around the world to how States recognise refugees, and the AFAR project, which is looking at algorithmic fairness for asylum seekers and refugees.
And on the stage, we have Shahyar Roushan, who is a senior member and the National Practice Leader on Protection in the Migration and Refugee Division of Australia's Administrative Appeals Tribunal and brings 20 years of practical experience doing the exact task that we'll be focusing on, which is recognising refugees, for the discussion today.
We also have Edward Santow, Industry Professor and Director of Policy and Governance at the Human Technology Institute, which he co-founded at UTS, and Australia's former Human Rights Commissioner. He [is] a long-time and tireless advocate for the rights of refugees, and he's also set the agenda when it comes to the regulation of AI and technology in Australia, both in his previous and current role.
And Niamh Kinchin, who is an acting Dean and Associate Professor at the University of Wollongong and one of the leading experts on the intersection of technology and migration, and in particular the role of technology and AI in refugee status determination procedures.
And you can find more detailed bios for all our speakers on the conference program.
Now let's turn to the video that will set up the scenario that's the provocation for today's discussion.
[VIDEO SCENARIO PLAYS]
DG: Before we unpack the various elements of the scenario, Shahyar, if I can hand over to you, maybe take a step back and provide some reflections on what values should guide the design of asylum systems.
SHAHYAR ROUSHAN (SR): Yeah, of course. The fundamental and the foremost value is fairness, and in the context of the work that I do, which is decision making, it's procedural fairness, and that's a set of rules and principles that have developed over time in law to ensure that decision-makers make decisions fairly.
That, of course, involves applicants having the right to present their case and challenge evidence, and more importantly, to be able to appeal adverse decisions. It also must involve debiased adjudication and for applicants to be able to present their case coherently in the sense that they understand the issues. And of course, this concept of fairness would involve many other elements that I won't get into, and I'll probably defer to my fellow panellists.
DG: Thanks so much, Shahyar. Cathryn, do you want to come in here, something to add on the values that should guide the design of asylum systems?
CATHRYN COSTELLO (CC): Sure. I mean, I think that's a beautiful summary of an understanding of fairness in this kind of adjudicatory setting. But I think there are other values that matter. In particular, you know, if we want to ensure that swift, accurate decisions are taken, which means actually avoiding appeals, and if we look at it from that point of view, I think efficiency does matter, both for claimants and for States. But I also think we need a wider lens on individual rights in asylum procedures, because often the harms that are done to applicants aren't only the harm of being treated unfairly. But often there are epistemic injustices of disbelief that aren't always captured in formal understandings of fairness from a legal point of view.
And also a lot of harms like violations of privacy and dignity. There's a very important EU case acknowledging that evidential methods of assessing, or ostensibly assessing, sexual orientation were a straightforward violation of human dignity and didn't generate any evidence of probative value. And unfortunately, a lot of the practices in asylum are like that. I mean, they're based on junk science or prejudices.
So I think we need a wider lens too on individual dignity and in particular privacy. When we think about especially, we're going to be talking about data as well. So I agree absolutely fairness, procedural fairness is a key value, but I think we also have to keep efficiency in the mix and a wider lens on individual human dignity.
DG: So I think it's a very useful grounding for us to start on and build on in terms of assessing the risks and opportunities that are posed by the scenario. And, you know, the challenge that the scenario deals with is grappling with the increased number of asylum claims, and it foresees a very large role for the use of technology in the pursuit of efficiency.
And so Niamh, I can hand over to you – just so we're on the same page, if you could explain some of the various technologies and the applications that are set out in the scenario, that would be very helpful.
NIAMH KINCHIN (NK): Yeah, absolutely. Thank you. And I have to start by saying that this is very much a moving feast. It's quite difficult to keep on top of all the new technologies that are emerging. So I may of course miss some, and you may know some others as well. But I thought it would be good to speak about this technology in a continuum in relation to the process of refugee status determination, starting actually before displacement. So before displacement, what we are seeing is the increase of what we call forecasting tools, which rely on big data to predict movements across international borders.
I have an example from the European Asylum Support Office. They've developed something called the Early Warning and Preparedness System. And what they do is, they draw on four sources of data. So for example, Frontex data – a lot of you know Frontex, the Border Agency in Europe – Google Trends, their own information. And then they try to predict when the next movement will be using indicators such as, you know, the likelihood of war or conflict or even a pandemic, and then providing those stats.
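[Illustration: a minimal sketch of the kind of forecasting tool described above – lagged indicator series (border detections, search interest, conflict events) fed into a simple regression to estimate next month's arrivals. The indicator names and figures are invented; the real early-warning system is far more elaborate.]

```python
# Minimal forecasting sketch: combine last month's indicator values into a
# one-month-ahead estimate of arrivals via ordinary least squares.
import numpy as np

def forecast_next_month(arrivals, indicators):
    """arrivals: list of monthly arrival counts.
    indicators: dict of name -> list of monthly indicator values (same length)."""
    y = np.array(arrivals[1:], dtype=float)                            # months 2..T
    X = np.column_stack(
        [np.ones(len(y))] +                                            # intercept
        [np.array(v[:-1], dtype=float) for v in indicators.values()]   # lag-1 values
    )
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    latest = np.concatenate(([1.0], [v[-1] for v in indicators.values()]))
    return float(latest @ coef)

# hypothetical figures, not real data
arrivals = [900, 950, 1100, 1300, 1250, 1500]
indicators = {
    "border_detections": [400, 420, 500, 610, 600, 700],
    "search_interest":   [30, 35, 50, 55, 52, 65],
    "conflict_events":   [5, 6, 9, 11, 10, 13],
}
print(round(forecast_next_month(arrivals, indicators)))
```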
So that's even before we get to actually coming into the registration process. And I should say broadly also, the surveillance tech that's being used in places like the Mediterranean, you know, so drones et cetera, are also an important part of this story.
When we come to the registration process – so this is the pre-refugee-status-determination process – one interesting thing that we'll be talking about more is streaming and triaging, and questioning whether there's actually automatic streaming happening in the asylum context. As far as I can see, I'm not aware of anything that's happening automatically, but it is happening in the migration space, so we have to be careful about watching what's happening there.
The UK, a couple of years ago, piloted a program which was like a traffic-light system, and it was like for their visitor visas, so, red-light stop, orange maybe, green go. And one of the indicators for the traffic-light system was nationality. So it's important to watch that kind of thing.
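[Illustration: a minimal, hypothetical sketch of a rule-based 'traffic light' triage of the kind piloted for UK visitor visas. The rules and country list are invented; the point is simply that once nationality is one of the indicators, otherwise identical applications are streamed differently on that ground alone.]

```python
# Minimal rule-based triage sketch; all rules and lists are invented.
HIGH_RISK_NATIONALITIES = {"Country X", "Country Y"}   # hypothetical list

def stream_application(app):
    """app: dict with 'nationality', 'prior_refusal', 'complete_documents'."""
    if app["nationality"] in HIGH_RISK_NATIONALITIES:
        return "red"     # routed to intensive scrutiny regardless of merits
    if app["prior_refusal"] or not app["complete_documents"]:
        return "orange"  # manual review
    return "green"       # fast-tracked

a = {"nationality": "Country X", "prior_refusal": False, "complete_documents": True}
b = {"nationality": "Country Z", "prior_refusal": False, "complete_documents": True}
print(stream_application(a), stream_application(b))  # identical files, different streams
```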
In relation to registration, of course, this is where we really look at biometrics. UNHCR have been using biometrics for registration of asylum seekers since 2002. Since 2013, we’ve had the Biometric Identity Management System, BIMS, which is where they collect biometric data – such as, of course, facial images, fingerprints, iris scans – and put them into the records as a way of recording identity. That's actually currently used really widely. I think – as of the last time I looked, and it could even be broader now – it's used in over 60 unique operations; the most recent countries to come on board are Chile and Panama. And UNHCR – I'm not sure where they're at with this – but moving towards a digital ID is something that they've certainly been discussing.
There's also the use of apps for registering before you go across the border. And here I'm particularly looking at the United States. They have something called the CBP One app, which, before asylum seekers cross the southern border, they're supposed to register on. So that's something that's happening there.
Moving from registration into the refugee status determination space, there's a few things going on here. So of course, this is the part where we have the decision-making for refugees. An important part of this, and what our scenario contemplates, is in relation to credibility assessment, interviews, that process. So I can start with that: how is AI being used, or is it being used, in the interview process? We have algorithms that are about emotion recognition, and they are being used in other circumstances, such as HR recruitment, where we're seeing automated interviewing.
It's not clear to me that pure AI is being used by States yet in an asylum interview process. We did see a rise in virtual interviewing across the pandemic, but where I have seen this, it's really interesting. Some of you might be aware of a very controversial project a few years ago called iBorderCtrl, which was in Europe and set up sort of like an avatar system which judged a whole lot of indicators as to whether someone could cross a border. Biometric verification, document authentication, risk assessment, but also deception detection. So using algorithms as a way to actually determine whether someone is lying – but lie detection and credibility are not the same thing at all. So there are things happening that we have to watch.
Speech and language analysis seems to be something that's increasingly being used in credibility assessment, driven very much by Germany in particular, but other European States as well. So what we see there is name transliteration – taking names that are not in the Latin alphabet and rendering them in the Latin alphabet. Obviously to understand the name, but there is also a drive or a motivation there to identify where that person came from, as in what region or area, according to that process.
There is also speech and dialect recognition in asylum procedures. Again, Germany started testing a tool for that in 2017. So the idea is that, say I'm speaking to you now, you might have an idea of where I came from, maybe in Sydney et cetera. That's sort of what the tool is doing. And Turkey also did a pilot of this as well. We also see Italy using speech-to-text technology, where they're automatically transcribing the interview in the credibility assessment to text and then synthesising that with the audio and visual recording of the interview.
Mobile-phone data and analysis is something where – again Germany is leading the way on this, but other places as well – the contents of digital devices, particularly the metadata, are being reviewed in order to provide a report for the purpose of credibility assessment. So in Germany, where someone doesn't have a passport or refuses to hand over, say, documents, then if they don't hand them over, it can have a negative impact, I understand, on the asylum application. Now what they're looking for in that data are things like country codes of contacts, languages that were used in the incoming and outgoing messages, country endings of the browsing history, login names for social media and of course geodata – which shouldn't be something that factors into credibility or RSD, but it does, because the journey is going to be factored in.
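[Illustration: a minimal sketch of the kind of metadata report described above – tallying contact country codes, message languages and the country endings of browsing history. All field names and data are invented; the forensic tools actually in use are proprietary and far more extensive.]

```python
# Minimal metadata-report sketch over invented device data.
from collections import Counter

def metadata_report(contacts, message_langs, browsing_hosts):
    return {
        "contact_country_codes": Counter(c["country_code"] for c in contacts),
        "message_languages": Counter(message_langs),
        "browsing_country_endings": Counter(
            h.rsplit(".", 1)[-1] for h in browsing_hosts if "." in h
        ),
    }

report = metadata_report(
    contacts=[{"country_code": "+93"}, {"country_code": "+93"}, {"country_code": "+98"}],
    message_langs=["fa", "fa", "ps", "de"],
    browsing_hosts=["news.example.af", "mail.example.com", "radio.example.ir"],
)
print(report)
```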
Now I should mention that in Australia, under the Customs Act and the Migration Act, the ABF, Australian Border Force, does have the ability or the power to seize your digital devices and inspect them and take copies of them without a warrant. And so in the migration space, I believe that that is used where a person is going to have their visa cancelled, but that power is there. So whether that again is moved into the asylum context, we may see.
And fraud detection – this can apply to digital documents but also paper documents. That's being used quite expansively in the Netherlands, so using algorithms to detect possible fraud in identity documents. I don't know that that's being used in asylum yet, but I know that there are a lot of issues I've talked about in relation to risks around documentation.
And then we come to automated decision-making itself. So this is when we're talking about, does the algorithm play a part in actually making decisions about asylum, and could the algorithm [...] play a part in that? I think the Migration Act was mentioned in the last panel. There is the power to do this in the Migration Act. It has been there for some time, as in, there's a power for a machine to make a decision. There's a lot of questions around that, but the power is certainly there. As I understand it – and if anyone knows this differently I'd really like to hear – but as I understand it, the agency, the department, is only using automated decisions at the moment for auto grants. So where there is a positive outcome and where it's an uncomplicated case. But that doesn't mean that it can't expand into asylum and more vulnerable situations.
So yeah, I think the other one that I really wanted to mention is that the Netherlands are trialling what is essentially a case-matcher system. So this is a tool that enables caseworkers to find out about applications made on similar grounds by making a search on all the cases. So it's essentially text mining, text analysis, and it then ranks the cases and documents and scores them, I believe. What I wanted to say is, it's not a machine-learning algorithm. With machine learning, the algorithm would be trained on previous data to then identify patterns and correlations, whereas this is more just text matching.
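[Illustration: a minimal sketch of a text-matching case finder along the lines described for the Netherlands – no model is trained on outcomes; past cases are simply ranked by textual similarity to a query claim. The case texts are invented.]

```python
# Minimal text-matching sketch: rank past cases by TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    "claim based on religious conversion and fear of family reprisals",
    "claim based on political opinion after participation in protests",
    "claim based on sexual orientation and criminalisation in home country",
]
query = "applicant fears persecution after converting to another religion"

vectoriser = TfidfVectorizer()
case_matrix = vectoriser.fit_transform(past_cases)
scores = cosine_similarity(vectoriser.transform([query]), case_matrix)[0]

# highest-scoring (most textually similar) cases first
for score, text in sorted(zip(scores, past_cases), reverse=True):
    print(f"{score:.2f}  {text}")
```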
That said, Canada a number of years ago was trialling a system – and again, I'd love to hear if anyone knows where Canada's up to with this, because they were certainly very interested in looking at predictive analytics for certain parts of migration decision-making, including asylum. And we're all looking at them, but I'm not sure what information is out there or where they're at with that, so it's something I feel I need to have a look at.
Yeah. So that's probably the main ones that I would unpack for now.
DG: Thank you, Niamh. And that's a really good overview of where we're at now. But taking the scenario and the sort of expansion, widespread expansion of these technologies, what risks do they pose in that context?
NK: Yeah. So I mean this is a big discussion, so I'll do my best to synthesise it as much as possible.
Increased border surveillance. So with forecasting, if you try to work out when people are going to cross the border, then those borders are going to be surveilled more. So that's a pretty clear one.
Discrimination. There's a risk of discrimination and inequity in a number of these technologies. So first of all, if we do go to automatic streaming and triaging – I know this is going to be talked about – but you know, I mentioned before that in the UK visa system, one of the indicators was nationality. So if nationality becomes one of the indicators, then we are probably going to see some discrimination in that process.
Credibility interviews. This is a big one, whether AI has a place in this or not. But there is certainly a risk of discrimination, and that the algorithm will maintain and amplify inequity and discrimination that's already there based on race, ethnicity and gender. So we've already seen these issues around facial recognition et cetera in other areas, and around assumptions as well. And so I can't imagine that that wouldn't translate in some way.
Also, despite [...] our emotion-recognition algorithms, I'm not sure how well algorithms can detect trauma. And trauma impacts memory, and then the presentation of that memory. So there's a big gap there.
In relation to automated decision-making, this totally depends on where the algorithm sits – you know, if it's just a very small part or if it's all of it. So I'll say that first. But there's a couple of issues here. First of all, there's what we would call a feedback-loop problem. So this is where previous outputs are used as inputs. And so where previous outputs were based on bias and discrimination, they're then used to inform the algorithm, which then embeds and entrenches that bias and discrimination. And they could be on factors such as nationality, but also on the other nexus grounds for refugee status, such as religion, et cetera.
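[Illustration: a minimal simulation of the feedback-loop problem described above, using invented numbers – a streaming rule learned from a biased historical record routes one group into an accelerated track where claims fare worse, those outcomes are fed back into the record, and the disparity persists even though the underlying merit of the two groups is assumed identical.]

```python
# Minimal feedback-loop simulation; all rates and penalties are invented.
import random

random.seed(0)
TRUE_MERIT = 0.55                         # assume both groups have identical real merit
observed_rate = {"A": 0.60, "B": 0.40}    # biased historical record

def route(group):
    # streaming rule learned from the record: "weak" groups go to the accelerated track
    return "accelerated" if observed_rate[group] < 0.5 else "regular"

def decide(track):
    # accelerated procedures give claimants less chance to make their case
    p = TRUE_MERIT - (0.15 if track == "accelerated" else 0.0)
    return random.random() < p

for rnd in range(1, 5):
    for g in observed_rate:
        outcomes = [decide(route(g)) for _ in range(2000)]
        new_rate = sum(outcomes) / len(outcomes)
        # "retraining": blend the new outcomes back into the record
        observed_rate[g] = 0.5 * observed_rate[g] + 0.5 * new_rate
    print(rnd, {g: round(r, 3) for g, r in observed_rate.items()})
# Group B stays trapped in the accelerated track, so its recorded grant rate
# settles below Group A's even though underlying merit is identical.
```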
Another one that I'm really interested in, and have been thinking a lot about, is how algorithms will deal with the issue of well-founded fear. So we know that in order to be determined to be a refugee, to have a positive refugee status determination, it needs to be shown that there's a well-founded fear of [persecution]. Well-founded fear has two elements: one is an objective fear, and one is a subjective fear. So in actual fact, AI and tech are maybe going to have some really interesting benefits for objective fear, especially in the way we might bring in and synthesise country of origin information and keep that up to date. But how is it going to deal with subjective fear when it's being trained on somebody else's fear? And so that's a big issue that I see. I don't know that it's going to get across that, and then we end up going back to credibility assessment interviews.
Privacy and data abuse. I think Cathryn talked about this [...] it's an obvious one and a really important one.
Biometrics – data security is a real issue. It could lead to [refoulement], you know, keeping records to help send people back, and we have seen that at least discussed in the Bangladeshi context. If national data legislation is not robust, then it could land in the hands of the persecutor, and of course mobile data raises violations of privacy.
Just a couple more things. One, all of this, I think, might go to shifting the burden of proof back onto the refugee. So, if we have AI in emotion recognition, at what point does that become a lie detector rather than a credibility assessment? [... So then] the refugee or asylum seeker has to prove they're not lying. That's a burden that wasn't there.
Data forensics will increase the need for the documentation and the data in the first place, and we know that there are so many reasons why refugees and asylum seekers do not carry those sorts of documents. And there are just technical issues, of course, like that CBP One app that I mentioned before – it's got lots of glitches. Speech and dialect recognition, I don't think you could ever get 100% right. Biometrics does suffer from sample degradation. And we talked in the scenario about analysing social media as part of the credibility assessment. I mean, that's really difficult: highly informal language, which may include technical and social slang. I think there's a whole discussion about how life on social media might not actually be real. So how do we actually deal with that?
And very broadly, there's a big risk that the decision-makers will just trust the machine – you know, 'computer says no'. I think that maybe the human decision-maker will trust the machine more than they trust themselves. And broadly, then, there's the risk of this tech creating a move towards more collective refugee status determination where that's not the most advantageous approach. But I feel like I've spoken a lot, so I think I'll leave it there. Thank you.
DG: Thanks so much Niamh. I mean, some really, really important points to bear in mind there. I think the final one, about the human in the loop – which is usually the excuse that governments use right now to say, don't worry, we use all these AI systems but there's a human making the final decision, and there's a deliberate choice to go with that in the scenario we have today – but of course the danger, Niamh, as you say, is that they just become a rubber stamp for whatever the algorithm is doing, and then the algorithms aren't put to the scrutiny they need to be put under because of that.
Ed, do you [want to] jump in here?
EDWARD SANTOW (ES): I mean, this is precisely one of the main problems that was called out in the Robodebt Royal Commission report. You know, very grateful for the excellent work done by Darren O'Donovan in Melbourne, who pointed out that Centrelink, as you'd call it colloquially, had never employed more people than when they were running Robodebt. In fact, very few debt notices ever went out without a human sort of nominally looking over them first. But the problem was, it was really sort of a just-for-the-cameras exercise. Those humans who were responsible for those debt notices were not able to go and check the information to determine whether the individuals in question actually owed money. And then when they were, you know, receiving calls from the public saying hold on, I've just got this debt notice for 5000 bucks, I don't owe $5000, all they could do was go, no, no, the computer says that you owe $5000. So you can have the veneer of a system that has a human in the loop without the reality of it.
DG: Shahyar, I'm going to hand over to you with a very difficult proposition, which is that we've talked about all the risks, but can you make us a bit more optimistic about what the future holds in this regard by talking about some of the opportunities that tech provides in the context of asylum determinations?
SR: Yeah, Daniel, fortunately I'm an optimist, so this was fine. Look, I can certainly see the appeal of those technologies you mentioned in the context of primary decision-making, particularly border control. It became difficult for me, as I was looking through these technologies as they develop, to see how they may apply in terms of merits review and judicial review. But when I allow, I guess, my imagination to go wild, I do get excited about the potential that these technologies hold.
And just one thing I wanted to do before that is to draw a distinction between automated decision-making and AI. The automated decision-making that was used in Robodebt was a very crude tool based on very basic rules, and it resulted in such a catastrophe. But AI should not necessarily be confused with automated decision-making. Although automation is a component of it, AI has developed a great deal since then, and so the potential is quite wide and may be positive.
So in terms of, again, the work that I do, I want to talk about three potentials. One is efficiency through data analysis and provision of up-to-date information; secondly, research and training; and, thirdly, consistency in decision-making.
So in terms of data analysis, I think there is huge potential to assist decision-makers in looking through volumes of information that's presented by applicants to organise them and classify them. This is not necessarily new, but how it's going to be done through more sophisticated AI tools can have profound implications for the way that tribunals and courts do their work.
But one aspect of it that is particularly, I guess, striking for me is identifying cases that need to be signposted for prioritisation, for allocation when you have a massive backlog. For example, how do you identify applicants who have severe mental illness? How do you identify vulnerable women? At the moment, for example, if we want to rely on the antiquated case-management tools that we have at our disposal, this task can be done, and in fact is being done. But it takes a long time. And so I can see huge potentials for us to be able to use AI to be able to prioritise cases that need to be prioritised.
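[Illustration: a minimal sketch of flag-based backlog prioritisation of the kind described above – cases carrying vulnerability flags are scored and surfaced first. The flags, weights and cases are invented; in practice the difficult part is detecting those flags reliably in case files.]

```python
# Minimal prioritisation sketch: score cases and pop the most urgent first.
import heapq

# weights are invented for illustration
WEIGHTS = {"severe_mental_illness": 5, "unaccompanied_minor": 5,
           "at_risk_woman": 4, "medical_urgency": 3}

def priority_score(case):
    # flagged vulnerabilities plus one point per year already spent waiting
    return sum(WEIGHTS.get(f, 0) for f in case["flags"]) + case.get("years_waiting", 0)

backlog = [
    {"id": "C1", "flags": [], "years_waiting": 1},
    {"id": "C2", "flags": ["severe_mental_illness"], "years_waiting": 2},
    {"id": "C3", "flags": ["at_risk_woman", "medical_urgency"], "years_waiting": 3},
]
# heapq is a min-heap, so negate scores to pop the highest-priority case first
queue = [(-priority_score(c), c["id"]) for c in backlog]
heapq.heapify(queue)
while queue:
    neg_score, case_id = heapq.heappop(queue)
    print(case_id, -neg_score)
```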
In terms of provision of up-to-date information, again, you can just imagine how important it would be for both decision makers and representatives to have access to up-to-date country of origin information. Imagine that you'll be able to ask questions and receive real time, reliable answers. This will save time and it will work, hopefully, to the benefit of asylum seekers in most cases if that country information is favourable.
But the area that I'm excited about is really training, because it's something that I recently turned my mind to, and this is training through the use of specific technologies, in particular virtual-reality technologies, by creating simulated environments. Primarily, for example, this can be used to train decision-makers for interviews and hearings so that they can be better equipped to identify vulnerable applicants and to be able to deal with them. At the moment, decision-makers [... in] a tribunal are provided with training. But whether these traditional forms of training are going to be enough to equip decision-makers to deal with these kinds of situations remains to be seen. Whether AI is going to actually assist us to do this better, that's something I really look forward to.
But one aspect of VR technology that I was particularly excited about was: what if VR can assist decision-makers to become more empathetic? What if we can teach empathy to decision-makers? And that's by creating virtual realities where real situations are simulated, so that the decision-maker can put themselves, for training purposes, in the shoes of another, experiencing different things. What could that mean in terms of debiasing? Kahneman in his book, Noise, which I'll come to, identifies actively open-minded thinking as probably the identifier of better decision-making, because it's the type of thinking that acknowledges ignorance and confronts overconfidence, in the sense that decision-makers are able to identify what it is that they do not know, what it is that they're ignorant about. What if, through simulated environments, decision-makers are able to have first-hand experience by exposing themselves to real-life situations?
We can go further than that, you know, putting yourself in the position of someone else. In an influential study by the psychologist Lara Maister, which appeared in an article called 'Changing Bodies Changes Minds', what Maister and her colleagues did was design experiments in which participants, using VR technologies, were able to experience embodying a different gender, age, race or sexuality. And they found that, at least in the immediate term, this resulted in debiasing towards the out-group. Now what if this can be simulated for decision-makers? How profound would the impact be?
Similar experiments have been done at Stanford, and one that really piqued my interest was an experiment that allowed people to put themselves – again through virtual-reality technology – in the shoes of a young girl in a refugee camp. According to the UN, this doubled the donations that were made to the particular refugee fund. So if you're able to actually create that kind of an emotional impact by creating these virtual experiences, then you can imagine what the consequences can be: not just teaching empathy but debiasing as well.
Now thirdly, consistency, which is of particular interest to me. I think it also goes to one of those fundamental values of, you know, a refugee system: equality before the law. We have to anticipate that people in similar situations, or people with similar cases, should receive similar outcomes. Equally, different cases should be treated differently. Now how can we do that? And I think AI can really assist us with being able to identify where inconsistencies occur and to at least provide some remedies for both courts and tribunals to address inconsistency in decision-making.
I was listening to a talk by Daniel Kahneman. This predated the publication of the book, Noise. And for those of you who may not be familiar, noise is essentially a reference to variation in decision-making by decision-makers in the same organisation making similar types of decisions. And he was saying that, you know, he is responsible – this is Kahneman saying that he is responsible – for perhaps identifying bias as the biggest source of error, when in fact noise is the biggest source of error in judgements. And he referred to it as being poisonous.
And he made this radical suggestion that, you know, if you in fact replace humans with algorithms, then you will probably get rid of noise. And this is because of the discipline and regularity that algorithms bring to decision-making. That's probably a bit too extreme. But I think, you know, in a practical sense, you actually have AI tools being used today to address inconsistency in judicial decision-making. And I think, in particular, Daniel Chen and Manuel Ramos Maqueda refer to how AI tools are currently being used for this purpose.
The Data and Evidence for Justice Reform (DE JURE) program, for example, is being used to harness a body of decisions that's been made by a judge to identify where inconsistencies occur – particularly, again using Kahneman’s terms, occasion and pattern noise, where judges show idiosyncrasies or inconsistencies with their own judgements – so that then, through feedback, this inconsistency can be addressed and used for training purposes.
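[Illustration: a minimal sketch of how a tribunal's own data could surface 'noise' in Kahneman's sense – members deciding randomly allocated, comparable caseloads with markedly different grant rates are flagged for feedback and training, not overridden. All figures are invented.]

```python
# Minimal noise-audit sketch over invented grant rates.
from statistics import mean, pstdev

grant_rates = {          # hypothetical grant rates on randomly allocated caseloads
    "Member 1": 0.52,
    "Member 2": 0.48,
    "Member 3": 0.21,    # well below peers on comparable cases
    "Member 4": 0.55,
}

avg = mean(grant_rates.values())
spread = pstdev(grant_rates.values())
print(f"average grant rate {avg:.2f}, spread (system-noise proxy) {spread:.2f}")

for member, rate in grant_rates.items():
    if abs(rate - avg) > 1.5 * spread:
        # flag for feedback and training, not automatic override
        print(f"{member}: rate {rate:.2f} diverges markedly from peers")
```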
So these areas really excite me, at least as far as merits review and judicial review are concerned. So I think there's great potential for AI there.
DG: Thank you, Shahyar, for that glimmer of hope and optimism and also the responsible use and application of tech in the space. But to that end, Ed, you know how do we – what steps do we need to take to ensure accountability for these new technologies and automated decision making in the context of asylum decision making or more broadly in society?
ES: Yeah. Someone once accused me of being the Grim Reaper of artificial intelligence. I'm certainly not the optimist. I'd like to think of myself as taking a balanced perspective. But maybe I'll make a couple of observations. The problem with humans – and I say that acknowledging that some of my best friends are humans, and indeed I am one too – but the problem with humans is, the way in which we make decisions can be susceptible to all kinds of biases and irrationalities. We know that, right? I don't need to explain that to you, you all know that.
So that in turn makes us really vulnerable to someone perhaps wearing a turtleneck who says to you, you know what, we've got this amazing technology solution that will debias the way in which we make decisions, and it could be any category of decisions you like. We're vulnerable to that because we have an appropriately humble sense of our own fallibility. We know how we humans can fall into error. And so when someone offers that solution, we kind of cling to it like grim death, you know. The problem, though, is we don't necessarily subject it to the level of scrutiny that we need to, because it's got to be better because it's not us, right?
And so I just want to zero in on the use of AI in decision-making. And by that I acknowledge some of the definitional points made to my left, to my right. But I particularly want to pick up the point that Niamh made which is, you know, that machine-learning is really, probably the most important hallmark of artificial intelligence. So most forms of AI-based decision-making start with a kind of pool of previous decisions that you train your machine on. Because of course the problem with AI is that it doesn't start off as intelligent. You need to train it. And in doing that, what's incredibly difficult to move away from is that you are ingesting into the machine a whole series of previous sort of human frailties, those biases, those prejudices and so on that we're all so concerned about.
And so the difficulty, I guess, in kind of putting too much optimistic hope in AI as a solution to some of the frailties of human decision-making is that the two things are intimately connected. There's an unbreakable kind of connection between those two things. I mean, you're tethered to those previous human decisions. And so you can do all kinds of things to contort yourself, to kind of decouple the machine from the human, but they're incredibly difficult and they come with their own risks. So my very much less optimistic view is, we've got to be really careful about, you know, jumping from one problematic form of decision-making to a different problematic form of decision-making.
So, going to the core of your question: how do you bring about accountability? One of the people who's been most influential on me in my journey in AI the last few years is one of the most prominent experts in artificial intelligence, a guy called Stuart Russell. He's such a doyen of AI that his most famous book is simply called Artificial Intelligence. He's that kind of guy, right? Like, you know, he's got that level of credibility. He gave the Reith Lectures a couple of years ago in the UK, which are like the equivalent of our Boyer Lectures, and he articulated three new laws for AI. One of them I really want to zero in on, because it was truly shocking for his colleagues who, you know, on the whole, have dedicated their entire professional careers to AI.
So, the law that he proposed is: assume AI fails. Assume that it will fail every time. I think that's really interesting. Not because AI will always fail – of course it won't; if it did, then we would be a lot less excited about it than many of us are. But rather, if you build a decision-making system on the assumption not of infallibility but rather of the knowledge that it will make mistakes, then you are essentially building in a really, really powerful mechanism to ensure accountability. To ensure that where the decision-making system starts to go off the rails, you'll get weak signals that you'll be able to interpret and make sense of, and you'll be able to ensure both individualised justice as well as putting the system itself back on the rails. That's precisely what we did not do with Robodebt. I don't mean to harp on about Robodebt. But with Robodebt, we had the exact opposite assumption. We assumed that the system was completely fail-safe, and we also assumed that any individual who claimed, you know, that the system was producing an error was a liar, was a fraud.
Now if we keep on making those assumptions, then we are almost certainly going to create less accountable systems, systems that are more prone to injustice and error.
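[Illustration: a minimal sketch of what designing on the assumption that the AI will fail could look like – recommendations carry a confidence score, anything that is not a clear positive goes to a human, and the rate at which humans overturn the model is tracked so that drift produces a visible signal. The thresholds and the recommend() stub are hypothetical.]

```python
# Minimal "assume it fails" pipeline sketch; thresholds and stub are invented.
overturns, total_reviews = 0, 0

def recommend(case):
    # stand-in for a model; returns (recommendation, confidence)
    return case.get("model_rec", "refuse"), case.get("model_conf", 0.6)

def decide(case, human_decision_fn):
    global overturns, total_reviews
    rec, conf = recommend(case)
    if rec == "grant" and conf >= 0.9:
        return rec                      # only clear positives are automated
    # everything else goes to a human, with the model's view shown as advice
    total_reviews += 1
    human = human_decision_fn(case, rec)
    if human != rec:
        overturns += 1
    if total_reviews >= 20 and overturns / total_reviews > 0.3:
        print("ALERT: humans overturning >30% of recommendations - audit the model")
    return human

# usage: decide({"model_rec": "refuse", "model_conf": 0.7}, lambda c, r: "grant")
```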
DG: And now, what role does regulation play in this space? And should it be like general regulation of AI, automated decision-making or specific regulation in the context of migration?
ES: Yeah, on the whole, I would say the general principle is neither of those things. What's really important to acknowledge is that there are a whole bunch of, again sort of skivvy-wearing tech types going around at the moment, and have been for the last few years, saying we really need regulation for artificial intelligence. We really, really need it. The truth is we have a whole bunch of laws out there already – the vast majority of laws that we have are technology-neutral, which means that they apply to all technologies and to none in particular.
And so for example, if you are seeking to make a decision, and you're a human and you happen to be making an RSD determination and you happen to be prejudiced against people from Country X, well that is clearly in breach of our existing anti-discrimination legislation. If you had an algorithm that had the same effect – and that's not a hypothetical problem, right? That's a real problem – then we suddenly start to question ourselves and we go, oh, is that an ethical problem? No, no, not at all. It is just as unlawful. But it's only treated as such if we make sure that we enforce the law in respect of those algorithmic or AI-informed decisions.
So frankly, the very first, most important thing we could do to ensure that AI doesn't lead to greater levels of injustice, is to enforce the bloody law that we have right now. Right? To just do it really effectively. And to resist the kind of siren song of people, you know, who may be well-meaning but often I think are probably being a little bit disingenuous, saying we really need to have a very, very long debate about what our regulation for AI should look like. Because for every moment that we're having those very, very long debates, essentially that allows companies, governments to act as if there's no law at all. And that's just not true.
DG: Thanks, Ed. And we have a question from the audience that touches on some of the points you were making there. The question is, will judicial review of migration decisions in future include arguments about flaws in coding algorithms and inherent biases and other issues with AI tech?
ES: Yes, is the short answer. I mean, there's a really interesting decision from about 18 months ago, not in the refugee space. It was Rod Sims, when he was the head of the ACCC, who took a case against Trivago, which is a travel website. Some of you may be aware of this story, but I'll just quickly run through it with you because I think it's really instructive. It's a globally significant test case but happens to have been here in Australia.
So in that case – Trivago, I'm sure you've all had this experience. You're going on a holiday somewhere. You put in your details and it'll tell you, okay, when you go to Coffs Harbour, here are the top 10 hotels based on what you need. What the ACCC got the impression about – well, got a tip-off about – was that, no, they were ranked on the basis of the size of the commission that Trivago got. Trivago claimed, no, no, couldn't possibly be that. No, definitely, definitely not. And also, we're never going to show you our algorithm or our training data or anything like that, because that's our secret sauce. You know, we're really worried that the ACCC might leak that information or set up its own travel– I don't know what they're worried about. Some crazy stuff, right?
And the ACCC was really clear. And they go, no, we're the regulator. We'll take that information, thanks. And they fought it really, really hard. They fought it all the way to the full bench of the Federal Court. And I mean, it was, you know, I think, really obvious. It was like... it was the judicial equivalent of saying, this is a stupid kind of argument, but they put it more nicely than that.
And then, lo and behold, the ACCC was suddenly the dog that caught the car. So they got all of this incredibly detailed information, it's like the equivalent of getting like pages and pages of ones and zeros. And then they go, oh God, what do we do with this? Right? We've got the algorithms, plural, and we've got all this training data, but how do we make sense of it? And that was a really, really difficult problem, right?
So the next step was interesting, because they didn't have a team of technical experts employed by the ACCC who could actually make sense of it and determine whether the law was being breached or not. So instead they had to contract with Quantium, which is Australia's largest sort of, you know, AI company gun-for-hire. And working with them, they were able to determine precisely what happened. And, you know, just to ruin the story for you, turns out that Trivago was indeed kind of, you know, acting in their own commercial interests rather than the interests of their customers.
But the point from that is, I guess, twofold. One is we now have a really clear judgment from the full bench of the Federal Court that a decision-maker can't keep that information from the regulator or from a Court. Two, if you get that information, you've still got another challenge, right? So one can imagine if you are an individual, as is the case with refugee decision-making, the idea of being able to, you know, really get the expert evidence that you need to be able to interrogate that data is incredibly difficult to imagine being realistic.
So if there is an area of reform that's really important, it is perhaps an amendment to the Acts Interpretation Act to say, look, you know if you've got a statutory right to reasons, you can't just give someone this massive pile of virtual paper. You actually have to give them the means to be able to undertake a technical assessment.
DG: Cathryn, can I turn to you for your views on the regulation point?
CC: Sure. Thanks, Dan. I wanted to just pick up that idea that we have a lot of law in this field. And I think we can see that, because if you take a wider view, a lot of the use of these technologies – in particular around algorithmic decision-making, not just in migration and asylum but more widely – have been successfully contested using domestic administrative law principles.
Privacy for example, in the case of mobile-phone data extraction, that's been condemned by courts in the UK and in Germany, but other aspects of administrative law also. And it's something in the AFAR project where we're going to try to map and actually develop a tool, a database, to have an overview of these kind of contestations.
Obviously there is specific legislation on AI as a product being drafted as we speak at the EU level, and it would categorise certain AI uses in migration and asylum as high-risk. But I think more salient are general procedural rules for the asylum sphere and also, as I mentioned, general administrative law principles.
But I wanted to just maybe step back a little bit, because I think what we haven't maybe mentioned is that unlike say in criminal justice, in the asylum context, we do have, you know, a fundamental lack of settled principles for evidential assessment in general. And I've been really convinced by Hilary Evans Cameron’s work in this field about our lack of stable risk assessment principles. And I think no amount of automation can overcome bigger kind of questions about, how do we assess evidence in this field?
And the second big caveat I would make is that in reality I think a lot of the reasons why asylum decision-making is really challenging is because refugees have generally been illegalised, especially if you're looking at European systems. So the reason people have destroyed their documents is because they've taken the advice of smugglers. The reason people, you know, are often completely depleted is because they've made long, dangerous journeys and they're extra traumatised because of that. And I think we have to bear that in mind, you know, that a lot of the time the idea of building this very complex machinery to assess asylum claims just seems – in some ways, I'm very uncomfortable with this idea, because of the fact that, you know, we're dealing with people that have been traumatised by these containment practices. And now we're going to double down on tech rather than actually saying, well, look, how could you make an asylum system where people are less traumatised in the first place?
I do think there are often individual positive tools that can be designed. So I think preference-matching is potentially really great for allocating asylum seekers and refugees to places where they would thrive, or even have agency in choosing. I think the judicial nudges sometimes could be very usefully designed so that judges and decision-makers are aware of patterns of decision-making. But the reality is we don't really see that much investment in tools that would be positive for decision-makers and for applicants because a lot of this technology is about, you know, confirming suspicions that people are not the nationality they claim or really doubling down on, you know, some of the more exclusionary practices.
And these tools are often, at this stage, I think, especially in asylum, just being designed to confirm suspicions. So, you know, at a certain point in time, European decision-makers were accepting more Syrian applicants. And then all of a sudden there's this real concern: oh, okay, now because we're accepting more Syrian applicants, we have more people pretending to be Syrian, so we have to do dialect analysis. Or we think some nefarious people have gotten hold of Syrian passport machines, so now we have to double down on checking the authenticity of documents. So they're often, you know, devised in this very reactive mode, and nobody's really stepping back and saying, well, look, why is it that asylum decision-making has become so complex?
So yeah, I think we have to be attentive to those concerns as well.
DG: Thank you, Cathryn. That's a great segue, a nudge, for me to move on from the tech component of this scenario. I think I've shown my own human biases by focusing on one of the areas I was most interested in. But the scenario does contain other elements, particularly focusing on other modalities beyond individual RSD for identifying people in need of protection.
And so Cathryn, I'll just stick with you and ask you about this greater role for group determination in this scenario. You know, it's something that's happening in some contexts already. What do you think the benefits of that approach are, and in what contexts is the use of group determinations appropriate?
CC: Well, in our work in the RefMig project on this, I've always been influenced by Jean-François Durieux’s take, which is to highlight that there's always an element of group assessment in all asylum determination because, you know, persecution is on shared grounds or the risk is shared by other applicants, and we're generalising based on what we know about country-of-origin conditions.
But if we think more formally about group-based mechanisms, you know, they're pervasive. I would say most people who benefit from international protection probably do so on the basis of some sort of group-based practice, especially if we take into account prima facie declarations in Africa. But it's really not one or the other. So even in African States, mostly what you find is certain populations will have benefited from a prima facie declaration and an individualised process often exists to decide if the person belongs to that group, and you'll have some kind of individualised process for other applicants. So, it's a very sort of patchy practice, and it's a tool that's used by States.
I think if we look more widely, let's say the practices in Latin America that have arisen out of Venezuelan displacement, there you do get very generalised group-based statuses, but often more ad hoc or once-off statuses – sometimes with, you know, very significant rights attaching to them, for example the status which Venezuelan displaced persons enjoy in Colombia, which is a 10-year residence permit with the right to work.
And of course the biggest one of contemporary significance is the formal temporary protection for people who fled Ukraine after the Russian invasion. And there, I think, that was based on an EU legal instrument, as probably most people are aware, which was adopted after the Balkan wars but never activated. Indeed, the EU had shelved this legal instrument for abolition, saying it was completely unrealistic to envisage it would ever be used, just months before it was – which is a nice example to bear in mind when we are scenario planning.
And I think there, you know, you get all the advantages. You know, everybody who flees enjoys protection, but it's temporary, and in the case of the Temporary Protection Directive, extremely temporary. And of course then the determination of whether that status is going to continue is one that's made at the highest political level at the EU, which makes it very uncomfortable from an individual rights point of view.
So I think when we've tried to map these practices what we've tried to do is be attentive to who decides. Often group-based determinations involve high-level political determinations rather than judicialised or, you know, formal adjudication. And sometimes they come with some rights restrictions or fewer rights than formal refugee status – that's obviously of concern, but not always.
And on the rights side of it, on the quality-of-status question, I think we always have to be careful about what we are comparing with. Are we comparing with the ideal in the Refugee Convention plus international human rights law, which refugees don't enjoy in reality in many – I would say most – States in the world? In which case, you would say these kinds of alternative statuses often fall short. Or are we comparing with the reality for even recognised Convention refugees, which is often rights restriction as well?
But those practices are out there, and what they show is [...] these kinds of practices are, I think, important just to understand empirically, because they're there. And that was the motivation at the beginning of the RefMig project. But they also tell us something very important about refugeehood: that it doesn't have to be determined only on an individualised basis. Because often, in both doctrinal refugee law and in bureaucratic practice, the emphasis on the individual isn't rights-protective. Actually, it's sometimes about singling out an exceptionality, a kind of mode of exclusion. So that was the other reason I was always interested in these practices, more normatively.
DG: Thanks Cathryn. And then sticking on the different modalities, there was also a big focus on kind of streaming into different procedures in the scenario. So whether that's into the accelerated procedures or fast-track procedures – and obviously that's something that's happening to a certain degree already – I just wanted your views on the circumstances where such streaming is appropriate and what safeguards need to be in place.
CC: I think, very crudely, streaming strong claims into fast tracks for swift, positive determination seems to be a fairly easy win-win that would benefit States and refugees. Maybe there are some concerns about exclusion one would want to be mindful of, but in general, I think fast-tracking evidently strong claims – if we can figure out how to identify them – seems like a win-win.
Obviously, most of the practices are on the other side of the coin. Most of the practices we see are about streaming claims based on presumptions about a putative safe country of origin or, more commonly, some kind of mishmash of admissibility and unfoundedness concerns. What we find there – and this is what Europe has been doing for decades now – tends to be really suspect from both a procedural-fairness point of view and an accuracy point of view, and also very counterproductive. So I'm pretty sure at this stage that most of the ostensible procedures introduced to accelerate claims end up lengthening the overall asylum process, because people try to get out of any fast track that's going to lead to a rejection. If they have a lawyer at all, or any way to get out of the process, they will bring an appeal if they can.
If we look at the inadmissibility practices in Greece, for example, where virtually all claims are found inadmissible, nobody is removed to Turkey anymore. So people just try to bring fresh claims over and over again, unless they give up on asylum altogether. So I'm very sceptical, based in particular on the European experience, about the practices we label ‘accelerated’, because they might get you to a quick rejection at first instance, but they certainly don't do anything to make the asylum system overall more efficient.
DG: Thank you, Cathryn. That's very much been our experience here in Australia, too, with our so-called fast-track system for processing asylum claims, which was anything but fast: it ended up being not only unfair but much slower than the regular system.
And maybe tying that back to our initial discussion around the values that inform the design of asylum systems: fairness and efficiency are often flagged as being in tension. But in reality, particularly in legal systems where there are options for robust review of decisions, any overreach – trying to increase efficiency by curtailing rights – can end up backfiring and actually leading to longer delays.
We might turn to questions from the audience now. I've had a few come through already on Slido, but we can also take some questions from the audience in a moment. There's quite a few practising immigration lawyers in the audience. And I guess the general question is, you know, what does this all mean for people practising in this space?
But also the specific question that some of the audience had was, can and should refugee lawyers use AI to assist refugees in the RSD process?
ES: I'm happy to have a swing at the second question.
I mean, yes, but carefully, maybe. There are some products from companies you would recognise, like LexisNexis, that are increasingly integrating AI into case-management and similar tools. And on the whole, some of those tools are really carefully designed. We've been doing some work recently with the Victorian Legal Services Commissioner – the regulator for lawyers in Victoria – on the rise of generative AI, so things like ChatGPT. One of the most common and most worrisome trends – and this is particularly common among junior lawyers, right? – is having ChatGPT open as a tab and just constantly asking it questions over the course of the day. That's really, really dangerous. Please don't ever do that; there will almost certainly be a whole bunch of people who lose their practising certificates for doing that.
Just to give you two reasons why not to do it. One, we know that these applications hallucinate. They make stuff up. There was, quite famously, a New York lawyer recently who was very late in putting together his written submissions for court and so got ChatGPT to do it. What it came up with sounded very plausible, and it made up a whole bunch of cases. I mean, it would be helpful if those cases existed, but they happened not to.
And then the other danger – of course, there are many dangers – another danger is that whenever you're inputting your client’s personal information into ChatGPT, you're handing it over to a number of companies, including OpenAI and Microsoft, in a way that you shouldn't. You're not allowed to, right. So being really cautious about that, I think, is very important.
DG: At the Kaldor Data Lab, we are working on this right now – precisely a tool to assist refugee lawyers in representing their clients. The data's already there; we're just working on presenting it in a way that's accessible to lawyers, rather than the Excel spreadsheets and pivot tables we have right now. Basically, it gives you insights into each individual judge and each individual tribunal member: how they decided past cases, broken down by various characteristics. And what we're exploring using generative AI for is also [...] for example, for judicial review, which cases a judge tends to cite when deciding in favour of the applicant, and which cases are cited when they go the other way. So very much mimicking the proprietary judicial-analytics tools that Ed mentioned.
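To make that kind of aggregation concrete, here is a minimal, purely illustrative sketch of how decision data might be summarised per tribunal member – the column names, toy records and case labels below are invented for this example and are not the Kaldor Data Lab's actual data, schema or code.

```python
# Illustrative only: toy data and hypothetical column names, not the actual
# Kaldor Data Lab implementation.
import pandas as pd

# Invented records of past review decisions.
decisions = pd.DataFrame({
    "member": ["Member A", "Member A", "Member B", "Member B", "Member B"],
    "nationality": ["X", "Y", "X", "X", "Y"],
    "outcome": ["remitted", "affirmed", "affirmed", "remitted", "remitted"],
    "cases_cited": [["Authority 1"], [], ["Authority 2"],
                    ["Authority 1", "Authority 2"], ["Authority 1"]],
})

# Share of decisions each member decided in the applicant's favour (remitted).
favourable_rate = (
    decisions.assign(favourable=decisions["outcome"].eq("remitted"))
    .groupby("member")["favourable"]
    .mean()
)
print(favourable_rate)

# Authorities most often cited by each member in favourable decisions.
favourable = decisions[decisions["outcome"].eq("remitted")]
citations = (
    favourable.explode("cases_cited")
    .dropna(subset=["cases_cited"])
    .groupby("member")["cases_cited"]
    .value_counts()
)
print(citations)
```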
But I guess the concern there is that if we leave the development of these tools in the hands of big companies, the risk is that they further increase the access-to-justice gap. Very soon the question won't just be how much money you have to pay for a lawyer and how expensive your lawyer is, but whether your lawyer can afford the exorbitant fees for the judicial analytics that give them an edge. So creating open-access versions of these tools, I think, is very, very important for access to justice.
SR: This is just something I came across as I was doing some reading: Daniel Chen's research in the US immigration context, where, just by knowing the name of the judge and the applicant's country of nationality, his team could predict the outcome of the case with 80% accuracy. Now, the proposition there was that this could assist representatives in being better prepared for an outcome that is so easily predicted. Well, I'm not really sure that's always possible in every jurisdiction.
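For readers curious what a predictor of that kind looks like in practice, here is a hedged sketch – not Chen's code, data or model – of a classifier trained on only those two features; the records, labels and any resulting score are invented for illustration.

```python
# Sketch of outcome prediction from judge identity and nationality alone.
# Everything here (data, labels, accuracy) is invented for illustration.
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy training data: one row per past case.
cases = pd.DataFrame({
    "judge": ["Judge 1", "Judge 1", "Judge 2", "Judge 2", "Judge 3", "Judge 3"] * 20,
    "nationality": ["A", "B", "A", "B", "A", "B"] * 20,
})
granted = [1, 0, 0, 0, 1, 1] * 20  # 1 = protection granted, 0 = refused (invented)

model = make_pipeline(
    make_column_transformer(
        (OneHotEncoder(handle_unknown="ignore"), ["judge", "nationality"])
    ),
    LogisticRegression(),
)
model.fit(cases, granted)

# In-sample accuracy; real research would use held-out cases and cross-validation.
print(model.score(cases, granted))
```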
One thing I just wanted to come back to, Ed: between the optimist and the pessimist, we are in furious agreement, in the sense that I think there is probably a danger in transporting ourselves from now to a Jetsons-like era. I certainly don't imagine decision-making being completely delegated to some AI-type model. In fact, the three areas where I felt this could be of great assistance to decision-makers are all forms of assistive technology.
And I think if we frame it that way – that these tools are there to assist us in making better decisions, not just asylum decisions but decision-making generally, as human beings – this is where we can really capitalise on it: without abrogating responsibility for decision-making, but utilising the sophistication these tools give us to make better decisions. And I don't see why others, including representatives, can't use these as assistive tools to do the same.
I was listening to a lecture by Amy Webb, who is, I think, the CEO of the Future Today Institute, which publishes a report on tech trends for the next 10 years. And I think she's absolutely right that it will become impossible for us to think on our own. And while I totally agree with you, Ed, that we have to be very, very careful in doing that kind of not-alone thinking on an everyday basis through ChatGPT, we also have to be aware that this technology is advancing at an amazing rate.
For example, Perplexity does the same thing as ChatGPT does, except that it actually gives you the sources where that information came from. This can be a hugely assistive tool in terms of, for example, knowing the situation in a country where you don't have any idea about what's going on.
I noticed only recently that Bing, which now also uses ChatGPT in every response it generates, is incorporating references as well. While those references are limited at the moment, I can see that in the next year or two the sourcing will become more sophisticated.
So I guess it's a fine balance, isn't it, to make sure that we're not really rejecting this kind of technology because of its dangers, but also not, as our keynote speaker said, embracing quick fixes because it's feasible for us to do so.
ES: Just 30 seconds. So safe experimentation is really good. I'm definitely not saying, you know, throw out all the machines. And I agree with you that there's a world of difference between handing over responsibility to the machine and assistive technology. But in doing that, we must remember the point that Niamh made – and there's a three-dollar term for this: algorithmic deference. In theory, it's just a piece of assistive technology, just giving you a recommendation. But in practice, a lot of the time what people end up doing is deferring to the machine: they don't check the workings, they don't do any of that really important legwork.
And so the thing that is really, really important is to maintain a bright line between assistive technology and automated decision-making.
DG: But this goes back to the idea of transparency: knowing what technologies are being used and how they're being used, but then also having data on what types of decisions they're producing. For example, in that scenario, if you had clear data on the number or percentage of times each individual decision-maker overturned the decision the computer recommended, that would give you really good insight into whether they are actually bringing an open mind to the question.
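As a purely hypothetical illustration of the metric Daniel describes – the column names and figures below are invented, not drawn from any actual system – the override rate per decision-maker could be computed from a simple decision log:

```python
# Hypothetical decision log: column names and values are invented for illustration.
import pandas as pd

log = pd.DataFrame({
    "decision_maker": ["DM 1", "DM 1", "DM 1", "DM 2", "DM 2", "DM 2"],
    "machine_recommendation": ["refuse", "grant", "refuse", "refuse", "refuse", "grant"],
    "final_decision": ["grant", "grant", "refuse", "refuse", "refuse", "grant"],
})

# Rate at which each decision-maker departs from the machine's recommendation.
override_rate = (
    log["machine_recommendation"].ne(log["final_decision"])
    .groupby(log["decision_maker"])
    .mean()
)
print(override_rate)  # a rate stuck near zero may signal uncritical deference
```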
We've got some great questions still coming in on Slido. I'll take one more from Slido and then maybe one or two from the audience. One of the questions from Slido was about the role of private corporations: technology is produced to the specs and demands of the buyer, i.e. the government; at what point should technology businesses say no to avoid human-rights abuses?
NK: Ed, I think this is probably one you should speak to, but I mean, at the beginning [...] I don't know that there's a point – I mean, the answer is: always. So yeah, that's probably all I wanted to say. Ed, you can speak to this more, but it just seems like: always, from the beginning.
ES: I definitely agree with that. I mean, I think some big tech companies have drawn a few lines in the sand. Facial recognition is a really important example of that, where even companies like Meta have said no, we're pulling back from this area of technology because we're really worried about its use, especially by governments – not just, you know, the quote-unquote authoritarian governments, but liberal democratic governments – in ways that can cause really serious harm.
But to my mind, that's a really flimsy protection. You know, corporations legitimately are there to maximise their profit. So to rely, as your only line of defence, on companies sort of pulling back from offering that sort of technology as a service, I think, is really limited. So that is an area that I think is calling out for stronger regulation.
DG: Should we open up to the audience?
AUDIENCE MEMBER: Thanks. So we spent a lot of the discussion talking about technology. But technology is very much on the side of the government, and the imbalance of power between the two parties is so tremendous. What can we do to give the applicant more power in the process? Thanks.
NK: I could start. I think this speaks to the role of the immigration lawyer here and the importance of what they [do]. So this is obviously not directly the applicant but the lawyer. The lawyer needs to push back – I believe it's important that the lawyer push back – on the sorts of technologies that shift the burden of proof onto the applicant.
I mentioned that before as a potential risk. That's what lawyers need to do: remind decision-makers and officials of the legal standards. Now, we do have the issue of different legal standards in different jurisdictions, but there are some fundamentals – the burden of proof, and also the duty of shared fact-finding, which is a duty between the official and the refugee – and lawyers need to ensure those things are happening.
There's also the lawyer's role in accountability and ensuring that [...] questioning costs and thinking about outcomes and impacts, especially on vulnerable people. So yes, the importance of the lawyer lies in pushing back, and in thinking about how to frame the legal argument in judicial review, etcetera.
AUDIENCE MEMBER: Hi, fascinating panel. Thanks to all the speakers. Natasha from UNSW. I've got a question that relates to the scenario. If we look forward to 2033, we can say, okay, the nature of protection challenges will change, it will evolve as it does, and AI will play a role, as it will. And then we ask: who is driving the role that AI will play? And considering that we – as researchers in refugee law – want to move towards the values mentioned at the beginning, of fairness and a greater notion of human dignity: maybe emphasising care alongside securitisation in the way we do refugee status determination, for example, or looking at gender claims in an intersectional way, not just through the narrow lens used now.
What would you say to researchers, then? What would be the key areas for scholarship, or for scholars to focus on, in order to help RSD evolve, using technology, further towards those values?
DG: So before we jump to that, Natasha, you're taking over my chairing role because that was the final question that I was going to put to the panel.
Maybe I’ll– So Natasha's question is the same question we had at the end of the last panel: what's the one thing we need to understand better – what should we be researching and focusing on? And what's the one intervention we should make now in order to avoid the worst and amplify the best of the possible future we discussed today?
NK: Do you want me to start? OK, yeah. So when you asked about the one thing we have to understand better, one thing I used to think about in this regard is that we have to understand that the tech isn't going anywhere. But I actually think we do all understand that now. I say that because when I first started talking about this, there was a kind of horrified reaction: how can you bring tech into something where there's such human vulnerability? But it's clear that the tech is here, and so it's very important that we have a clear eye about that.
But as for the intervention, I actually took inspiration from our keynote this morning: we have to insist on, and help facilitate, refugee voices here. I don't think we should be doing anything like– We can't just use that as a placard. We have to actually say that at all points there has to be refugee voice, and really tap into refugee strengths. [...] I don't know how you make sure that happens, but I think we have to be very mindful of that from now on and [...] not continue to make these decisions or have these discussions without refugees.
DG: Cathryn?
CC: In terms of understanding better, I think there's a lot more work to be done on the institutional design of asylum systems, especially at a regime level – looking at everything from whether and how people claim or register, to RSD at first instance, to an appeal if one exists (it doesn't exist where UNHCR is the decision-maker). We know a lot now about how institutional design has a huge impact on outcomes.
One feature in particular, which has been studied only partly – in Europe but not globally – is that if UNHCR sits on the national decision-making body, which it does in many States around the world, recognition rates tend to be higher. There are also some multi-member decision-making bodies where civil-society members are involved. There's no reason we couldn't imagine refugee status determination being done with refugee community organisations having at least a monitoring role. So I think it's worth thinking a bit more about the institutional design – not just the legal procedures themselves, but who decides, and in whose name.
And maybe the second intervention would be to de-centre individual interviews. I think they're really problematic for applicants, especially the way we tend to institutionalise them in, as far as I'm aware, Australia, New Zealand, and certainly across Europe – crudely, in the Global North. And they often don't extract useful information. Credibility, you know, we use so often as a reflex for describing what goes on in asylum determination, but we shouldn't be making all-things-considered determinations about a human being's credibility at all. It's not pertinent to their asylum claim. So I think we really have to shake it up a little.
Your scenario referred to people not getting a robust individualised assessment, and that made me cringe a bit, because when I think about individualised assessment in this mode, I don't think of it as robust. I think of it as re-traumatising, and very often not a source of accurate and probative evidence. So I really would want to shift the focus more towards what we know about the risks people face, based on the objective evidence available about countries of origin. That can very often lead to at least a presumption of inclusion, and then people should be treated as presumptive refugees if we need to do a more individualised process.
ES: So we're at a real crossroads – which I know is a cliché, but we really are. There are big decisions that governments are making, including the Australian federal government, about the way they use AI and other new and emerging technology for decisions. And there's huge money flowing in, right? There are enormous incentives for corporate Australia and multinational corporations to really supercharge that change. That change is not necessarily a bad thing; I don't want to say that.
But it is unmistakable – and I have to be really clear about this – I can't think of another liberal democracy with a less engaged civil society and academic community. That's not to say there aren't civil-society and academic folk who are very, very engaged, and I'm sure many of them are represented here, but they're often not given, or not taking, a seat at the table. And we need leading civil-society and academic voices to really step in on this, because often all government is hearing at the moment, from time to time, is: well, look, just make sure you give human rights consideration in whatever you're doing over there. We actually need a bit more involvement from those academic and civil-society voices.
SR: Yeah, I'd like to continue on from Ed and really emphasise that in order to do that we really need to ask important questions. First is the context: Are we asking in what context AI is being deployed? Is it a high-risk situation? Does it apply to vulnerable people, as Niamh said?
And secondly, the question that actually perplexes me: do the people who are ultimately responsible for reviewing these decisions understand what has actually been involved – the types of tools, mechanisms or technology that resulted in those decisions being made? That really frightens me when I think about the bench, for example. When would they get the capacity to actually understand the technology involved, so that they can engage in a proper review of a decision that may have relied on an unexplainable AI?
That's a really critical thing that I don't think we engage with enough, because we are too caught up with the technology itself. But I just wanted to go back to my message of optimism, and maybe end with a quote from Orly Lobel in her excellent book, The Equality Machine. She says: “We find ourselves at a crossroads, poised on the precipice of a profound paradigm shift. Let us be inspired by all the positive potential to embrace AI to create a bigger, brighter future. Storytelling matters. If all the stories we hear about technology focus on the harm technology poses to the vulnerable, why would anyone want to enter the field to make it better?”