Episode 2

TITLE: The Dangerous World of Data Analysis

Interview with Dr Andrew Chen, Research Fellow

  1. INTRO: THE NEW FIELD OF DIGITAL TRACING [00:00]


PAULINE: Hello and welcome to Pandemics Reflected. Today, we’re talking to Dr Andrew Chen, a former computer systems engineer and current data ethics specialist. What does this have to do with pandemics? Well, when locking down countries and stopping the spread of a deadly disease, everything. Andrew, welcome.

ANDREW: Tēnā koutou katoa (greetings to you all, in te reo Māori).

PAULINE: So why don’t we start with the questions: what is it that you currently do, and what does that have to do with pandemics?

ANDREW: So my research now sits in the area of digital technology and ethics, particularly public sector use of those technologies. To give a bit of a flavour of the other things I’ve been working on, there have been projects on facial recognition use by police, the Algorithm Charter, which governs how algorithms are being used in the public sector, and also looking at how humans are contributing data to AI systems without necessarily their knowledge or consent. So it’s a pretty wide-ranging set of projects.

ANDREW: And at the beginning of the pandemic, what happened was that there was this idea that we could maybe use technology to help support contact tracing. So this new field of digital contact tracing appeared in March 2020.

ANDREW: A subsequent survey of all of the existing literature before that showed that there were only six previous studies that had been conducted in the area, and that all of those experiments were conducted in hospital settings and other very limited settings. So nothing at the scale that we were suddenly thinking about implementing it on, and no models to say whether or not this would work in a national or international context. And there was one particular paper published in Science which really kicked it all off, because that paper was a modelling study and claimed that if you could get people to isolate within 24 hours of becoming infected, you could effectively suppress the spread of the disease. So that generated a lot of interest.
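
ASIDE: The Science paper Andrew refers to here is most likely Ferretti et al. (2020), which modelled how the speed of isolating contacts affects epidemic control. As a purely illustrative sketch of that intuition (not the paper’s actual model, and with assumed numbers throughout), the core idea can be reduced to a few lines:

    # Toy illustration only; every number here is an assumption.
    R0 = 2.5        # assumed reproduction number with no intervention
    COVERAGE = 0.7  # assumed fraction of infections whose contacts get traced

    def r_effective(delay_days):
        # Assume the transmission prevented by isolating a contact falls
        # linearly with delay, from 100% at 0 days to 0% at 3+ days.
        prevented = max(0.0, 1.0 - delay_days / 3.0)
        return R0 * (1.0 - COVERAGE * prevented)

    for delay in (0, 1, 2, 3):
        print(f"isolate within {delay} day(s): R_eff = {r_effective(delay):.2f}")

With these illustrative numbers, only same-day isolation pushes the effective reproduction number below one, which is the headline intuition behind the sudden interest in digital tools.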

  2. TECHNOLOGY VS. ETHICS

ANDREW: I think a lot of technologists at the time said, well, this seems eminently doable with the technology that we have now, technology that we probably would not have been able to use, say, 10 years ago. But now that everybody has a smartphone in their pocket (and when we say everybody, we actually mean about 80 per cent of the population, which is important to acknowledge), maybe that could help support the contact tracing process.

PAULINE: Great. So that sounds like really cutting edge stuff you were getting involved in. Was that scary?

ANDREW: Yeah. Well, I think there was a lot of uncertainty about how it would go. And I guess the interesting thing is that we saw this trend where all the technology folks were basically saying, you know, of course we can do this, why wouldn’t we do this? And just to give a sense of how complicated the technology is, or isn’t, there were people working out solutions in an afternoon. They weren’t good solutions, but they were, you know, systems that probably would have worked to some extent. It’s not really that sophisticated to build the technology behind it. But those technologists weren’t necessarily thinking about everything else that needed to be done.

ANDREW: So they weren’t necessarily thinking about the fact that, actually, not everybody has a smartphone in their pocket; they would just assume that everybody does, because of course they do and all their friends do. And, you know, they would say, well, I whipped up this form in one afternoon, why can’t the government do it? Well, the government has to think about privacy and ethics and regulation and data storage, and what it is and isn’t allowed to do with all of this information that it’s collecting. I think there was this mismatch of expectations in the early days, with technologists being mad at the government for not acting fast enough, when the government was probably moving as fast as it possibly could. And the fact that it did manage to get an app out in May 2020 is no small miracle, given how well government IT projects tend to go.
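
ASIDE: To make Andrew’s point concrete, here is a deliberately naive sketch (entirely hypothetical) of the kind of check-in tool a technologist could “whip up in one afternoon”. The code really is the easy part; everything the government has to worry about, consent, retention limits, security and access control, is exactly what this version leaves out:

    import csv, datetime

    def check_in(name, phone, location, logfile="checkins.csv"):
        # Append one visit record to a local CSV file.
        with open(logfile, "a", newline="") as f:
            csv.writer(f).writerow(
                [datetime.datetime.now().isoformat(), name, phone, location])

    check_in("A. Visitor", "021 000 0000", "Cafe on Main St")
    # "Works", but stores plaintext personal data with no consent record,
    # no expiry, and no control over who reads the file.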

PAULINE: Yeah, absolutely. And I suppose on the flip side, if they’d done it any faster, there’d have been a whole lot of concern about surveillance technology and ethics on the other side.

  3. FROM SURVEILLANCE SOFTWARE TO DIGITAL CONTACT TRACING [04:28]

PAULINE: So you’re a tech person, or were a tech person, or kind of came from quite a separate discipline. How come you were thinking about ethics? Let’s go back to the beginning. You finished a PhD; is it in computer systems engineering?

ANDREW: Yeah, I did a PhD in computer systems engineering at the University of Auckland. So I was in the Department of Electrical, Computer and Software Engineering, and my research was in using computer vision for what we called video analytics systems, which is basically a nicer-sounding term for surveillance. When I started doing that, I was thinking, oh, well, this is a cool area to explore and have a look at. And gradually, as I was doing the project, I felt more and more uncomfortable about what it was that I was trying to do and the ethical implications of that. So I did end up writing about privacy and ethics in my PhD thesis as well.

ANDREW: And when I finished my PhD I ended up moving sideways into the Faculty of Arts, taking the technical expertise, that good understanding of what is and isn’t possible, and trying to apply it towards governance frameworks and public policy and those sorts of ways of managing the risk that comes with our use of digital technologies. So I had that direct experience with surveillance technologies, and I had been thinking about privacy and ethics and those sorts of things, at the beginning of the pandemic, when there were these ideas about using digital contact tracing.

ANDREW: I think the first article that I wrote was basically to try and tell people that this is more complicated than people might think, that it’s not just a technology solution, that there are these privacy and ethics considerations that both the government and the public need to be thinking about, because if people don’t trust the system they’re not going to use it. And so, yeah, I kind of just naturally fell into that intersection of my skills. It obviously wasn’t something that was on my research plan that I drafted in January 2020, but it felt like it was something that I could contribute towards, particularly the conversation about how we use these technologies and whether or not we should use them in this way. And so I sort of put myself out there to contribute to that conversation.

  4. A DAY IN THE LIFE OF A PANDEMIC RESEARCHER [06:46]

PAULINE: That’s great. Now that’s a big shift. I mean, you changed faculties, you changed disciplines. You even became this go-to person for the media; as you said, you really put yourself out there. So what was that like?

ANDREW: Yeah, it was a period of significant change. And on top of that, you know, we were trying to keep ourselves safe and fed and warm and all that sort of thing during the lockdown. I’m also only part-time with the university, so I had the other job to contend with, and at that time at Koi Tū we were also part of a project to collect policy responses to COVID from around the world. We had a network of over 100 volunteers who were telling us what their governments were doing in their respective jurisdictions, and I was the technical person trying to collate all of that and get it onto a website so that people could see those policy responses.

ASIDE: Koi Tū: The Centre for Informed Futures is an independent and apolitical think tank and research centre. Andrew is not alone in having more than one academic role and project running concurrently. Academia is increasingly a “gig economy”, like the music sector, characterised by flexible, temporary, or freelance jobs. Inger Mewburn, known as The Thesis Whisperer, wrote in 2017 that “it is estimated that 60 to 80 per cent of [academic] teaching is done by casual and contingent labour”. Adrianna Kezar, in her 2019 book, called it the “Gig Academy”. The majority of early career researchers are expected to follow contracts to different countries, juggle multiple contracts for research and teaching, and compete for grants to pay salaries. The current pandemic has had an impact, presenting further difficulties for some, but also opportunity. Later in the show we discuss the Algorithm Charter, and how some AI can be beneficial, for example in processing immigration applications stuck in a pandemic-induced queue. We also talk about how those little pictures of traffic lights testing whether you are human could be viewed as exploiting your free labour.

ANDREW: So there was a lot going on at the time. I think if there’s one thing that being a researcher in a technology field teaches you, it’s that you have to be able to adapt really quickly. And that’s something that I’ve found quite interesting: there are some academic fields where things change slowly. Like, for example, the ancient history is probably still going to be there. And that’s not to say that those academic spheres can’t adapt or don’t change what they’re thinking about. But you know, if your paper comes out a year later than when you did the work, it’s probably still valid. Whereas in a technology field, if your paper comes out a year after you did the work, it’s probably already obsolete; somebody else has already done something better. And so that really, really encourages you to learn quickly and to adapt to new situations really quickly. So I think the skills that I developed there were quite helpful for the pandemic.

PAULINE: Speaking of the pandemic, what were some of the logistics involved in all of that? You’re basically balancing two jobs and a global pandemic, and sort of being a bit of a public face there as well. Do you want to talk us through a few typical days in the life of… what do we call you nowadays, a data ethicist? An ethics research scientist? Is there a term like that for this new thing that you’re a part of?

ANDREW: Yeah, I don’t think there is a good title for what I do, but “technology expert”, or “technology ethics expert”, seems to be what Radio New Zealand has written down for me in their database. Before the pandemic I was already working from home, so it wasn’t too much of a shift there. And the slight sort of misdirect is that even though my affiliation is with the University of Auckland, I live in Wellington, so I was already working remotely anyway.

ANDREW: I think most days I would wake up and not know what was going to happen that day. I would have some things that I knew I needed to do, but things would just change over the course of each day, and there’d be things that I would try to do but I would just be interrupted several times a day by whatever press releases were coming out. You know, 99 per cent of my media engagements were media calling me, rather than me trying to tell them that there was something important that they should care about. Some outlets will flick you an email first and check if you’re available, but most of them will just call your phone number, and you may have some ability to say, “I’m busy, can you call me back in half an hour?” But if it’s any longer than that, they’re probably not going to talk to you; they’ll go talk to somebody else, because they need to file their story.

ANDREW: So, yeah, particularly after the app came out in May and onwards, I worked out that I was dealing with, on average, about three to five media queries a week, though obviously some weeks were more than others. They just kind of come in, and then you have to pick up the phone and answer the best you can. There’s not necessarily a lot of time for prep. The only ones that I was really able to prep for were radio or TV interviews, because they would call you the night before or in the morning to say, “Can you come in later today?” And at that point I might write something down and actually have a think about it. It was different for the print journalists; they would need that comment straight away, the flipside being that they wouldn’t necessarily quote you verbatim for every word that you say. Yeah, anyway, that’s delving too much into the media side. I think the thing that characterises most of the pandemic for me was just having to… be ready for those interruptions in your day.

PAULINE: Yeah, and that’s hard, because if you think about temporality, time was just really weird and skewed during this ongoing pandemic. In some ways things had to be really fast, as they were in what some friends and I call the before times; things have to be fast in technology and media. And then there was this… this limbo of the unknown that people were sitting in. And like you say, you had no idea what each day would bring.

  5. MEDIA PROFILE AND ACADEMIC HARASSMENT [13:15]

PAULINE: I do want to go just a little bit more into the media. We’ve had some interesting conversations about that, and this pandemic has really polarised people.

Ashley Bloomfield and Siouxsie Wiles were just some of the high-profile public servants and academics who faced a lot of discrimination and harassment. What were your experiences with this, particularly being based where you are?

ANDREW: Yeah, I think when I started, it was just about digital contact tracing and the app. And there was, you know, a bit of tension from those who didn’t like that the government was going in this direction. There were a lot of people who didn’t understand all of the privacy-protecting aspects and the effort that had already been put in, and so they were kind of asking me why I was advocating for the use of a tool like this. Initially it was kind of manageable; it was mostly just other tech nerds and people with a bit of an ethical bent asking me the occasional question or so. It was probably more in 2021 that it started to turn a little bit more sour for me.

ANDREW: Firstly, there were people who accused me of being a Chinese agent trying to bring a social credit system into New Zealand, which, at least to my knowledge, is not true. I should say I was born in New Zealand and my parents are Taiwanese, and I have no real allegiance to that part of the world. And then it kind of picked up a bit more towards the end of 2021, when we were looking at vaccine certificates and vaccine passes. I was one of the few people who was really looking at the technology behind how it works, as opposed to the ethics of whether or not we should have them, and I was trying to help people understand how to get their passes and that sort of thing, because not everybody is very familiar with technology. Folks may remember that when the passes were first released there were lots of issues, with too many people trying to get their passes at once, the system not quite coping, phone lines being jammed and that sort of thing. And I was just trying to help people get through that period so that they wouldn’t be too stressed out.

ANDREW: But because of that, the anti-vaxxers picked up that, you know, I was somewhat supporting the message here, and there was some chat about me in their online forums. I’m under no illusions; I’m not a very high-profile academic at all, and there are certainly others who are more high profile than me. I never made it onto the Nuremberg list, the list of people who were going to be tried for crimes against humanity, or anything like that. But funnily, well, funnily depending on your sense of humour, anti-vaxxers were calling for me to be deported rather than executed. Which, you know, on the one hand, I’m glad that I’m not going to be executed; on the other hand, it just shows the sort of latent racism that was also in those circles.

ANDREW: And so I had to take reasonable precautions at the time. I’m in Wellington, and the protests weren’t that far away from me. So I basically took some advice to lock down my social media. I limited my media exposure at that time, chose not to comment on things like the vaccine certificates and vaccine passes, installed a security camera at home just in case… those sorts of things.

ANDREW: And, you know, ultimately nothing’s happened. No one’s come and attacked me or anything like that. But at the time it was pretty worrying and concerning, particularly for the other folks in my house, and I had to kind of just say to them, look, don’t worry too much, I don’t want you to stress, but there’s a risk here, and we just have to take precautions, like maybe locking the door during the day rather than leaving it unlocked like we had before, that sort of thing.

PAULINE: Yeah. So definitely shifting and changing times. And I think that’s a little more than latent racism; I think we’re talking full-blown xenophobia, wanting to deport someone from their country of birth. There’s been a lot of that; we’ve noticed a lot of really blatant xenophobia, and gender issues coming to the fore: sexism, misogyny, a whole range of things coming up. On one hand, it’s almost like these attitudes are suddenly being made visible, perhaps for those who hadn’t noticed before, because of the blatantness of it. And on the other hand, it’s obviously worrying, because is this something that’s on the rise? We’ll be speaking to various people on the show who will be debating this.

ASIDE: For anyone who wants to follow up in writing on some of the excellent points Andrew has mentioned here, there are some pieces in The Conversation, The Spinoff and elsewhere; we’ll pop some of those in the show notes.

  6. DATA ETHICS, TRUST AND THE ALGORITHM CHARTER [18:02]

PAULINE: So within your academic discipline, now that you’ve moved into the data ethics side, what differences have you noticed? Or have you noticed any differences, other than that this is an entirely new kind of field that you’re in?

ANDREW: I mean, if we look at the broader trends, I think there is a much greater awareness of these sorts of issues amongst the public. This isn’t necessarily just from the pandemic. I reflect back and look at this field as a whole, the activists and NGOs that have been acting in this space, and the academics, and we spent the better part of 10-plus years telling everybody that actually Facebook is evil, that it’s, you know, harvesting all of your data and using it against you. If I look at the trend of how that information has flowed, it started with think pieces and people writing about it and talking about it on panels in the early 2010s. But that doesn’t really get things into the public consciousness. Then you start to get Netflix documentaries from around 2017, and everybody’s telling me that I have to watch The Social Dilemma, and I’m kind of going, “I already knew all of this, but OK, good, I’m glad that you are now aware of it”.

ANDREW: And now I think we are in a position where there is general distrust of some of these big tech companies, even though we all continue to use their products, right? And I guess what I am now starting to see is that that distrust is influencing people’s perceptions of government and the public sector as well. There are different relationships there; people have different relationships with big tech companies than they do with the government. Particularly, for example, if you are Māori and you know that your whānau have been discriminated against because the state has acted against you. There are stories of things like the government using census data in the 1800s to seize land from Māori. That influences your ability to trust the government, for sure. And a lot of other people who haven’t necessarily had those experiences are now starting to ask the same sorts of questions.

ANDREW: If a company is able to collect all of this data about me and then use it in this way, and the government is also collecting this data, what could they do? Right? I think that there is just a bit more of that tension there. Having now worked a lot more with government and public agencies, and understanding what they’re doing a lot more, they are doing far less than what most people believe the government is doing. They now have this really big challenge of having to build trust with people. And I think we saw that with the COVID Tracer app a bit. There were a lot of concerns early on that this was another surveillance tool that the government was going to use to track where every person had been at all times. The government and the Ministry of Health had actually gone to great lengths to make sure that that was not possible. They actually don’t care where people have been if they’re not infected with COVID or they’re not a close contact, so they don’t want that data; it’s actually a liability and a risk to them. But it took the better part of a year for that message, that we have developed this app in a way where the data stays on the device and is not transmitted to the Ministry of Health, to actually get through to the public, to a point where people would trust that this was a system that was safe to use. And even today, I still have conversations with people who have that misconception and believe that the government is holding records of every person because of the scanning of QR codes. And that’s a tough challenge for the government to overcome, right? They have to somehow educate people about what it is that they are doing and what it is that they’re not doing.
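
ASIDE: A minimal sketch of the on-device design Andrew describes, with hypothetical names (this is not the actual NZ COVID Tracer code): QR scans go into a diary that lives only on the phone, and exposure checks work by downloading the published locations of interest to the device and matching locally, so nothing about your movements is sent to the Ministry of Health:

    import datetime

    local_diary = []  # lives only on the phone; never uploaded

    def scan_qr(location_id):
        # Record a check-in locally; note there is no network call here.
        local_diary.append((location_id, datetime.datetime.now()))

    def check_exposures(locations_of_interest):
        # The published list is downloaded *to* the device; matching
        # happens here, so the server never learns where you have been.
        return [entry for entry in local_diary
                if entry[0] in locations_of_interest]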

ANDREW: Because in the past, I guess, there has been this default position of not telling people what they’re doing, because if they don’t tell people what they’re doing, then they can’t be attacked for doing it. But the challenge with not having that transparency is that people then fill the gaps with their own ideas, their own knowledge, their own science fiction that they’ve been reading. And that’s not particularly helpful either. I guess the broader point here is that there is a big challenge for government to build and maintain that trust with individuals. And I think we found during COVID that trust is not going to be built through one-way advertising campaigns. Just telling people to use the app doesn’t do anything to build trust. There are much longer-term relationships that have to be established and built with people.

PAULINE: Absolutely. And I mean, you can understand why some people think that, because information is more readily available, and yes, there’s a lot of misinformation, as you say. In certain circumstances, like the one we’re talking about here, every care has been taken that could be taken. But if you look at what’s been going on in the States in the past few years over the pandemic, and if you look at various atrocities that have happened in the past and are happening now, people are very suspicious about how things can be used. Because really, at the end of the day, it just depends, you know, on what country, what government, what state, what policies are in play. And again, there’ll be experts on the show who will discuss that at length. But talking about trust: you mentioned at the beginning of this conversation an algorithm charter that’s been developed and that you’re working on. Is this a new thing due to the current pandemic? Is this something that was in the works anyway? Can you tell us a little bit more about it?

ANDREW: So the Algorithm Charter of Aotearoa New Zealand is something that was developed by Stats NZ and the Government Chief Data Steward. So it’s not my thing; it’s a government thing, and they’ve got a whole bunch of the public sector agencies to sign up to it, including the really big users of data and algorithms like MBIE (the Ministry of Business, Innovation and Employment), the Department of Internal Affairs and even police. And it sets some minimum principles that those agencies need to abide by when they use algorithms: things like transparency, partnership with Māori, thinking about the people who are affected by the use of those algorithms, and so on and so forth. That is, I would say, the first couple of steps in the journey. I don’t want to say it’s the beginning, because people were doing things for a long time before that, but it is a first step towards maintaining minimum standards around how government uses algorithms and data. And there’s a risk framework in there as well, to get agencies to reflect on their use of algorithms and to assess which ones they need to be more concerned about and which ones are more OK.

ANDREW: You know, I think we can really easily fall into this trap of saying, “Well, all algorithms are bad, or all use of data is bad”. But there are some really benign, good uses that can be really safe and well governed and will produce a lot of benefit for people. And we have to find that balance of helping government be more efficient and more effective while not introducing new harms, or if we are, mitigating those harms as much as possible. So the Algorithm Charter, I think, is a first step; there’s a lot more to do. I guess the role of an academic like me is in part to try and help some of these government agencies implement the Algorithm Charter. So I’m on a review board for MBIE; they’re still building up the infrastructure, but they want to have medium-risk or higher algorithms go through this board for advice and oversight. It’s also about monitoring whether public agencies are actually upholding this charter, and sort of calling it out if they’re not. We want to be constructive, right? We don’t want to be shaming agencies for not doing it properly. But you kind of do need that third-party independent view of these things as they go through. And that’s more what I’ve been working on: trying to raise awareness of the Algorithm Charter, and to get people to understand that the government is doing things in this area to try and establish, build and maintain trust, but also providing that sort of oversight and independent check.

PAULINE: When did the majority of agencies sign this charter?

ANDREW: Yeah, when the charter was released in late 2020, twenty-six agencies had already signed up, which was the vast majority, and a few more added on over time. I think at this point pretty much everyone that you would expect to be signed up is signed up. The only notable exceptions would be the SIS (the New Zealand Security Intelligence Service) and the GCSB (Government Communications Security Bureau), and it is probably very understandable why they might not sign up, but I think pretty much everyone else has.

  7. IMMIGRATION, ARTIFICIAL INTELLIGENCE & FREE LABOUR [26:50]

PAULINE: OK. And you mentioned some benign AI that could have some really good benefits. Do you want to give us a few examples for those listeners who perhaps aren’t familiar with it?

ANDREW: Yeah, I can pick one example, which is a little bit controversial, but it is one that I kind of understand, and that’s immigration. One of the huge challenges of immigration is the manual review of all of the applications. There are a lot of applications that come through, and it really sucks as an applicant if you have to wait nine months, a year, two years, three years to have your application reviewed. And it’s not taking that period of time for any reason other than that there is a huge workload and a capacity struggle for the organisation, right? There are lots more people applying, even though the levels of people coming in may not be changing that much. COVID has made a big difference there, of course, but if we talk about 2018, 2019 numbers, application volumes were increasing. So we could use technology to identify individuals who are lower risk, whose applications, once a human was looking at them, would probably go through with a “Yes, this person can come in”.

ANDREW: But that application is not being processed, because it’s sitting in the middle of a queue, right? So if we can use technology to screen those applications and pick out the ones that are lower risk, then the humans only have to review the medium and high-risk applications, and that helps you with your capacity challenge. That’s one example of how we might be able to use technology to do things better.
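
ASIDE: A hedged sketch of the triage idea Andrew describes, with entirely invented placeholder rules (real risk criteria would be far more involved, and would themselves need ethical review): software sorts applications into tiers so that human reviewers spend their time where it matters:

    def risk_tier(application):
        # Placeholder rules for illustration only.
        if (application["documents_complete"]
                and application["prior_visa_compliance"]
                and application["flags"] == 0):
            return "low"       # candidate for expedited processing
        return "medium/high"   # full manual review

    queue = [
        {"id": "A1", "documents_complete": True,
         "prior_visa_compliance": True, "flags": 0},
        {"id": "A2", "documents_complete": True,
         "prior_visa_compliance": False, "flags": 2},
    ]
    fast_track = [a for a in queue if risk_tier(a) == "low"]
    manual = [a for a in queue if risk_tier(a) != "low"]
    # Humans still sign off on fast_track cases, just much sooner;
    # only `manual` needs the full, slow review.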

ANDREW: The controversy, of course, is around how accurate that technology is. Is it going to make any mistakes? What happens if somebody who should have been automatically put through now has to go through a manual process, and does that disadvantage them? There are various ethical concerns that we have to work through and mitigate. But on the face of it, at a high level, I think the aim of trying to improve the applicant experience by speeding up applications where we can is probably a good thing. And to be honest, it’s probably expected by most applicants, who would say, “Hey, we should have the technology to be able to automatically process a lot of this, so why aren’t we doing that?”, right?

ANDREW: And yes, I think that’s just one example of where technology can be used in the public sector in a manageable way.

PAULINE: And it’s a good example as well, for a podcast like this that’s in part looking at the behind-the-scenes lives of people who do research around pandemics, because for a lot of research fellows, a lot of academics, part of the model is that we work in different countries, in different places, often on temporary visas. There are a lot of PhD students and early career researchers and research fellows who are stuck in limbo, with visas not being able to be renewed, or being renewed with serious concerns about whether they could return home, whether they could finish their studies, whether they could finish the project they were working on, whether they could contribute within these 0.2 FTE fellowships or within these limbo kinds of positions. So certainly very relevant there.

PAULINE: Another thing that was really quite interesting is that you were talking about how humans can contribute to AI without realising it, and that’s one of the newer projects that you’re going to be moving into. Is that entirely related to the Algorithm Charter, or is this also a segue?

ANDREW: Yeah, it’s a slightly different project. We’re funded by the Transdisciplinary Ideation Fund at the University of Auckland and we’ve got quite an eclectic group of academics: folks from the School of Music, Architecture, the Business School, Computer Science, Law and me, and we also have a research assistant who is actually a philosophy PhD student down at Otago. I think the purpose of that fund was to try and get groups of people together from very different academic perspectives to work together on projects that would benefit from having different perspectives. And this project that we’re working on now is around humans contributing data to AI systems without necessarily knowing about it.

ANDREW: And the sort of canonical case that we keep coming back to is the CAPTCHA system. So if you are using your computer and you want to go and buy some tickets, for example to a concert, chances are you’ll be presented with what is called a CAPTCHA, which is a task that is meant to ascertain whether you are really a human or a computer bot. Once upon a time that was “type in the numbers and letters that you see on the screen”. That has now evolved to “here are some photos; click on the boxes that have traffic lights in them” or “click on the boxes that have pedestrian crossings in them”. And most of us just click through it. We do it because we’ve been told that we have to, and we really want to buy those tickets. It’s no coincidence that those image-based tasks all seem to have something to do with roads: humans are actually labelling training data for autonomous vehicle systems. When you’re presented with a panel of images, there are some where they already know the correct answer and some where they don’t. And if you fail the ones where they know the correct answer, then they assume that you’re a bot, they prevent you from getting access, and they don’t use your data.

ANDREW: But if you get those ones right, then for the ones where they don’t know the answer, they take your answer, combine it with the answers from hundreds of other people, and come up with a weighting to say, “OK, this is a traffic light” or “this is not a traffic light”. And so what we’re exploring there is that the person clicking on those buttons hasn’t necessarily given informed consent. They probably don’t know that their data is going to be used to train an AI system; they’re not informed about it, because most people don’t know that this is what’s happening. And probably more worryingly for us, that value has been going to some company that has all of this data and is selling it to autonomous vehicle companies, and no value really is coming back to you as an individual.
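
ASIDE: A simplified sketch of the mechanism Andrew describes (real systems use more sophisticated confidence weighting than the bare majority vote shown here, and all names are hypothetical): tiles with known answers verify that you’re human, and the answers that verified humans give on the unknown tiles are pooled into training labels:

    from collections import Counter

    GOLD = {"tile_1": True, "tile_2": False}  # answers the system already knows
    votes = {}  # pooled human answers for the unknown tiles

    def submit(user_answers):
        # Fail any known tile and the session is treated as a bot;
        # its answers are discarded.
        if any(user_answers[t] != GOLD[t] for t in GOLD):
            return False
        # Otherwise, answers to unknown tiles become label votes.
        for tile, answer in user_answers.items():
            if tile not in GOLD:
                votes.setdefault(tile, Counter())[answer] += 1
        return True

    submit({"tile_1": True, "tile_2": False, "tile_3": True})
    submit({"tile_1": True, "tile_2": False, "tile_3": True})
    label = votes["tile_3"].most_common(1)[0][0]  # pooled label: traffic light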

ANDREW: And we’re sort of reconceptualising these relationships. You know, if we talk about Facebook, we often talk about users giving up their data, and there being a value transfer. In this CAPTCHA case, we can reconceptualise that as work or labour: the person is actually doing work for somebody else and not being fairly compensated for it. We are a transdisciplinary group, so everybody’s thinking about it in slightly different ways, but the way that I think about this project is to ask: if we start to talk about these sorts of relationships as being work or labour relationships, does that unlock a different area of literature and philosophy and thinking that can help us talk about these challenges in different ways?

ANDREW: And so, for one example, you now have all of these people who are doing work for an overseas company, and one person doing it by themselves may not be much. But if you take it at a national level, you might say, well, actually there are three million people who are now doing a little bit of work for this company overseas on a regular basis. Should the government be worried about that value transfer? Is there now a sort of trade deficit that has been created, because we are providing labour to them and they’re not necessarily providing much value back, and not necessarily paying tax on it? Is that something the government actually cares about, right? Being able to talk about it in those sorts of ways starts to create different conversations and makes it more meaningful for people in different contexts.
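
ASIDE: To put a rough number on Andrew’s aggregate-labour point, here is a back-of-envelope calculation in which every figure is an assumption chosen purely for illustration:

    people = 3_000_000        # people in a country solving CAPTCHAs
    captchas_per_year = 50    # assumed per person
    seconds_each = 10         # assumed time per CAPTCHA
    wage_per_hour = 25.0      # assumed NZ$ value of an hour of labour

    hours = people * captchas_per_year * seconds_each / 3600
    value = hours * wage_per_hour
    print(f"{hours:,.0f} hours/year, roughly NZ${value:,.0f} of unpaid labelling")
    # About 417,000 hours, on the order of NZ$10m a year: trivial per
    # person, but visible at the national scale policymakers care about.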

ANDREW: So as a policymaker, you might say, well, we’re not really worried about you as an individual having to click on some boxes; that only takes you a second. But if you can say that, at an aggregate level, there is actually potentially an impact on productivity and on how we value our trade relationships with companies overseas, then suddenly it becomes something of interest to a policymaker. This project is in its really early stages, and we’re kind of just exploring all of these different ideas. It’s really, really fascinating for me as somebody who has switched disciplines, and a lot of the other people in the group are also between disciplines; they are naturally, well, interdisciplinary themselves already. And yeah, we’re having some really fascinating conversations about it.

PAULINE: That’s a really good example as well of some of the openings that this current pandemic has presented, in that there are these bigger questions being asked now about how we work and where we work and why we work and what our labour is. Why should we have to go into an office in a particular country, with immigration, et cetera, et cetera? Why can’t we do things like, for example, you being based in Wellington and working for Auckland? I’ve done that as well; I’ve taught for Wellington while based in Auckland, and I’m currently based at the University of Auckland in this role. So there really are bigger questions about the way we do things and why we do things that pandemics, and these massive cultural shifts, can reveal over time.

  8. FINAL THOUGHTS AND TRACK N’ TRACE [36:26]

PAULINE: Andrew, do you have any final thoughts to leave us with before we wrap up for today?

ANDREW: Yeah, I had one thought. Earlier on we were talking about the way that information moves and influences people, and trust and that sort of thing. One of the things that I found really fascinating throughout the pandemic was the globalisation of media, and the way that people in New Zealand were weirdly influenced by policy decisions in other countries. For a really concrete example, it was really weird to see people talk about “track and trace” in New Zealand as a term, and to see “track and trace” being held up on protest signs and that sort of thing. Track and trace in New Zealand is a term that is really only used by New Zealand Post, to help you track your parcels. But it was the term that was used in the UK for their digital contact tracing purposes, and it was really clear that there were lots of people who were being influenced by the language of the UK policy approach rather than by what we were doing here in New Zealand. And I think that’s a really big challenge for governments like ours, where we are relatively small and the people in our country are consuming media in a global way. It’s really easy for somebody to watch a video from, you know, a late-night talk show host in the US about policy that’s happening there, and, if they’re not really critically thinking about it, to kind of just assume that things might be the same here. I think there is a challenge there for the population being informed, and in order for them to trust, they have to be informed correctly. So, yeah, that was just an observation. I don’t know how we solve it; I’m not going to pretend that I have any solutions. But that was one of the significant challenges with the digital contact tracing work: people would have preconceptions or misconceptions that were actually from overseas, and I don’t know how we fix that.

PAULINE: It’s a difficult one, because I think in some cases those thoughts are being intentionally seeded amongst particular groups to try and cause dissent, et cetera. And again, we’ll pick that up in one of the later shows. I must say that track and trace very much amused me, especially as there have been many delays in packages and parcels worldwide. So it’s quite something to think about, especially if you consider the idea of tracking and tracing people as a product, as a parcel, as a thing, a commodity, which then leads back to some of the surveillance stuff you’ve been talking about.

PAULINE: Well, that was Dr Andrew Chen, the tech ethics expert whose career and research certainly pivoted massively due to the current ongoing COVID pandemic, and who has some really good insights into what’s coming up. Andrew, thank you for your time today.

ANDREW: Yeah, thanks very much for having me.

  9. NEXT SHOW – DR MARIA ARMOUDIAN, POLITICS AND INTERNATIONAL RELATIONS [39:35]

PAULINE: You’ve been listening to Pandemics Reflected, a series of conversations with scholars at the University of Auckland’s Pandemics Research Hub. Next week we’ll be talking to Dr Maria Armoudian, a senior lecturer in politics and international relations, the author of three books on law, media and human rights, and a singer/songwriter. We will delve into US-New Zealand relations during the COVID pandemic, modern slavery and mining, and Maria’s upcoming trip to Armenia to participate in a conference that deals with genocide and justice for indigenous people.