The Digital Deal Podcast

Truth Makers (Part II)

Angie Abdilla Season 1 Episode 2


In this episode, we discuss how the data sets used in machine learning adopt hegemonic discourse and existing biases and, what's more, amplify them, and we talk to some of the people out there who are fighting back. Our guest is Angie Abdilla - a Palawa woman, founder, and director of Old Ways, New, whose methodology, Country Centred Design, utilises Indigenous knowledges in the design of places, experiences, and critical technologies.

Resources:
Atlas of AI by Kate Crawford
Out of the Black Box: Indigenous Protocols for AI by Angie Abdilla, Megan Kelleher, Rick Shaw, Tyson Yunkaporta
Beyond Imperial Tools: Future-Proofing Technology through Indigenous Governance and Traditional Knowledge Systems by Angie Abdilla

Host & Producer: Ana-Maria Carabelea
Editing: Ana-Maria Carabelea
Music: Karl Julian Schmidinger
________________________

The Digital Deal Podcast is part of European Digital Deal, a project co-funded by Creative Europe and the Austrian Federal Ministry for Arts, Culture, the Civil Service and Sport. Views and opinions expressed in this podcast are those of the host and guests only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the European Education and Culture Executive Agency (EACEA) can be held responsible for them.


Ana-Maria Carabelea: Welcome to The Digital Deal Podcast, the series where we talk about how new technologies reshape our democracies and how artists and critical thinkers can help us make sense of these changes. My name is Ana Carabelea, and for the second part of this episode, I'm joined by Angie Abdilla. Professor Angie Abdilla is a Palawa woman, founder, and director of Old Ways, New, whose methodology, Country Centred Design, utilises Indigenous knowledges in the design of places, experiences, and critical technologies. Angie is also a professor at the Australian National University's School of Cybernetics and co-founded the Indigenous Protocols and Artificial Intelligence working group.
In the first part of the episode, we spoke to Kasia Chmielinski and Ndapewa Onyothi about the work they do to address the issues with data sets. Angie, I will start by asking you the same question, which is: what are the issues with data sets from your point of view?

Angie Abdilla: I don't know that data is necessarily the problem. I think it's the people that are the problem. I think this whole hysteria around the lack of governance and regulation, the lack of transparency, and the idea that the machines are sentient, which they're not - all of this is really forgetting the simple truth, which is that there are humans that are responsible every step of the way, whether they acknowledge their accountability and responsibility, and whether their practices are ethical, or whether they even care to ask the question: how do my decisions affect the system that I'm working to create and the impact that will have on communities and environments? That's the question all of us need to be really cognizant of when we're working with technologies, especially automation, where currently there are a lot of opportunities to overly complicate things, to create some smoke and mirrors about the real problems, which often come back to the business models.

Ana-Maria Carabelea: Acknowledging that there might be gaps in the data is just the first step in getting algorithms to see or make sense of the world better. Then there's also the next part, which is interpreting the data and searching for patterns, and sometimes perhaps seeing something that's not there just because it needs to be seen. Can we unpack these two processes that are part of the algorithmic production of knowledge and how that might impact what we define as truth?

Angie Abdilla: Yes, I think the question that I always ask is: What did our old people do? And when we're talking about data, I think a lot of people assume that it's non-cultural, that it's non-biased, that it's just plain data. And in fact, there's a whole lot of other things going on within data. If we think about various different philosophical approaches to knowledge and power, information, truth, there's a lot I think that we need to come back to before we start assuming that there are no implications within these sort of benign data sets. So when I think about data, what I'm thinking about always is: Who does it belong to? How was it captured? What were the questions asked? Who was asking the questions? Were questions even being asked? I want to know everything about that data set before I'm even willing to touch it. And so it makes my world very difficult, because trust me, there's not a lot of good data sets out there, and when you start asking those pertinent questions: Where does this come from? What's its origin story, its provenance? Who was involved, and why? If it was funded, by whom and why? How else has it been used and why? Whose data is it, and have they given permission? For what purpose? And is the intended use aligned with that permission? When you start asking all of those questions, it makes it very difficult to even start the process of designing any particular system. So, I think it's difficult, it's really, really difficult to get good data, clean data, ethical data, safe data. But I also think that it's an incredible opportunity to start developing some really good practices around the collection of data because we do need good data. So how do we do that? And how do we create culturally safe data sets or data sets that have been created with more-than-human in mind? That's the question that I'm always coming back to.

And I guess the other thing is, what I'm starting to learn is that the common practices within the domain of data sciences are just really problematic. What we need to think about is the broader systems. What is the data for, what's its business model, and what's the intention of this form of automation? Who's benefiting, how, and why? And how can we ask much more critical questions about the broader automated system, but also about where that system is going to end up? Is it going to be part of a series of products that we're unaware of? And so I think the practices around how we do what we do is what matters. And that's what all of our work has been about. What are the methodological practices, how do we do what we do? And then, in the process, start defining best practices - a best practice that is deeply rooted in a cultural set of protocols.

Ana-Maria Carabelea: When you don't know these things, it can seem like the data processing will give you some generic information about how the world works. When you don't know what those questions were, you don't have answers to the questions you were just asking. Whereas when you start asking those questions, defining them, and answering them, the information that the data gives you, in the end, becomes situated, and it doesn't have this universality attached to it.

Angie Abdilla: I guess it's untethered. It has no relational context or origin story. Yeah, relationality, I think that's one of the biggest issues that I have as well. Everything in our culture has to always come back to relationality and responsibility. The cultural protocols that are part of our culture ensure that there is transparency, accountability, and reciprocity baked into everything. And so there's always provenance or a chain of information or connection to how this information has come to you. And you're only given that information when you've proven that you're responsible enough to caretake for it. That comes through a bunch of different initiation processes over time, over deep time, so there's none of this kind of presumption that information or knowledge is just free for all. That does not exist in our culture for a good reason, because certain information and knowledge need care, and so you have to be able to prove you're at a certain level of cultural authority, but also responsibility, to be able to have that knowledge. Being an oral culture, that's how our knowledge has existed over 60,000 years. This is incredibly important when we think about information architecture, systems and knowledge transfer, and different sorts of protocols - like the programming protocols that exist within these systems - and how they function as particular parts of a whole system. But the system itself has no capacity to identify those origin stories or the transparency factor. Imagine if we had AIs - a bunch of different types of AIs - that work in the way in which these different knowledge systems operate. So, in a way where knowledge - not just bits of information, but deep knowledges that can then be organized in a cohesive way - is used to automate a variety of things. It's possible, but I think there's a lot of care that's needed in that process. And who's involved matters: have they got the right capacity to understand what the questions are? What are the criteria for success? Ultimately, because of the business models, people are typically always driven by productivity, which is so much a part of the problem. Of course, there are a huge number of issues within the current practices of generating data and the utilization of those existing data sets. But I think the bigger problem is the broader system. How do we start asking better questions about not just the data, but the broader system?

Ana-Maria Carabelea: So essentially we have to go back and rethink innovation, and redesign these systems so that they incorporate different ways of doing things, different ways of processing, and different ways of producing knowledge.

Angie Abdilla: Yeah, I mean, imagine if we had some sort of assurance in the types of automated systems that we were creating, imagine that the knowledges that they're responsible for generating had assurance built in. And assurance is a tricky word in this domain. I guess from an Indigenous cultural perspective, that's how our culture has existed. If you think about the type of governance that has existed within our country - it's polycentric, many different centers. For a type of technology like AI - which is not one technology but a whole disparate set of different components in a complex stack that is getting more and more complicated - governance and regulation, as people are talking about them now, is almost ridiculous. How do you govern something that is not tethered anywhere? I'm talking about nation-states and the types of legal systems that are currently at play. Think about our culture, our history, and our law, which has existed for over 60,000 years and has existed across this complex map of 250 different nations with no segregated law enforcement. It's a law of the land, but it's also a system of law that has had the capacity to regulate all human behavior in a way - how we relate to each other, and how we produce incredible science and technological ingenuity. And that's all happened within a system of law and regulation and governance and a code of ethics that comes from these principles of social and environmental sustainability that are baked into everything. There are no language terms for these things because it's like reciprocity. So coming back to the system: it's not just data that's the problem, or the practices within it. It's a bigger systemic issue, and it always comes back to humans.

Ana-Maria Carabelea: Tell me a bit more about the work that you do and how Country Centred Design works. And what is that responding to?

Angie Abdilla: Well, I guess the intention is that this methodology - this process of four different cycles - supports a different way of approaching systems. But it's also supported a whole bunch of different types of projects, and one of them is an initiative called The Indigenous Protocols for AI. And over five years, we've done a whole bunch of different things within that initiative. The next phase is a documentary, and we're exploring and talking to a bunch of different Indigenous knowledge holders and traditional custodians and elders around the world about how their different traditional design and engineering practices for traditional forms of automation could help us solve some of these complex problems that we are facing within the current domain of AI. But we're also acknowledging that the original foundational building blocks of AI as we know it now are, in my opinion, really problematic.
So I'm also really mindful that we don't continue to pour our time and energy into solving essentially very extractive, colonial forms of technology. We're mindful of this research being able to support new forms of automation, so we're exploring different cultural practices and different design and engineering principles that are innately connected to this symbiotic connection to country and those various, complex knowledges that are all about the social and environmental sustainability of all things, all time. How can those knowledges produce a different form of automation? And could that in turn produce different types of outcomes, outcomes that also have those same principles embedded within them? Part of that is not just taking their knowledges and then subsuming them into particular computer science practices that are not aligned. It's about really interrogating how we do what we do. So, of course, how information, how different traditional knowledges translate into data is a big question, and I don't know if we've got the answers to that yet.

Ana-Maria Carabelea: Probably worth continuing to look for, though. Kasia [in part one of this episode] also mentioned that perhaps more is not better and that huge amounts of data might not necessarily be better, but we just need better data, or as you say, better questions about the data, better interrogation around what that data is and what it's for and so on.

Angie Abdilla: I would like to see some form of licensing or some sort of assurance that the data set has been created by this person, within this context, with all of those questions answered, so I know whether that is a data set that is not going to meet my needs, that I'm not willing to touch, and that I don't want to be part of the systems that I'm creating in any way. This is something that's come up in Australia. I've been involved in some discussions with the federal government about what its position on regulation is and what it should be. A couple of ideas that were proposed were the licensing of data sets and also the licensing of models. I think it could be a really interesting idea to give assurance. But I'm also really interested in what it would look like if we were to develop a culturally safe data set. What does that mean and what does that look like? And what types of culturally safe data sets would be useful? Over the years, we've spent so much time searching for good data, and typically the data sets that we feel comfortable using have been generated by Indigenous rangers. It's usually through CSIRO - the largest scientific body in Australia - or through the rangers' activities, and so they are typically good data sets because they are documented in language as well. I think the problem often comes back to these different worldviews. The way we understand the world is not through these silos of knowledges in our culture. Within our worldview, everything has relational connection, so we don't have these separate domains of sciences and health and what have you. So, when we think about data, it would be a really different type of information and information management system that would be appropriate if we're thinking about how knowledge exists within our worldview. This, I think, is a really great opportunity to explore. How could automation occur differently if we had these different types of information systems, information architectures, and data sets that have relationality and are not just relevant to one particular, very limited area of knowledge, but actually have a much more complex relational nature to those knowledge sets?

Ana-Maria Carabelea:  I’d be curious to know what support you would need for your work to flourish.

Angie Abdilla: Well, it's an interesting one because I've asked this question many times. And it's a really problematic one because it comes back to the fundamental problem of these technologies - it's the growth problem. Many years ago, I would have originally assumed we need lots of money and we need industry support. And when I say industry support, I mean the big tech companies actually starting to listen to this and taking some of our research on board. I don't think that anymore. I've had a very big reality check about growth. With unsustainable growth, you lose cultural integrity, and that's not negotiable for us. So it's an interesting one. I think, for our work to flourish, what we need is more of the community being part of this work, and that's what we're focusing our efforts on - supporting the next generation of Indigenous technologists to be able to explore and experiment and test things in different ways, in culturally safe places and environments that enable us to develop a different way of working and different types of practices. But I think that has to be done with so much care, and there's a lot of risk that if it's in the wrong hands, it could be exploited for the wrong reasons. And we've had many situations where we sort of had to grapple with that.
But the other really interesting part of that is that cultural knowledge, really important cultural knowledges - I don't want to say this definitively, but most of the time - they're protected by different cultural protocols. I could be given a whole bunch of really important sacred knowledge in a format that could be really incredible if I was aware of the other associated knowledges that are critical to understanding a particular algorithm. But without all of that broader contextual knowledge, it doesn't mean anything. And so, in a lot of ways, sacred knowledge is sacred for a reason. It's only communicated once you've got the right cultural authority to be able to care for it. What's incredibly important to learn from all of this is that if we want information to be cared for, and if we know that certain knowledges are sacred, then there are different ways of protecting them. And in our culture, it's embodied and embedded in the muscle. And being an oral culture and a performative culture, it means that it's in the language, and it's in the stories, and it's in the dances, and it's in the songs. And so, that's the only way that it can be known, in the way it lives in you. Which is beautiful, right?

Ana-Maria Carabelea: I was thinking about this: the computable versus the things you can't compute, and all the things that are being lost in that translation. We probably think that everything that's on the Internet is everything. But if we do get to that step where we recognize, okay, no, there are so many pockets of reality that aren't online, there's so much more out there, and this is not the entire reality - once we start integrating those aspects, is that still enough? Is there not so much that's being lost in that translation, in trying to compute everything and make sense of things in that way?

Angie Abdilla: Well, it's a really interesting one, because I think we know what's true in the end. There is resonance with something that is part of our culture. Like when I've been lucky to be part of different ceremonies, there's something really powerful in those processes. Those are cultural practices; there's truth in them. And that's, I think, something really interesting that we get further and further from. The more that technology subsumes this information and data tries to purport the creation of new knowledges, the bigger the gap we're creating, in a way. I think there's the potential for it to be quite cataclysmic, that gap between true knowledge that has a relationality and exists in a different cultural paradigm, and the untethered nature of automation.

Ana-Maria Carabelea: At the end of the episode, I usually ask all my guests to send listeners off with some homework and name an article, a research piece, a book, an artwork, or an exhibition that you've recently seen or read and think others should too.

Angie Abdilla: Look, I think one of the most seminal books that I've read recently is Atlas of AI by Kate Crawford, who is just phenomenal. And she's making a new work once again, also interrogating the relationship of power within these systems. So I'm really excited to be a co-resident with her alongside a couple of other people - there are four of us who are residents at the School of Cybernetics at ANU, the Australian National University. And all of us are doing different types of work, but I think what Kate's already contributed is quite phenomenal, actually. That book is quite something. Have you read it?

Ana-Maria Carabelea: Yes. I like that it just underlined the materiality of it all because we tend to think it's just the Cloud, this intangible thing, just flying around, when actually it's very resource intensive.

Angie Abdilla: Absolutely. And that's the thing that no one's really talking about, the actual environmental costs. It's staggering. And the data farms being developed under the sea - it's baffling to me, absolutely baffling. And the seemingly invisible nature of it all is part of the problem, I guess. It's incredibly important to be far more aware of how these tools are having an effect on our lives and of the slippage. There's a lot of slippage going on because it's happening incrementally. We're not really able to see the effects of it. But it's so monumental when you think about the scalability of this technology and how ubiquitous it is becoming through 2nd, 3rd, 4th, 5th tier service providers that are seemingly invisible. Yeah, it's mind-blowing.

Ana-Maria Carabelea:  That's it for today. A big thank you to our guest, Angie Abdilla, and all of you for tuning in. The next episode comes out at the end of March when we'll be talking to Fabian Scheidler, Nina Jankowicz, and Marta Peirano. The Digital Deal Podcast is part of the European Digital Deal, a three-year project co-funded by Creative Europe. If you want to find out more about the project, check out our website, https://ars.electronica.art/eudigitaldeal.
