The New Zealand Tech Alliance is a group of independent technology associations from across New Zealand that work together to ensure a strong voice for technology.
By Mitchell Pham, Chair of the Digital Council for Aotearoa New Zealand.
Last week we started an intense period of analysing the findings from our workshops on trust in automated decision making. People from across Aotearoa have shared their experiences, hopes and challenges around automated decision making through our participatory design workshops.
Our job now is to honour what they told us.
We’ve sought out those who are not often heard (or who may not feel heard) in digital decision-making: Māori, Pasifika, people with disabilities, and ethnic communities. We believe that in designing for them, we design for everyone.
Members of these communities were at the centre of our minds last Friday as Council members, the Toi Āria research team and Brainbox started digesting and making sense of what we’re hearing and noticing. To help us revisit the voices of those we heard from, we placed ourselves on the Comfort Board — a tool we’ve been using to help people articulate their levels of trust in relation to automated decision making scenarios.
Some things are very clear
Across the workshops, people told us they want:
- algorithms to support rather than replace human roles in decision-making
- a seat at the table when algorithms are created, used and monitored in relation to their own people and matters that are important to them
- a non-deficit approach to using algorithms, with a focus on what’s important to them, not what’s “wrong” with them
- algorithms that consider a wider context that draws from real whānau stories — mana enhancing kōrero for whānau to contribute to
- transparency around the data that feeds algorithms, what algorithms are doing, who is making the decisions that affect people’s lives, and what will be done with the data they use and collect
- a human-centred approach to using algorithms, with those humans having clear, good intent
- algorithms used only for tasks that don’t require human discretion and empathy
- ongoing monitoring to ensure algorithms remain relevant and appropriate for use
- to be given some level of control regarding algorithms that may use their data.
Some things are not so clear
Of course, if the answers were easy, we’d already be acting on them. Algorithms can provide enormous benefits alongside the potential for enormous harm. Humans and computers can both make harmful decisions, and there may be times when it’s better that a computer makes a decision free of human bias. Then there are trade-offs between accuracy and fairness: an algorithm can accurately record data, but is it fair that the data is recorded or used at all? When and how do we record what matters to people rather than what’s the matter with them? When and how do we show progression rather than a moment in time?
As we’ve been exploring trust in automated decision making, we’ve also been questioning our own questions. Why are we focused on building trust? Is trust enough? Human rights and ethics also come into play — just because people trust something, doesn’t mean it’s ethically sound. If we reach a state of improved trust in automated decision making, will that shift in trust have further effects that we haven’t even anticipated yet?
A pathway through
The Council has a path for working from these tensions and dilemmas towards some realisable and tangible outcomes.
- As we start to shape what could be, we’ll be looping back into those communities represented in the workshops to unpick the possibilities.
- We will be doing a deep dive with Māori Data Sovereignty experts to listen to, explore and understand the historic and current issues of trust and trustworthiness in automated decision making from te ao Māori perspectives.
- We’re keeping a watching brief on what’s happening in other jurisdictions, especially the Centre for Data Ethics and Innovation in the UK and the Directive on Automated Decision-Making in Canada.
- We’re examining the progress of existing policies and guidelines in Aotearoa (for example the Algorithm Charter) — where they’re getting traction, where they’re not, and why.
One of the most important pieces of advice we’ve received from New Zealanders, and a key learning from overseas jurisdictions, is that communities should be involved both in determining solutions and in implementing them.
As a Council we’re committed to being that bridge between communities and decision makers.
We’re aware that both would like to see quick action. This will need to be balanced against achieving traction and action that is systemic and long-lasting. We don’t believe in picking the low-hanging fruit and thinking the job is done. We believe in nurturing the tree that grows the fruit, and the environment around it. This is the message we’ll be taking to the government.
- This week we continue intense analysis of the findings of the workshops on trust and automated decision making. We’ll be looping back into those communities who were involved in our workshops and connecting with Māori Data Sovereignty experts. The Disabled People’s Coalition will have input into our recommendations and will be peer reviewing our report to the government.
- The full Digital Council meets on Friday 23rd October to further work through the trust and automated decision making workshop findings and ways forward. We’ll also be considering our digital inclusion workplan for next year.
- Last week we met with the Data Ethics Advisory Group to understand their role and work, and where the Council can support them.
- Next Tuesday we convene an online Pasifika Town Hall seeking the perspectives and experiences of the Pasifika community around digital inclusion.