AI could revolutionise environmental planning – if we don’t get trapped in the ‘iron cage of rationality’
- Written by Matthew Henry, Associate Professor in Planning, Massey University
Increasingly low-cost environmental sensors coupled with AI-powered analytical tools dangle the promise of faster and more insightful environmental planning.
The need for better decision making about the way we use ecosystems and natural resources is even more urgent now because consenting changes proposed under the Fast-track Approvals Bill[1] require faster assessments.
As part of our research at Kuaha Matahiko[2], an open-access and collaborative project to compile data about land and water, we found a real thirst to engage with AI among iwi and hapū (tribal) groups.
Overstretched environmental kaitiaki (guardian) organisations saw the possibility of AI helping to integrate fragmented environmental datasets while also quickly and cheaply improving analytical capacity.
In response to this need, the Kuaha Matahiko project developed a working AI trained on environmental data from Aotearoa New Zealand. This suggests we are reaching a tipping point, where bespoke AI is quickly becoming a realistic option for kaitiaki groups, even small ones.
However, care is needed. Prior experiences[3] show algorithm-powered systems often lock us into pathways that reproduce existing inequalities in data gathering and foreclose imagination about outcomes.
These problems tend to arise from two interlinked issues: a legacy of ad hoc data gathering and an often misguided belief that larger data volumes mean greater accuracy.
The ‘precision trap’
First, useful AI systems require rich data at speed and volume. The Parliamentary Commissioner for the Environment has warned successive governments that New Zealand’s environmental data system is ad hoc, opportunistic and under-resourced[4].
Existing environmental databases largely reflect the priorities of state-led agricultural science and recent efforts to monitor its environmental impacts. Our environmental data also suffer from a systematic ignorance of mātauranga Māori[5].
Long-running environmental datasets are valuable. But their coverage of places and problems is patchy, and we cannot go back in time to redo the data gathering. Recognising the gaps and biases created by a history of uneven, exclusionary data generation is critical, because it is those data (and their embedded assumptions) that will be used to train future AIs.
An agricultural robot working in a smart farm, spraying fertiliser on corn fields[6]
Second, AI promises certainty and precision. But a study investigating precision agriculture[7] describes the risks that emerge when we mistake the high volume and granularity of big data for accuracy. An exaggerated belief in the precision of big data can erode checks and balances.
This becomes an ever bigger issue as the murkiness of algorithms increases. Most algorithms are now effectively unintelligible, owing to technical complexity, user misunderstanding and the deliberate strategies of developers. That opacity blinds us to the risks of inaccuracy.
By not paying attention to the opacity of algorithms, we risk falling into a “precision trap”: belief in AI’s precision translates into an unquestioning acceptance of the accuracy of AI outputs. This is dangerous because of the political, social and legal value we give to numbers as trusted expressions of objective “hard facts”.
These risks rise quickly when AI systems are used to forecast (and govern) future events based on precise but inaccurate models, unmoored from observations. But what happens when AI outputs are the basic fabric of evaluation and decision making? Do we have the option of not believing them at all?
Avoiding an ‘iron cage’
One possible future lies in what the German sociologist Max Weber called the “iron cage of rationality[8]”. This is where communities become trapped in rational, precise and efficient systems that are simultaneously inhuman and inequitable.
Avoiding this future means proactively creating inclusive, intelligible and diverse AI partnerships. This is not about rejecting rationality, but about tempering its irrational outcomes.
Our evolving data and AI governance framework draws on the principles of being findable, accessible, interoperable and reusable (FAIR[9]). These principles are useful, but they are blind to the social histories of data gathering.
The failure of the 2018 Census[10] is a stark example of what happens when historical inequalities are ignored. We cannot redo the environmental data gathering of the past. But new AI systems need to be designed with an awareness of the effects of those data gaps. This might also mean going beyond awareness to actively enrich data and fill the holes.
Widening the worldview of AI
Data and AI need to serve human goals. Indigenous data sovereignty movements are claiming the right of Indigenous peoples to own and govern data about their communities, resources and lands. They have inspired a framework known as CARE[11], which stands for collective benefit, authority to control, responsibility and ethics.
These principles offer a model of empowering data relations, one that puts flourishing human relationships first. In Aotearoa New Zealand, Te Kāhui Raraunga Māori[12] was established in 2019 as an independent body to enable Māori to access, collect and use their own data. Its data governance model is an example of the CARE principles in action.
An even bigger step forward would be to expand the worldview of AI. Serving human goals means exposing the assumptions and priorities baked into different AIs. This in turn means opening up the development of AI beyond the “WEIRD” (western, educated, industrialised, rich, democratic) standpoint that currently dominates the field.
Training an AI on environmental data from Aotearoa New Zealand for Māori organisations is one thing. Creating an AI that embodies mātauranga Māori and the responsibilities for life embedded in the Māori worldview is a far more radical step.
We need this radical vision of AI, deliberately built from diverse worldviews, to avoid locking the cage and foreclosing the future.
References
- ^ Fast-track Approvals Bill (www.beehive.govt.nz)
- ^ Kuaha Matahiko (ourlandandwater.nz)
- ^ Prior experiences (www.youtube.com)
- ^ ad hoc, opportunistic and under-resourced (pce.parliament.nz)
- ^ mātauranga Māori (e-tangata.co.nz)
- ^ An agricultural robot working in a smart farm, spraying fertiliser on corn fields (www.gettyimages.com.au)
- ^ study investigating precision agriculture (www.sciencedirect.com)
- ^ iron cage of rationality (www.thoughtco.com)
- ^ FAIR (data.govt.nz)
- ^ failure of the 2018 Census (berl.co.nz)
- ^ CARE (www.gida-global.org)
- ^ Te Kāhui Raraunga Māori (www.kahuiraraunga.io)