
Submitted by Diane L. Lister on Mon, 19/05/2025 - 10:44
With the world facing a biodiversity crisis, conservationists need accurate evidence to help them decide how best to protect species and habitats. But the information they require is scattered across thousands of scientific studies. Can AI tools like ChatGPT help them find it more quickly and reliably than humans?
That's the question a multi-disciplinary research team has been looking into. The work was led by a first-year undergraduate here, Radhika Iyer, while she was taking part in the University's Undergraduate Research Opportunities Programme last summer. Today (12 May 2025) her paper has been published in the Public Library of Science journal PLOS One.
The team united researchers from this Department and the Cambridge University Department of Zoology in exploring the potential for AI to help conservationists find the right information on actions to support biodiversity. They tested Large Language Models (LLMs) such as ChatGPT on how well they answered questions about the effects of conservation actions.
The models used different strategies to search for answers in Cambridge University's Conservation Evidence database, a compilation of evidence on what does – and doesn't – work well for conservation.
"Even though we have these summaries, for conservationists, finding the right information quickly can be a challenge," says fellow team member Dr Alec Christie, a Visiting Researcher in the Department of Zoology, and a Research Fellow at Imperial College London. "That means that this knowledge is still not being used to its full potential to conserve biodiversity."
Radhika got involved in the project as an intern on the University's UROP programme. This ten-week summer programme allows students to gain a working insight into the research being undertaken by academics and provides a unique opportunity to develop some of the technical and transferable skills needed in a top research environment. In her case, Radhika says that "what drew me to it was the prospect of seeing ways in which AI could be applied to conservation research."
"The results show that carefully designed AI systems have the potential to act as expert-level assistants that can quickly point conservationists to relevant evidence to address their specific problem."
Dr Alec Christie
Avoiding mistakes, 'hallucinations' and bias
The team knew that while AI tools like ChatGPT, Claude and Gemini are very good at processing information to answer questions and summarise text, they can also make mistakes, 'hallucinate' (invent facts) or reflect biases from the data they were trained on. The researchers were therefore exploring ways to harness the power of AI while reducing the risk of AI-generated misinformation that could lead to harmful conservation decisions being made.
So they developed an experiment in which one LLM, Claude 3.5 Sonnet, was used to automatically generate multiple-choice questions based on information in the Conservation Evidence database. The questions were then set as a test for 10 different LLMs, including Llama, Gemma2, Mixtral, Gemini, Claude and ChatGPT models. The tests ran under different conditions, including one where models were allowed to use only their own pre-existing knowledge, and others where they had access to the original source document.
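In outline, that exam works roughly as in the sketch below. It is purely illustrative: the question format, the prompt wording and the `query_llm` helper are assumptions standing in for whichever client library each model actually requires, not the team's code.

```python
def query_llm(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` and return its reply (hypothetical helper)."""
    raise NotImplementedError

def build_prompt(question: dict, source_text: str | None) -> str:
    """Build a multiple-choice prompt, optionally attaching the source document."""
    options = "\n".join(f"{label}. {text}" for label, text in question["options"].items())
    context = f"Use only this evidence:\n{source_text}\n\n" if source_text else ""
    return (f"{context}{question['question']}\n{options}\n"
            "Answer with the letter of the single best option.")

def run_exam(models, questions, source_lookup=None):
    """Score each model under one condition: with source documents, or knowledge only."""
    scores = {}
    for model in models:
        correct = 0
        for q in questions:
            source = source_lookup(q) if source_lookup else None
            reply = query_llm(model, build_prompt(q, source)).strip().upper()
            correct += reply.startswith(q["answer"])
        scores[model] = correct / len(questions)
    return scores
```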
Comparing AI tools with human experts
Three retrieval methods were trialled: sparse retrieval (where the LLM searched for keywords), dense retrieval (where the search was based on the meaning or semantics of the question) and hybrid retrieval (a combination of both). Six human experts from the Conservation Evidence team were also asked to take the same exam to find out if the best AI tool and setup could match their performance. As the paper reports, the results were encouraging.
"They showed that AI can produce human expert-level performance," says Alec, "but they also showed that setup is everything. When using the best setup (the hybrid strategy), several of the LLMs answered the questions with the same, or slightly better, accuracy as a human expert. All the LLMs that were tested did vastly better than just random guessing." They also answered faster than humans.
He adds: "The search strategy was also critical. The hybrid approach significantly outperformed both keyword-only and meaning-only searches in both finding the correct document and helping the AI answer correctly. Its ability to find the right document was also on par with the human experts."
Alec says: "The results show that carefully designed AI systems have the potential to act as expert-level assistants for accessing specific evidence from databases like Conservation Evidence. Imagine an intelligent search tool that quickly points conservationists to the most relevant evidence to address their specific problem.
Information health warning
"However," he warns, "our findings also come with a strong dose of caution. Simply plugging a question into a general chatbot is not the way to get reliable evidence-based answers. The setup – particularly how the system retrieves information – is crucial to avoid poor performance and misinformation."
There are now plans to follow up these findings and see how AI performs when tasked with more complex, open-ended questions that require nuanced thinking and reasoning.
As the researchers say, this approach may also be expanded to more databases in other fields and disciplines.
Meanwhile, Radhika says she found her time working on the project very enjoyable. "Implementing the project while working with an interdisciplinary team was a great experience," she says.
- The research team also included Sam Reynolds and William Sutherland from the Department of Zoology, and Sadiq Jaffer and Anil Madhavapeddy from this Department.
- Radhika Iyer conducted the research as part of a summer undergraduate project at Cambridge, supported by the AI@Cam project and the UROP scheme, as well as an unrestricted donation from Tarides.
- Sadiq Jaffer was funded by an unrestricted donation from John Bernstein.
- Alec Christie was funded by an Imperial College Research Fellowship.
This article was originally published by Rachel Gardner on Thursday 15th May 2025:
https://www.cst.cam.ac.uk/news/can-ai-offer-better-conservation-advice-human-experts
A related article about this paper has been published on the Department of Zoology website: