The UK’s global AI Safety Summit took place at Bletchley Park on Wednesday 1 November, and we were there demonstrating how AI is transforming conservation.
Joined by some of our partners and a few fluffy friends, we showed 300 of the world’s tech influencers and policymakers - attendees included Elon Musk and Secretary of State Michelle Donelan - how AI is making a difference to our work protecting species.
It has the potential to do so much more with the right support.
We’re working at the cutting edge of technology to protect species and restore ecosystems - here’s how…
Wildlife Insights – Using Computer Vision to power conservation
Camera traps – rugged, battery-powered digital cameras which are triggered by animal movement – are vital for monitoring threatened wildlife, especially in remote environments. But manually analysing the millions of images generated slows down our conservation efforts.
AI is now transforming our ability to process this vital stream of data. Computer Vision – AI which replicates the way humans make sense of what they see – can automate the identification of species, sometimes even recognising individual animals, while rapidly filtering out images accidentally triggered with no animal in the frame.
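To make that concrete, here’s a minimal sketch of how a pretrained image classifier can triage camera trap photos. Wildlife Insights runs Google’s own models behind the scenes; the general-purpose torchvision ResNet, the file path and the confidence threshold below are illustrative stand-ins only:

```python
# A minimal sketch of camera-trap image triage with a pretrained
# classifier. The stand-in model here is a general-purpose torchvision
# ResNet, not the models Wildlife Insights actually uses.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for the stand-in model.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

def triage(path: str, threshold: float = 0.5):
    """Classify one image; flag low-confidence shots for human review."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)
    conf, idx = probs.max(dim=1)
    needs_review = conf.item() < threshold  # hypothetical cut-off
    return idx.item(), conf.item(), needs_review
```

In practice, systems like this often run a detection step first to discard empty frames, so human experts only review the small fraction of images the model is unsure about.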
Wildlife Insights, a major partnership between us, Google and seven international NGOs - including Conservation International, and WWF - is making cutting-edge computer vision tools available to conservationists worldwide via a free, online platform.
It currently hosts more than 95 million camera images, representing more than 3,000 species in 95 countries.
By pooling images from camera traps around the world, Wildlife Insights is also creating a global community where anyone can use this vital data to monitor wildlife and influence policy at all levels.
SMART PAWS – Predicting threats to wildlife in the world’s protected areas
Protected areas are the cornerstone of global efforts to conserve biodiversity, with around 17% of the world’s land and inland waters currently under protection. Yet rangers working on the frontline in parks frequently lack the resources and information needed to respond effectively to wildlife threats, such as poaching.
SMART, developed by ZSL and a partnership of nine international NGOs, is a ground-breaking solution to these challenges: an integrated suite of technologies designed to empower park staff to better monitor and understand threats to wildlife and manage their limited resources more efficiently. Used at more than 1,200 sites in more than 100 countries - with 24 countries adopting it nationwide - SMART is the leading tool for protected area management globally.
Nelson Enyagu, a ranger working on the frontline in the Bwindi Impenetrable National Park in Uganda, joined ZSL at the Summit to talk about how SMART tech makes his team’s poaching patrols more effective, while Harvard University computer scientist Lily Xu demonstrated how the power of AI is amplifying these efforts:
Researchers at Harvard have created PAWS (Protection Assistant for Wildlife Security), which uses SMART data, machine learning, and game theory to predict when and where poaching is likely to occur and plan optimal patrol routes accordingly - so that rangers can better target their enforcement activities.
Initial testing in Cambodia and Uganda enabled rangers using SMART and PAWS to find and remove almost five times as many poaching snares as usual.
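As a rough illustration of the predict-then-patrol idea, the sketch below trains a simple classifier on hypothetical per-grid-cell features and ranks cells by snare risk. The real PAWS pipeline couples its models with game theory and far richer SMART data; everything here - the features, the synthetic data and the model choice - is an assumption for demonstration:

```python
# A toy sketch of poaching-risk prediction in the spirit of PAWS.
# Synthetic data stands in for real SMART patrol records.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical features per 1x1 km grid cell: distance to roads,
# distance to park boundary, elevation, past patrol effort.
X = rng.random((500, 4))
# Historical labels: 1 if a snare was found in that cell (synthetic).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 500) > 1.0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Rank cells by predicted poaching risk and patrol the top ones.
risk = model.predict_proba(X)[:, 1]
patrol_cells = np.argsort(risk)[::-1][:20]
print("Highest-risk cells to patrol:", patrol_cells)
```

The game-theoretic part of PAWS, not shown here, matters because poachers adapt: patrol routes must balance visiting predicted hotspots against remaining unpredictable.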
The sound of saving wildlife – monitoring the conservation soundscape using AI
Acoustic monitoring performs a similar role to camera trapping, but on a much wider scale: while a camera trap only captures what passes in front of its lens at a fixed point, acoustic sensors can detect biodiversity and wildlife threats across far more ground.
The recent development of low-cost acoustic loggers has led to widespread uptake of the technology, particularly in remote locations where they can be left to collect data unattended over vast areas.
This results in huge quantities of data, with thousands of hours of recordings often produced from single surveys. Until recently, these sounds were listened to and inspected manually - a labour-intensive process that limited the technology’s use for conservation.
AI now provides an automated means of quickly processing acoustic data and identifying sounds of interest. Using the visual representation of a sound – a spectrogram – we can use Computer Vision to recognise target sounds, such as the call of a particular species, or even human activity, like armed poaching and illegal logging.
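As a simple sketch of that first step, the code below turns a field recording into log-mel spectrogram windows ready for a Computer Vision classifier, using the librosa audio library. The file path, window length and spectrogram parameters are illustrative assumptions:

```python
# A minimal sketch of preparing acoustic logger data for a
# Computer Vision classifier. Path and parameters are illustrative.
import numpy as np
import librosa

# Load a recording from an acoustic logger (hypothetical file).
audio, sr = librosa.load("logger_recording.wav", sr=22050)

# Convert to a log-mel spectrogram: an image-like representation of
# sound that a CNN can treat exactly like a photograph.
mel = librosa.feature.melspectrogram(y=audio, sr=sr,
                                     n_fft=2048, hop_length=512,
                                     n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)

# Slice into fixed-width windows (~3 s each) so each window becomes
# one "image" for a classifier trained on target sounds, such as a
# species' call or a gunshot.
frames_per_window = int(3 * sr / 512)
windows = [log_mel[:, i:i + frames_per_window]
           for i in range(0, log_mel.shape[1] - frames_per_window,
                          frames_per_window)]
print(f"{len(windows)} spectrogram windows ready for classification")
```

Once audio is in this form, the same kinds of models that identify species in camera trap images can scan thousands of hours of recordings in a fraction of the time a human would need.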
In collaboration with Google Cloud, we are developing cloud-based AI pipelines to rapidly process large acoustic datasets from our field sites, and applying the technology to inform conservation - from partnering with Network Rail to monitor the wildlife living alongside Britain’s railways, to tracking armed poaching in the tropical forests of Cameroon.