After years of scandals and justifiable outcry surrounding the use of exploitative and degrading images in global health communications, it seemed the debate had been settled. Portraying glaring power imbalances between Global North and Global South through “poverty porn” is unethical, perpetuates negative stereotypes, and harms individuals and communities, so stop using it!
Where most took this as a clear message that the use of these kinds of images should be seen as a dark stain on the global health ecosystem’s past, others apparently took the opportunity to seek loopholes and other ways to use human suffering to garner clicks and raise funds – enter artificial intelligence (AI).
Researchers commenting in The Lancet Global Health recently exposed a worrying trend of global health organisations turning to AI to create poverty porn images to support campaigns. While the use of AI in itself could be defended as a way to spare budgets at a time of unprecedented cuts to funding, the fact that the campaigns reverted to replicating the same stereotypes and damaging narratives of the past is deeply worrying. That’s not to say that all global health organisations have resorted to, or even contemplated, this sort of action. Indeed, every one of the colleagues I’ve spoken to about it is as appalled as I was. But the fact that poverty porn, whether AI generated or not, is still part of the conversation at all is in many ways quite depressing.
Public consciousness of poverty porn and white saviour stereotyping reached a peak around 2017, when Comic Relief in particular came in for major criticism over campaigns featuring white celebrities visiting deprived and war-torn regions. While Health Action International has always eschewed these forms of imagery (or at least attempted to), it was around this time that we made explicit in our brand guidelines that the images we use must be positive and optimistic. They should be emotional but never resort to cheap stereotyping. And they should empower the subjects portrayed.
While we can be proud of being early adopters, like any organisation we’re also learning along the way. For example, it wasn’t until 2023 that we set in stone what we already firmly believed, through a dedicated Image Use Policy. Being an evidence-based organisation, it had to be a research paper that inspired this. Charni, E. et al’s 2023 paper in The Lancet Global Health gave us the extra research-based push, and their framework and standards inspired our own decision tree for avoiding biases, misrepresentation and stigmatisation through the images we use.
Now, just two years on, advances in AI mean we’ve had to reconsider what’s in our policy. This week we published our updated Image Use Policy, which now includes a specific section on AI. Central to it remains the principle that “representations of people across images should be equitable, accurate, and serve to counter stereotypes”.
Rightly or wrongly, sometimes it does take the sort of publicity that AI-generated image use has stirred up to spur action. And while I can’t stress enough that this is a route we would never have strayed down, it remains important that these principles are set in ink, to avoid small mistakes that can cause great harm. Perhaps it will also inspire others who haven’t quite caught up to think twice when delivering global health communication campaigns.
For now, we continue to work with the communities we serve to ensure that they are portrayed in the unbiased and positive light they deserve. We know others are doing the same, while some are still learning the lessons. Either way, we continue to make progress in squashing harmful narratives and stereotypes – change takes time, but we will get there in the end.