"Poverty Pornography": Fake AI-generated images are being used by aid agencies

By Aisha Down / The Guardian (original title: "AI-generated 'poverty porn' fake images being used by aid agencies")
Translation: Telegrafi.com
Images generated by artificial intelligence (AI) depicting extreme poverty, children, and survivors of sexual violence are flooding stock photo sites and are increasingly being used by leading public health NGOs, say global health professionals concerned about a new era of so-called "poverty pornography".
"They're being used everywhere," said Noah Arnold, an employee at Fairpicture, a Switzerland-based organization that promotes the ethical use of images in global development. “Some are using AI-generated images all the time, and others - we know for sure - are at least experimenting with them.”
Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp who studies the production of images for global health, said: “These images reproduce the visual grammar of poverty – children with empty plates, cracked earth, stereotypical appearances.”
Alenichev has collected more than 100 AI-generated images of extreme poverty that have been used by individuals or NGOs as part of social media campaigns against hunger or sexual violence. The images he shared with The Guardian show exaggerated scenes that reinforce stereotypes: children huddled in muddy water; an African girl in a wedding dress, a tear streaming down her cheek. In an article published Thursday in The Lancet Global Health, he argued that these images amount to "poverty porn 2.0".
Although it is difficult to determine the prevalence of these AI-generated images, Alenichev and others say their use is growing, driven by concerns about consent and costs. Arnold added that US funding cuts to NGO budgets have further exacerbated the situation.
"It's quite clear that various organizations are starting to consider using synthetic images instead of real photos, because they are free and you don't have to deal with consent, permission or other complicated issues," Alenichev said.
AI-generated images depicting extreme poverty now appear in large numbers on popular stock photo sites, such as Adobe Stock and Freepik, in response to searches for the word "poverty". Many of them carry descriptions such as: "Children in a refugee camp"; "Asian children swimming in a river filled with garbage"; and "A white Caucasian volunteer provides medical consultation to young black children in an African village". Adobe sells licenses for the last two photos on this list for about £60 [roughly €70].
"They are so loaded with racism. They should never be allowed to be published, because they represent the worst stereotypes about Africa, India, or any other country you can name," Alenichev said.
Joaquín Abela, executive director of Freepik, said that the responsibility for using such extreme imagery falls on media consumers, not platforms like his. The AI-generated photos, he said, are produced by the platform's global community of users, who can earn a licensing fee when Freepik's customers decide to buy their images.
He added that Freepik had tried to curb the bias it had encountered in other parts of its photo library by "injecting diversity" and trying to ensure a gender balance in the images of lawyers and executives published on the site.
But, he said, his platform had its limits. "It's like trying to dry an ocean. We try, but in reality, if customers around the world want images a certain way, there is absolutely nothing anyone can do."

Major charities have in the past used AI-generated imagery as part of their global health communication strategies. In 2023, the Dutch branch of the British charity Plan International released a video campaign against child marriage that featured AI-generated images, including a girl with a black eye, an older man, and a pregnant teenager.
Last year, the United Nations published on YouTube a video of AI-created "reenactments" of sexual violence in conflict, which included AI-generated testimony from a Burundian woman describing how she was raped by three men and left for dead in 1993, during the country's civil war. The video was removed after The Guardian contacted the UN for comment.
A UN peacekeeping spokesperson said: “The video in question, which was produced more than a year ago using a rapidly developing tool, has been removed from circulation as we believe it represents an inappropriate use of artificial intelligence and may pose risks to information integrity by mixing real footage with artificially generated near-real content. The United Nations remains committed to supporting victims of sexual violence in conflict, including through innovation and creative advocacy.”
Arnold said the rise in the use of these AI images comes after years of debate in the sector over the ethical use of photographs and the dignified depiction of poverty and violence. "The assumption is that it's easier to use ready-made AI images, which require no consent because the people in them aren't real."
Kate Kardol, a communications consultant at an NGO, said these images scare her and remind her of previous debates about the use of "poverty pornography" in the humanitarian sector.
"It saddens me that the fight for more ethical representation of people experiencing poverty has now extended to the unrealistic," she said.
Generative artificial intelligence tools have long been observed to reproduce — and in some cases exaggerate — widespread social biases. Alenichev said the proliferation of biased images in global health communications could exacerbate this problem, because these images can be circulated across the internet and used to train the next generation of AI models — a process that has been shown to reinforce biases.
A spokesperson for Plan International said that, as of this year, the organization "has adopted guidance advising against using artificial intelligence to depict individual children," and that the 2023 campaign had used AI-generated images to protect "the privacy and dignity of real girls."
Adobe declined to comment. /Telegrafi/