The recent case of the "Ghost of Pristina", a fabricated photograph widely shared on social media and published by some media outlets without fact-checking, has brought to light the danger posed by images created by artificial intelligence (AI) and their impact on the public's perception of reality.

This case showed how easily false visual content can be taken as truth and produce massive disinformation.

Regarding this case, Festim Rizanaj, a researcher at the HIBRID platform, and Kastriot Fetahaj, a cybersecurity engineer and software developer for the American company The Daily Wire, spoke to Telegrafi, explaining why AI-generated images are increasingly convincing and why the public often trusts them.

According to Rizanaj, the reliability of AI-generated photos depends largely on the subject they present.

“It depends on the content being published. If the content is related to topics for which there is a lot of visual information online, then the chances of these photos looking original, real and more believable are much higher. However, there are also cases where low-quality content, known as ‘AI slop’, is generated,” he says.


Fetahaj also emphasizes that technological development has made distinguishing between real and fake increasingly difficult.

"Photos and videos generated by AI can seem very believable, especially to an untrained eye. Technology has advanced significantly and in many cases makes it difficult to distinguish between generated and real content, especially when images are viewed quickly and without verification," he told the Telegraph.

One of the reasons why cases like "The Ghost of Pristina" manage to convince a portion of the public is related to the way people process information.

Rizanaj explains that "the form of communication through images is easier because it does not require much brain engagement to understand, while written text requires concentration and analysis to understand the facts it contains."

Along the same lines, Fetahaj notes the influence of social networks on this behavior.

"Today, it has become a trend to get information more from images and videos than from written articles. Social networks have greatly influenced this behavior, especially fast-paced content platforms like TikTok, where people consume information in a few seconds and rarely stop to read full sources," Fetahaj emphasized to Telegrafi.

Regarding the impact on the perception of real events, HIBRID researcher Festim Rizanaj notes that public trust is often linked to personal beliefs.

"It depends on the case. For example, in political terms, if the content fits the beliefs or attitudes of the person viewing it, then the likelihood of trusting it is greater. Likewise, in cases of certain crises, when images related to the event are published, their credibility can increase," he says.

Meanwhile, cybersecurity engineer Kastriot Fetahaj warns that the consequences could be serious.

"These images can cause confusion and distort public perception, because many people do not bother to read the context, source or explanations. Often, what they see is taken as 'first information' and they tend to believe it, even when it is false or manipulated," Fetahaj added.

Both experts agree that regulatory mechanisms are needed, but the approaches differ. Rizanaj emphasizes the necessity of regulation and transparency.

"Yes, there should be clear regulation for AI-generated content, especially on social media platforms, requiring that it be identified and labeled as AI-generated content. In addition, there is a need for public education to recognize and understand the nature of this content," Rizanaj stated.


Meanwhile, Fetahaj considers legal restrictions difficult to implement in practice.

“Implementing legal restrictions is difficult, because even identifying AI-generated images is not always easy and often requires detailed analysis or specialized tools,” he says.

Fetahaj added that a more effective approach would be to prohibit and penalize the dissemination of misinformation when it is presented as truth, as well as to improve reporting mechanisms and the rapid removal of such content from social platforms once it is identified.

Ultimately, the issue of responsibility remains key. Rizanaj emphasizes that responsibility should lie with the page or social media account that publishes the image, underlining the difficulty of identifying the real authors.

"For this reason, legal regulations and oversight mechanisms are necessary to ensure that misleading content is punishable and to protect the public from the harm it may cause," he says.

Meanwhile, Fetahaj adds that the main responsibility should be borne by the person who distributes the image by presenting it as true, especially when this causes harm, emphasizing the need for greater cooperation between security institutions and social platforms.

"Security institutions in Kosovo should cooperate more closely with social media platforms to stop the spread, to request data when there are cases of harm, and to enable the identification of responsible persons. In practice, this is not always working properly, and in many cases institutions have been limited or ineffective in taking concrete actions," said the cyber engineer.

The case of the “Ghost of Pristina” remains a concrete example of how the lack of verification and the power of fake images can lead to disinformation, fueling debate about the responsibility of media, platforms, and the public itself in the age of artificial intelligence. /Telegrafi/