Neuraforge AI: How Does a Tool Recognize AI Traces?

By Martin Haase

Photo: Neuraforge AI

How can media professionals recognize AI-generated or AI-altered material? Anika Gruner and Anatol Maier have the answer. With their start-up Neuraforge AI, they are developing software that can check whether artificial intelligence has been used in the creation of content. They have been part of Media Lab Bayern's start-up funding program since April 2024. In this interview, they reveal what makes their concept future-proof and how their software is intended to support media professionals.

Anika, Anatol, what challenges does the media industry face due to the flood of AI-generated material?

Anika: We see growing uncertainty among users about the authenticity of media content. They can no longer easily distinguish the content of established media from fake content. This is especially true in social media feeds, where the sheer volume of information is overwhelming. Verifying all of that content themselves is simply too time-consuming for users. I also see this as a threat to the democratic mission of the media.

AI-generated images can amplify emotions

Anatol: AI-generated images can, for example, amplify certain emotions that support the text. It's not about objectively reflecting the truth, but about telling a story. Even if AI-generated images are not intentionally designed to convey certain feelings, they can still produce them and give the text a different direction. For instance, a photographer in a war zone who takes a picture of a crying child can easily reduce the visual distance between the child and soldiers in the background. That can make a huge emotional difference to the reader. It doesn't require any technical knowledge of image editing; current smartphones with AI integration can do it.

Anika: In media houses, entire investigative teams are deployed to verify the authenticity of content that is provided to them. Technical cross-checking, however, is still missing. If, despite all the verification procedures, established media fall for a fake and publish it, the reputational damage is significant. Even after a correction, accusations of "fake news" aimed at discrediting established media often resonate more with users than the correction itself.


»It is hardly possible to recognize AI-generated content with the naked eye anymore.«

Anika Gruner

Photo: Neuraforge AI

How can you recognize deepfakes and altered image and video material?

Anatol: Visually, it's often difficult: the more often content is shared via social media, the more heavily the files are compressed and the lower their quality becomes. In very low-resolution images and videos, features that would reveal a fake at high quality are much harder to spot.

Additionally, we need to distinguish between two scenarios. First: is it a fake in which existing content has been manipulated? Manipulations often involve digitally edited movements in the facial area, such as mouth movements, which are visually noticeable. Or second: is the material completely AI-generated? Here, the software still has trouble creating people. Symmetries in different parts of the image in particular lead to inconsistencies; for example, it is almost impossible to generate two identically sized earrings. Another classic clue is lighting: a single light source can cast unnatural shadows on multiple people. A further indicator is frayed pupils, which in nature form an almost perfect circle. Developers of AI systems are aware of these problems and train their software accordingly, which is why these so-called semantic or physical traces are steadily diminishing over time.

Anika: We are in a transition phase. After manual post-processing – as already happens with virtual influencers – it's almost impossible to recognize AI-generated content with the naked eye. Therefore, we believe that now is the right time for a technological solution to help with verification.


»Artificial intelligence leaves a kind of fingerprint in the background noise of an image.«

Anatol Maier

Photo: Neuraforge AI

How does the software you are developing at Neuraforge AI approach the analysis?

Anatol: There are basically two approaches for AI detectors. The first is the examination of semantic traces, i.e. visually recognizable changes, for which algorithms can be developed that check for inconsistencies. The second, and our focus, is statistical features. Artificial intelligence leaves a kind of fingerprint in the background noise of an image. This trace is hard to remove from the AI models without completely degrading the desired image. Manufacturers can, if they wish, deliberately add an element to the fingerprint their models already leave behind, turning it into a kind of invisible watermark.

Our software detects this fingerprint and explains how and why it arose. We aim to provide as much information as possible without passing judgment; what someone does with the analysis is up to them. AI-generated content is not inherently bad. AI-generated mood images, for example, can be useful to journalists.
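To make the statistical approach more tangible, here is a minimal sketch of how a detector of this kind might isolate an image's noise residual before handing it to a classifier. The median-filter denoiser, the example file name, and the `fingerprint_classifier` referenced in the final comment are illustrative assumptions, not Neuraforge AI's actual pipeline.

```python
# Illustrative sketch of statistical trace detection: separate an image's
# semantic content from its high-frequency noise residual, where generative
# models tend to leave their "fingerprint". Not Neuraforge AI's actual code.
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

def noise_residual(path: str) -> np.ndarray:
    """Return the difference between an image and a denoised copy of it.

    Denoising removes most of the semantic content, so the residual is
    dominated by sensor noise and any generator-specific traces.
    """
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    denoised = median_filter(img, size=3)  # simple stand-in for a real denoiser
    return img - denoised

residual = noise_residual("example.jpg")  # hypothetical input image
# A real detector would feed this residual to a classifier trained on
# residuals of known AI-generated and camera-original images, e.g.:
# verdict = fingerprint_classifier.predict(residual)  # hypothetical model
```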

What is the status of your tool's development?

Anatol: We currently have a research prototype, which we are now developing into software for users. The biggest challenge for AI detectors that work with semantic traces is covering areas that were never seen during training. AI never says, "I don't know" – which is why hallucinations occur. With our approach, we achieve a detection rate of over 98 percent and can maintain it even in unknown areas. We add metrics that indicate when the system cannot make a reliable statement, which helps prevent our software from hallucinating. This is important, for example, when completely new AI models come onto the market; then we need to test and optimize our algorithms. There won't be a "one size fits all" solution; we will always need to adapt as the technology advances.
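One common way to give a detector the ability to say "I don't know", in the spirit of the reliability metrics Anatol describes, is to let it abstain whenever its confidence falls below a threshold. The sketch below is a minimal illustration; the threshold and the probability values are assumptions, not Neuraforge AI's actual metrics.

```python
# Illustrative sketch of confidence-based abstention: the detector commits
# to a verdict only when its confidence clears a threshold; otherwise it
# reports that no reliable statement is possible. Threshold and inputs
# below are assumptions, not Neuraforge AI's actual values.
from typing import Optional

def classify_with_abstention(p_ai: float, threshold: float = 0.95) -> Optional[str]:
    """p_ai is the model's estimated probability that an image is AI-generated."""
    if p_ai >= threshold:
        return "AI-generated"
    if p_ai <= 1.0 - threshold:
        return "camera-original"
    return None  # abstain: confidence too low for a reliable statement

print(classify_with_abstention(0.99))  # "AI-generated"
print(classify_with_abstention(0.60))  # None -> system declines to judge
```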

Diverse applications: integration into editorial software and a web solution

How can your system be integrated into the work of media professionals?

Anika: There are two options. On the one hand, we are in talks with manufacturers of all kinds of archive and editorial software about building an automated tagging system into their products. We need to understand how the data is processed so that we can provide a solution that scans content as soon as it arrives in the system. On the other hand, we are working on a web solution that lets journalists analyze individual images: you log in online, upload an image, and receive a verification report.
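The web workflow Anika describes maps naturally onto a simple upload endpoint. The sketch below is a hypothetical illustration of that flow; the route, the report fields, and the analyze() stub are assumptions, not Neuraforge AI's actual API.

```python
# Hypothetical illustration of the described web workflow: upload an image,
# receive a verification report. Route, fields, and the analyze() stub are
# assumptions, not Neuraforge AI's actual API.
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

def analyze(data: bytes) -> dict:
    # Placeholder for the actual fingerprint analysis.
    return {"ai_generated": None, "confidence": None, "notes": "analysis stub"}

@app.post("/verify")
async def verify(image: UploadFile = File(...)) -> dict:
    data = await image.read()  # raw bytes of the uploaded image
    return {"filename": image.filename, "report": analyze(data)}
```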

How does the Media Lab Bayern help you with this?

Anika: We have the expertise in software development ourselves, but we are not very familiar with the start-up business. Here, the support from the Media Lab is extremely valuable. We benefit from the environment and contacts, and gain inspiration. Successfully pitching our idea and receiving funding gives us additional motivation to take the risk and drive this new company forward full-time.
