What's Next For AI: 5 Lessons Learned From The Festival Of The Future

Nina Brandtner, Juan Esteban Naupari, Melanie Walter

Experts from the fields of technology, science and art came together at the 1E9 Festival of the Future, the “think tank for the future”, at the Deutsches Museum in Munich. True to the motto “Prompts to Collective Intelligence”, AI was the topic of the hour. We have summarized the five most important lessons for you.

1) “AI can only be regulated after the fact” Thomas Sattelberger

To regulate or not to regulate? This question comes up on the stages of the Festival of the Future almost as often as the keyword AI itself. Most experts share the same doubt: how do you regulate something whose possibilities and risks cannot yet be properly assessed? Marco Alexander Breit from the Federal Ministry of Economics sums it up: “For the first time in history, we are trying to regulate something that neither politicians nor developers really understand, neither what it is nor how it works.”

Thomas Sattelberger, former Parliamentary State Secretary, admires the UK's approach: “They base the regulation of AI on existing laws and adapt them retrospectively as the technology develops. They have a learning curve.” It is precisely such differences in regulation that could set Germany and Europe back in the international AI race, the panelists fear. “AI is global, regulation is only local. In the next few years, there will be the AI Act in the EU, while the rest of the world does what it wants,” criticizes Breit.

2) “Artificial intelligence must be democratized” Björn Ommer

Björn Ommer, creator of the image AI Stable Diffusion, is concerned that Germany could become dependent on other countries and their resources if AI is over-regulated. “We are in the fortunate position that we can still talk about regulation at the moment, because we are not yet completely dependent on a single country or company. AI must therefore be democratized.” For him, this means that open-source models that everyone can access should be the basis for the further development of artificial intelligence. Only in this way can independence from others' resources be guaranteed. Large amounts of data, to which often only tech giants such as Google have access, would also be very valuable for science.

3) “Small language models could be the solution” Dr. Leif-Nissen Lundbaek

Large language models such as the one behind ChatGPT are trained on extremely large amounts of data and are used for a wide variety of tasks. Although they are currently at the center of the hype surrounding generative AI, they also bring problems with them: high costs, data protection concerns, hallucinations and energy consumption are among the challenges Leif-Nissen Lundbaek lists. “Using large language models for all use cases makes no sense at all,” he says. He sees great opportunities in small language models, which are tailored more closely to users' needs and therefore work far more efficiently. Björn Ommer also emphasizes: “It's not about creating ever larger language models, but about the capabilities of the individual models.”

4) “It's hard to understand how the machine thinks” Jonas Andrulis

Many people are already experimenting with artificial intelligence, but few really understand how a chatbot arrives at an answer. Jonas Andrulis from Aleph Alpha therefore advises asking large language models to lay out the process by which they arrive at their answers. “It's relatively easy to show how the AI works in principle,” says Björn Ommer, “but how the AI's decisions come about is the complicated part. Yet it is absolutely essential to know how the black box of AI works.” After all, users can only use the technology responsibly if they have a certain understanding of its possibilities and limitations. According to Ommer, it is important that users take responsibility themselves and cross-check content: “The next generation and the current generation must be taught how to use AI. Awareness of the dangers is needed.”

5) “Being afraid of the possibilities is counterproductive” Naureen Mahmood

A disruptive technology such as artificial intelligence comes with risks that should not be underestimated, as everyone at the Festival of the Future agrees. Nevertheless, it should not be met with panic, because as is so often the case, it is the human behind the machine who could turn the technology into a danger, says Marco Alexander Breit. “A nuclear bomb is not comparable to artificial intelligence, but nuclear technology in itself is. It can be used for both positive and negative purposes.” Naureen Mahmood from Meshcapade sees the technology's opportunities: “It can be trained to detect problems or errors.” And Björn Ommer adds: “Generative AI helps with the problem that we are drowning in information but can't draw any real knowledge from it.”

»We need a lot of data, hardware and, above all, talented people to develop this technology further and keep pace.«

Tim Wirtz, Fraunhofer IAIS

»Generative AI is a key enabling technology, similar to electricity or the first personal computer.«

Björn Ommer, LMU & developer of Stable Diffusion

»There is often a difference between the things we see on the web that are theoretically possible and the things we can actually implement in our processes. It's a journey that has only just begun.«

Jonas Andrulis, CEO and founder of Aleph Alpha
