The AI Act is the European Union's first comprehensive law on the regulation of Artificial Intelligence. It is intended to ensure that AI is used responsibly: for the benefit of society, without jeopardising human rights, safety or democratic values.
The law distinguishes between different risk levels for AI systems. Applications such as "social scoring" or comprehensive surveillance are prohibited outright. Strict rules apply to high-risk systems, for example in the justice or healthcare sectors. Limited-risk applications such as chatbots are subject to transparency obligations, while minimal-risk systems remain largely unregulated.
The AI Act creates clear rules to strengthen trust in new technologies. In this way, Artificial Intelligence should be designed and used in the interests of people: safely, transparently and responsibly.
The AI Act is intended to create trust in Artificial Intelligence, but it is controversial. Business representatives warn of overregulation, while human rights organisations argue that protection against risks such as surveillance does not go far enough. The costs of testing and certification could become a burden for small and medium-sized companies in particular, while large tech companies could clear these hurdles more easily and even benefit from them.
The EU AI Act can be explored interactively in this exhibition: the chatbot on display was developed in 2024 by the German media project unidigital.news. It is based directly on the official legal text and allows visitors to ask questions about the new AI Act and receive answers in plain language.