Prof. Dr Nele Hansen is Professor of Media Management at IU International University of Applied Sciences. In this interview, she discusses important study findings, addresses current challenges and provides insights into the background of topics including trust in the media, opinion shaping and disinformation.

Just over half of respondents say they regularly check facts. That surprised me. Other studies show much lower figures. It could be that many people assess themselves more positively than they actually act.
It is also interesting to note that over a third are familiar with the term “deepfakes”. For such a technical topic, that’s not bad at all and shows that plenty of people are engaged with the subject. Younger people in particular seem to be well informed in this area, which is not surprising given that they have grown up with these technologies.
Most people agree that regulations on artificial intelligence are necessary. It is particularly striking that responsibility is clearly placed with the platforms. Over 80% are even in favour of banning deepfakes – that is a clear message.
False reports have existed for as long as media have. However, with fake news and deepfakes, disinformation is reaching new heights in terms of speed, reach and potential to deceive. Distinguishing between unintentional misinformation and deliberate disinformation is crucial. Today, media literacy means not only checking sources, but also critically evaluating content and identifying digital manipulation techniques. Media literacy is more important than ever, as it is a prerequisite for informed decision-making and democratic participation.
Almost every day, incidents come to light in which the reputation of public figures, in particular, is damaged. For example, before this year’s federal elections there was a surge in disinformation campaigns deliberately orchestrated by authoritarian states to strengthen extreme political views in Germany and to attack democratic parties. Major news agencies such as Reuters reported on this. The sharing of AI-generated images of events that never happened is also becoming more common. As these images are rendered ever more realistically, it becomes increasingly difficult to distinguish genuine images from fakes.
Disinformation attacks the very foundations of our democracy. It often has only one goal: to polarise and divide society. When people are systematically confronted with false or manipulative information, it undermines their trust in the media, institutions and democratic processes. When we no longer know what is true, we cannot make informed decisions – neither politically nor socially. Disinformation also diverts attention away from real problems or important issues.
The study results confirm many of the previous findings on dealing with disinformation: awareness of this issue is generally high. In particular, there is widespread concern among the general public that fake news and deepfakes could pose a potential threat to democracy.
And the results show that there is still room for improvement in terms of knowledge and how it is handled – which is hardly surprising given the technical complexity and rapid developments.
The sheer volume of information we consume every day overwhelms many people. This is exactly where the challenge lies: information often has an immediate and emotional effect on us. But not everything you see is reliable. In a digital environment where anyone can publish content, critical thinking, checking sources and a basic understanding of how platforms work and how content is distributed are more important than ever.
In the long term, we need a kind of digital maturity so that we no longer consume information passively, but more actively and consciously – and question it.
Artificial intelligence is increasingly replacing traditional search logic with its own AI-based mechanisms, which filter information and evaluate it independently. This can result in “hallucinations”, i.e. plausible-sounding but false content being conveyed. Combined with the enormous speed at which information spreads, this results in uncontrolled dynamics in which false information is almost impossible to verify or contain manually.

Both are true. Young people have grown up with digital technologies and feel completely at home on social networks and platforms such as TikTok and YouTube. This seems to help them develop an early sense of how content works online – including how manipulative or misleading it can be.
In addition, media usage differs significantly between generations. As our study shows, older people tend to use traditional media such as television or print media more often, while younger people mainly obtain their information online. As a result, young people are more exposed to disinformation in digital spaces and may react to it in a more sensitive or vigilant manner.
The flood of information we are confronted with every day quickly leads us to switch off mentally and merely skim content. This is considerably different from previous media consumption, where content, for example in newspapers and on television, was absorbed more slowly and in a more targeted manner. Another crucial point is that fake news often looks very professional. On social media platforms, posts often appear reputable at first glance thanks to well-designed graphics, logos and an overall professional appearance.
Emotions fuel disinformation. When something touches us emotionally, it becomes deeply ingrained – regardless of whether it’s true or not. Therefore, social media is not necessarily driven by facts, but primarily by content that triggers outrage, fear or anger and thereby garners maximum attention.
Social media rewards reach rather than quality – and this is primarily determined by engagement. Posts that are frequently liked, shared or commented on are made more visible by the algorithm.
Content that triggers strong emotions spreads faster and further than factually based information. And that means a huge increase in the amount of false information out there.
The European External Action Service (EEAS) reports to the EU High Representative for Foreign Affairs and Security Policy, currently Kaja Kallas. On its website EUvsDisinfo.eu, the EEAS lists current cases of disinformation and refutes false claims – particularly in connection with current conflicts involving authoritarian states that deliberately spread false information. The federal government, state governments and NGOs in Germany do a lot of educational work through websites, projects and similar initiatives. That’s all well and good. In the long term, however, only the targeted promotion of media literacy in schools, educational establishments and social institutions will offer effective protection against disinformation. We must not forget that it is not only the younger generation that is affected, but all generations.
With every major technological development, we initially experience a familiar pattern: fear, uncertainty and often an overwhelming feeling. This was also the case in the past, when the first trains started running and people believed that speeds above 30 km/h were deadly. We laugh about it today, but it shows that getting to grips with new technologies takes time, experience and rules.
We are also at the beginning of a phase of social learning when it comes to AI-generated content, such as deepfakes. Over the next five years, the way this is handled is likely to become more professional. We will develop better detection mechanisms and establish legal standards. The only difference from previous developments is the speed. When it comes to AI, everything seems to be happening faster than ever. This makes it difficult to make accurate predictions.
Firstly, these are, of course, the conclusions drawn from the study results. The majority of people living in Germany seem to be well informed about the fundamental issues. To understand this, you need to know that in the past, people were primarily passive consumers of television or print media. Today, they produce their own content, network and organise themselves, and thereby participate in political discourse. If people in Germany remain informed and reflective, this will create a great opportunity for increased participation in the democratic process. This gives me hope.
That said, the results also show that people are concerned about developments in AI. This is understandable given the rapid developments in this area. And this is where the great danger lies. Politicians are making great efforts, as demonstrated, for example, by the EU regulation on artificial intelligence (AI Act for short). It is the first piece of legislation in the world to comprehensively regulate AI. However, politics always seems to be one step behind – not because of a lack of political will, but because technology is advancing faster than it can be regulated.