Journalism is facing significant changes and challenges due to artificial intelligence, particularly with the rapid rise of disinformation. AI has increased both the speed and sophistication of fake content, making it increasingly difficult for individuals and media organisations to verify information.
Following the IQ Media Hub conference in Athens, where experts discussed how AI can be both a risk and a tool in the fight against disinformation, Izlen Tayfur interviewed FT Strategies Manager Aliya Itzkowitz to gather more insights on the impact of AI on journalism and potential solutions. The interview was originally published on Medium.
The topic is AI and disinformation. What are your thoughts on this topic, and can we evaluate the fight against disinformation as pre-AI and post-AI? What has changed, and what still needs to change?
Great question. Disinformation is not a new problem; it has existed for as long as communication itself. But what’s different now with generative AI is the scale and speed at which disinformation can be created and distributed. The last two years have introduced a clear ‘before and after’ when it comes to generative models. As LLMs continue to improve, it becomes increasingly difficult to distinguish between synthetic and real content, especially in images and video, which have become significantly more convincing in recent months. That said, AI is also part of the solution. Some publishers are now using synthetic media to train detection tools. For example, I worked with a publisher in Spain, El País, that built a tool using AI to identify synthetic audio. I prefer the term ‘synthetic’ to ‘fake’ because it’s more precise.
How can media professionals and fact-checkers adapt to this new landscape where AI is used to create convincing false narratives at scale?
My colleagues and I have talked about this a lot, and one of them is actually building a framework around it. The first step is avoiding misinformation altogether, though that’s becoming harder. Mistakes will happen. What’s key is having a rigorous fact-checking process to reduce the chances of using false information, which can damage brand trust. And when mistakes do happen, it’s crucial to be transparent — audiences, especially younger ones, value honesty. BBC Verify is a good example of opening up the fact-checking process. Schibsted in the Nordics has created ‘ethics boxes’ that show editorial decision-making more transparently. Another aspect is official certification standards like C2PA, though adoption is a challenge. And finally, we should be careful that in correcting disinformation, we don’t unintentionally amplify it — ‘prebunking’ tactics can be helpful here.
Do you think current regulations and platform policies are sufficient to address AI-driven disinformation? If not, what is missing?
That’s a tricky one because regulations are country-specific, but the internet is global. The EU, for example, is more conservative than the US or parts of Asia, and that makes it harder to regulate AI effectively on a global scale. Also, regulations are often delayed. By the time something is regulated, the problem might have evolved or worsened. So while regulation is part of the solution, it can’t be the only one. We need action from publishers and organisations on the ground. I also believe in education, especially media literacy in schools, because younger generations are AI natives. They need to be taught how to differentiate quality information from misinformation.
At this point, what do you think is the right approach to AI? Should we avoid it, adapt to it, or take a pragmatic approach?
I definitely support a pragmatic approach. Avoiding AI is not an option anymore. News organisations have come to terms with the fact that their content is being used to train LLMs. So instead of resisting, we need to engage with the technology. At FT Strategies, we’ve worked with the Google News Initiative on AI programs focused on two things: improving what we already do and exploring what new things we can do with AI. Tasks like headline generation and article summarisation are becoming business as usual. The next challenge is finding truly transformative use cases. That’s where the industry still has a way to go.
What are your and your company’s policies regarding AI and disinformation?
We have a cautious yet proactive stance. Protecting our content and IP is crucial, but we’re also exploring AI-powered audience-facing tools. One example is ‘Ask FT’, our generative search bot for B2B readers. It’s part of our effort to engage responsibly while leveraging AI’s potential.
What are your predictions for the near future regarding AI and disinformation?
I don’t have a crystal ball, but I think we’ll see more agentic and hyper-personalised news delivery. Imagine a news agent that knows everything about you, your location, interests, background, and delivers five things you need to know every morning. That’s exciting but also scary for legacy media: will users still recognise our brands and click through? On the misinformation side, last year saw major elections globally, and while misinformation existed, there wasn’t a catastrophic event. That makes it harder to convince people it’s a serious issue. The challenge is maintaining urgency and not letting the problem fade into the background. On the bright side, with a flood of low-quality ‘beige’ content, quality journalism might become more valuable than ever.
At FT Strategies, we have a deep knowledge of AI, technology & data, and what you need to future-proof your business. If you would like to learn more about our expertise and how we can help your company grow, please get in touch.
About the author

Aliya Itzkowitz is a Manager at FT Strategies, where she has worked with over 20 news companies worldwide. She previously worked at Dataminr, bringing AI technology to newsrooms, and at Bloomberg as a journalist. She has a BA from Harvard University and an MBA from Saïd Business School, University of Oxford. She is currently a member of the FT's Next Generation Board.