Lisa Priller-Gebhardt
Marie Kilg – journalist and DW innovation manager. © David-Pierce Brill
Ever since ChatGPT opened the world’s eyes to the capabilities of artificial intelligence, the use of AI models in journalism has been the subject of intense discussion in Germany too. Marie Kilg is a journalist and innovation manager at Deutsche Welle (DW). She explains what the technology is and is not capable of, what risks it entails and whether it will have a detrimental effect on journalism in the long term.
Ms Kilg, where is AI already being used in everyday editorial work?
Let me start by making it clear: AI comes in all kinds of forms. Automated journalism has been around for some time, used for example in reporting on the stock exchange and sports. Generative AI tools, which have now achieved greater relevance due to ChatGPT, can do more, for example summarising texts, writing articles and generating images. The German daily newspaper taz currently features a monthly column with generative texts by a fictitious AI writer that I developed with a team. Another example, which is not something one would exactly encounter every day: In the ‘Münchner Runde’ chat show broadcast by Bayerischer Rundfunk, ChatGPT enabled a robot named Pepper to take part in a discussion with the studio audience.
What are the advantages for editors of using AI?
For one thing, AI can do jobs that are time-consuming or not particularly inspiring. It can help us transcribe interview files, for instance, so this no longer needs to be done by hand, leaving more time for research. For another, ChatGPT can help create scripts and texts based on bullet points. This not only makes us quicker, but also helps some people overcome writer’s block by providing them with a text that only needs to be adapted to individual needs. In this sense, AI becomes a kind of virtual sparring partner.
The AI software cannot distinguish between fact and fiction. What risks does that entail?
As long as one adheres to journalistic standards, AI is no more dangerous than a Google search or Wikipedia research. The only risk comes when people use the technology without understanding what it is they are actually working with and where its boundaries lie. And yes, AI can also be used for malicious purposes – the buzzword in this context is fake news. We have to take steps to combat this.
Who is actually liable if AI makes a mistake?
The technology is developing so rapidly that legislation is struggling to keep up. In AI ethics and the media, most people agree that humans should not relinquish responsibility. The news agency dpa, for instance, recently issued guidelines for dealing with AI, which stipulate among other things that AI should only be used “under human supervision”.
Will it put journalists out of work in the long term?
Jobs are changing all the time, and this also applies to journalism. Who would have thought just a few years ago that TikTok creator could be a job in journalism? AI will change the way journalists work to an even greater extent. But the good news is: journalism as a basic model will continue to exist. We will still need people who take the time to summarise complex matters in such a way that others can understand them and form an opinion on that basis, thereby enabling them to play an active part in shaping society.
Courtesy: Deutschland
https://www.deutschland.de/en/topic/culture/ai-and-media-in-germany-journalism-and-chatgpt
The views expressed in this article are the author’s own and do not necessarily reflect Coverpage’s editorial stance.