by Karl Furlong

In 2014, the Los Angeles Times created an algorithm to generate a story after an earthquake. The tool, Quakebot, brought AI into the company’s newsroom. By pulling data from the US Geological Survey and other trusted sources, the algorithm can generate a story and send it to an editor for review faster than any human could.
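At its core, Quakebot-style automation is template filling over structured feed data. The sketch below is a hypothetical illustration of that pattern, not the Times’ actual code; the field names mimic the kind of record the USGS publishes in its GeoJSON earthquake feeds.

```python
# Hypothetical sketch of template-driven quake reporting (not Quakebot's
# real implementation). It fills a story template from a structured event
# record and marks the draft as pending human review.

def draft_quake_story(event: dict) -> str:
    """Render a one-paragraph draft for an editor to review."""
    return (
        f"A magnitude {event['mag']:.1f} earthquake struck "
        f"{event['place']} on {event['time']}. "
        f"The quake occurred at a depth of {event['depth_km']:.0f} km, "
        "according to the US Geological Survey. "
        "This draft was generated automatically and is pending editor review."
    )

# Example record with made-up values in the style of a USGS feed entry.
sample = {
    "mag": 4.4,
    "place": "2 km east of Westwood, California",
    "time": "March 17, 2014",
    "depth_km": 9,
}
print(draft_quake_story(sample))
```

The speed comes from the fact that nothing here requires judgment: once the feed delivers the numbers, rendering the draft is instantaneous, and the human editor’s time is spent only on review.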

Speed, as well as the formulaic nature of certain types of reporting, has encouraged other news outlets to embrace AI. The Associated Press (AP), for example, uses it to create over 40,000 stories per year.

The AP and the LA Times are not the first newsrooms to incorporate AI, and they won’t be the last. In a 2019 global survey on the future of AI, the London School of Economics (LSE) reported that AI “has the potential for wide-ranging and profound influence on how journalism is made and consumed.”

At the same time, AI and the algorithms that power it are infiltrating the everyday lives of internet users through social media platforms, e-commerce websites, the content we watch on Netflix, and the music we listen to on Spotify. Social media platforms use these algorithms to determine which ads to serve the user, which news to present, and in what order that news will be presented.

Defined by Encyclopaedia Britannica as the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings, AI has grown significantly since the LA Times introduced Quakebot in 2014. From the streaming of movies and songs, to the sale of online goods, to the advertisements users are shown, AI’s presence is now ubiquitous.

Against this backdrop, on day two of UNESCO’s World Press Freedom Day (WPFD) conference in Uruguay, a panel of experts debated the benefits and dangers of AI and its impact on freedom of expression.

In his opening remarks, Fabrizio Scrollini, the Executive Director of the Latin American Open Data Initiative, said he felt that AI was a tool that “enhanced freedom of expression.” Scrollini pointed out that AI is being used in small newsrooms in developing countries to assist with tasks previously beyond their capabilities. AI has enhanced the analytical abilities of these newsrooms to “make sense of large sets of documents that wouldn’t be possible to actually understand without the help of AI.” It can do this by scanning data to look for patterns or anomalies.
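The kind of pattern-spotting Scrollini describes can start very simply: flag the records in a document set that sit far from the rest, and let a journalist decide whether they are newsworthy. This is a toy sketch of that idea under assumed data (made-up vendors and amounts), not any newsroom’s actual tooling.

```python
# Toy illustration of "anomaly scanning" a document set: flag records whose
# values lie unusually far from the mean, as candidates for a reporter to
# inspect by hand.
from statistics import mean, stdev

def flag_outliers(records: list[dict], field: str, threshold: float = 1.5) -> list[dict]:
    """Return records whose `field` is more than `threshold` standard
    deviations from the mean. Note: with a sample of n points the z-score
    is capped at (n-1)/sqrt(n), so small sets need a modest threshold."""
    values = [r[field] for r in records]
    mu, sigma = mean(values), stdev(values)
    return [r for r in records if sigma and abs(r[field] - mu) / sigma > threshold]

# Hypothetical public-contracts data; vendor E's amount stands out.
contracts = [
    {"vendor": "A", "amount": 10_000},
    {"vendor": "B", "amount": 12_000},
    {"vendor": "C", "amount": 11_500},
    {"vendor": "D", "amount": 9_800},
    {"vendor": "E", "amount": 250_000},
]
print(flag_outliers(contracts, "amount"))
```

Real newsroom projects layer machine learning, entity extraction, and document clustering on top, but the workflow is the same: the machine narrows millions of records down to a shortlist a human can actually read.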

As Google has explained, “a wide array of media organizations – including Bloomberg, The Washington Post, and the Associated Press – have started to deploy different AI and machine learning techniques to automatically produce news stories at scale. The main goal is to allow journalists to focus on the most creative aspects of their job, leaving repetitive tasks to the machine.”

And while there are clear business benefits associated with this technology, as the UNESCO panel discussed, there are also concerns. Do the algorithms implemented by online platforms and social media companies allow the user to freely choose? Do these platforms allow the user freedom of expression?

Scrollini believes that these tools have “the power to enhance the freedom of expression and enhance the freedom of the press.”

Rodica Ciochina, a programme officer in the Internet Governance Unit at the Council of Europe, believes that while AI is important, guardrails clearly need to be put in place to ensure voices aren’t drowned out. The concern, Ciochina says, is that large social media companies and news platforms will control the online narrative and crowd out local and individual voices. AI lies at the heart of how many of those operations work.

Ciochina stated that news organizations can become reliant on these tools as the pace of the news cycle speeds up and they are “compelled to keep up.” Ciochina added that the need for speed could lead “to the loss of control of curation and takes energy away from fact checking and debunking mis and disinformation.”

Ciochina explained that freedom of expression is limited when social media organizations and newsrooms use AI to curate news articles and feed users limited stories that will impact their views and opinions.

Charlie Beckett is the director of the Journalism AI Project at LSE, and he believes this is a narrow view of how newsrooms generally deploy AI. “As the journalist, you are responsible for what you publish,” Beckett said.

AI does not limit the scope of the news stories published; on the contrary, Beckett said, AI allows newsrooms to gather and aggregate large amounts of data to improve their reporting. Beckett explained that most newsrooms don’t have the IT resources to implement overly complicated AI systems. Newsrooms use AI to help journalists write better stories, not to replace them.

Rumman Chowdhury of Twitter believes that while freedom of expression is important, a large organization like Twitter needs to balance that with other freedoms, such as the freedom from harassment and other toxic behaviors.

Chowdhury went on to explain that while the responsible use of AI is important, it is critical that we don’t lose sight of the fact that “there is no world in which there isn’t some sort of curation happening.” Smaller websites use human curation and have editors determining which material to serve their audience (as newspapers, broadcast news and radio bulletins always have), while larger platforms like Twitter and YouTube rely on machine learning due to the sheer size of their operation and the volume of content found on them.

While the panelists may not have agreed on whether regulations are needed for AI to ensure the freedom of expression, there was consensus amongst the group that regulation of one form or another was inevitable.

In his closing remarks, Scrollini referred to the launch of his organization’s Global Data Barometer, “a tool that measures governance, availability, use, and impact of data in 110 countries across the world.”

One key dimension to this debate is the need for transparency about how AI is being used. Outlets like the AP, for example, label content that has been produced by AI, so it’s clear to readers that a piece is robo-journalism. At the same time, there’s an ongoing need to educate consumers about how algorithms work and the implications for what we read, watch, and listen to.

A 2020 study of 130 news organizations by the Knight Foundation found that they mainly use AI to help with newsgathering, not automatic story generation. The main uses of AI are to “comb through large document dumps with machine learning, detect breaking news events in social media, and scrape Covid-19 data from government websites.”
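The scraping workflow the study mentions usually boils down to parsing a published table and surfacing what changed since the last report. The sketch below assumes a hypothetical CSV feed with made-up field names and figures; no real government endpoint is involved.

```python
# Sketch of the Covid-data-scraping pattern the Knight study describes:
# parse a government-published CSV and surface day-over-day changes that
# might be worth a reporter's attention. All field names are hypothetical.
import csv
import io

def daily_changes(csv_text: str) -> dict[str, int]:
    """Map each region to the change between its last two reported counts."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    by_region: dict[str, list[int]] = {}
    for row in rows:  # group case counts per region, in date order
        by_region.setdefault(row["region"], []).append(int(row["cases"]))
    return {region: counts[-1] - counts[-2]
            for region, counts in by_region.items() if len(counts) >= 2}

feed = """date,region,cases
2020-07-01,North,120
2020-07-01,South,80
2020-07-02,North,150
2020-07-02,South,95
"""
print(daily_changes(feed))  # {'North': 30, 'South': 15}
```

In practice the fetch step (downloading the file on a schedule) is the fragile part; the arithmetic, as here, is trivial, which is exactly why it is worth automating.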

Of the projects studied by the Knight Foundation, only 9% involved algorithmic curation, and all of these were at large news organizations. The cost and complexity of implementing these algorithms make it impractical for cost-conscious small newsrooms.

Beckett agrees. When asked whether he sees algorithms as the main concern surrounding AI, he said, “there aren’t any plugin tools, and it doesn’t enhance news gathering, presenting, and sourcing.”

Alfonso Peralta Gutiérrez, a judge in the Criminal Investigation Court of Spain, believes it is important to understand not just the impacts of these algorithms but also their limitations. “We must not be frightened of AI. We must be trained about AI.”

###