The proliferation of AI-written or AI-assisted news articles has accelerated since the mainstream adoption of AI chatbot tools like OpenAI’s ChatGPT and Microsoft’s Bing AI.
CNET was reported to be using AI tools to generate content, while Insider has announced that it will use AI to improve its stories.
While such cases may seem less concerning, since content published by major publications is presumably cross-checked and edited, Gizmodo reports that small, low-quality ‘content farms’ have now begun popping up that use AI to churn out news articles, many of which publish false and misleading information.
Gizmodo talked to Arjun Narayan, the Head of Trust and Safety for SmartNews, a news discovery app available in more than 150 countries. Narayan has years of experience in the trust and safety field, having worked for companies like ByteDance and Google.
According to Narayan, there are a few benefits and risks of using AI for news writing. He also talked about the ethical and social implications of using AI for media and the best practices for building trust and transparency with readers.
Narayan said the backbone of a future built on AI-written content is ensuring that AI systems are trained correctly and on truthful data. He noted that AI models tend to make up information, a phenomenon the industry calls “hallucination.” In cases where a model would otherwise hallucinate, it should be trained to respond instead with “I don’t have enough data; I don’t know.”
AI can also introduce bias into content, so writers need to check and curate whatever comes out of an AI system for accuracy, fairness and balance.
Narayan also discussed the transparency and accountability of AI-written content. AI can blur the line between human-written and machine-written stories, so writers need to disclose when their content is partially or fully generated by AI.
Further, the technology has the potential to take over human-held jobs and cause disruption. The speed at which AI is being adopted doesn’t give governments and regulators much time to prepare or put guardrails in place.
Narayan also sided with Google AI veteran Geoffrey Hinton, who recently came out to warn about the dangers of AI and expressed regret over his life’s work. Read more about it here.
Read the complete Gizmodo interview with Narayan here.
Image credit: Shutterstock