
Google says Android apps must allow you to report AI-generated content

Apps with user-generated content are already required to include an in-app reporting function

Google has announced that early next year it will require AI apps on the Google Play Store to include an option to “report or flag offensive AI-generated content.” Users must be able to report the content within the app itself, without navigating to an external form or website.

Google also reminded developers that AI apps must still comply with all other developer policies. That means the AI cannot generate anything Google deems “restricted content,” including hate speech, gratuitous violence, terrorist content, sexualization of minors, and health misinformation.

Not all apps with AI features will be affected. Google’s AI-Generated Content policy states it only covers generative AI apps, such as those offering text-to-image, voice-to-image, or image-to-image generation. It excludes “limited scope AI apps at this time,” and gives three examples of what qualifies.

The first is apps that use AI to summarize non-AI-generated content. The second is “productivity apps” where AI improves an existing feature, such as suggested responses to emails. The third is apps that host AI-generated content but don’t generate it themselves, like a social media app. However, those are still subject to the User Generated Content policy, which does require “an in-app system for reporting objectionable [user-generated content] and users.”

“This is a fast-evolving app category and we appreciate your partnership in helping to maintain a safe experience,” Google’s post said.

Source: Google Via: The Verge

