Meta to use AI for product risk assessment

Meta is reportedly planning a significant shift in its product risk assessment process. Instead of relying primarily on human reviewers, the company is increasingly turning to artificial intelligence to expedite the evaluation of potential harms associated with new features and updates across its platforms, including Instagram and WhatsApp.

Internal documents suggest that Meta aims for AI to handle up to 90 percent of these assessments. This includes areas previously overseen by human teams, such as youth risk and content integrity (encompassing violent content and misinformation).

Concerns Regarding AI Oversight

However, this transition has raised concerns among current and former employees, who worry that AI might overlook critical risks that human reviewers would readily identify. Their central concern is that less rigorous scrutiny could allow harmful features to reach users before problems are caught.

While Meta maintains it will continue to use human expertise for complex issues, the increased reliance on AI for "low-risk decisions" raises questions about the robustness of this approach. The speed and efficiency offered by AI are undeniable, but the trade-off in terms of potential oversight remains a point of contention.

Recent integrity reports from Meta show a decrease in content takedowns following its policy changes, alongside a slight increase in bullying, harassment, and violent content. This data adds to the debate over the potential consequences of shifting to a more AI-centric review system.

Source: Engadget