
Sora AI Video Generator Exposes Bias Issues
Despite impressive advancements in video quality, AI video generators like OpenAI's Sora are showing persistent biases. A new investigation reveals that Sora perpetuates harmful stereotypes related to gender, race, and disability.
Stereotypes in AI-Generated Videos
The investigation found that Sora often depicts men as pilots, CEOs, and professors, while women appear as flight attendants, receptionists, and childcare workers. Disabled people are consistently portrayed in wheelchairs, and the system struggles to depict interracial relationships. This lack of diversity and reinforcement of traditional roles raises concerns about the potential impact of AI-generated content.
OpenAI acknowledges the bias problem and says it has teams dedicated to reducing it, suggesting that adjusting training data and user prompts can produce less biased videos. However, the company has not provided specific details about its methods.
The Root of the Problem
Bias in generative AI stems largely from the vast amounts of training data these systems ingest, which often reflect existing societal biases. Developers' choices during content moderation can further entrench these biases, and research suggests that AI systems can even amplify them beyond human levels.
Potential Harm and Mitigation
The widespread use of AI video in advertising and marketing could exacerbate the stereotyping and erasure of marginalized groups, and AI-generated videos used to train security or military systems could lead to even more dangerous outcomes. Addressing AI bias requires a multidisciplinary approach involving diverse perspectives and real-world testing, not just technical fixes.
Source: Wired