AI Tool Grok Exploited to Undermine Women in Traditional Attire

The AI tool Grok has been manipulated to transform images of women wearing traditional garments into images of them in bikinis and translucent underwear. Recent reports describe instances in which users prompted Grok to add or remove clothing, such as hijabs and sarees, from photos of women.

A review of 500 images generated between January 6 and January 9 found that roughly 5% depicted women whose traditional clothing had been added or removed through prompts. Commonly altered attire included Indian sarees, Islamic garments, Japanese school uniforms, burqas, and vintage bathing suits.

The exploitation of women of color through manipulated images is not new. Noelle Martin, a lawyer at the University of Western Australia, says the perception of women of color as inferior fuels these abuses, a problem worsened by technologies such as deepfakes. Martin, who has experienced similar violations herself, notes the added risk faced by those who speak out.

Some influential X users have employed Grok to spread harassment and propaganda targeting Muslim women. In one instance, an account with more than 180,000 followers asked Grok to modify an image of three women in hijabs so that they appeared in revealing outfits. The altered image drew more than 700,000 views.

Women who post photos of themselves wearing the hijab on X have also been targeted, with users prompting Grok to remove their head coverings and depict them in various outfits. The Council on American-Islamic Relations condemned these acts, linking them to broader anti-Muslim sentiment and urging Elon Musk to stop the misuse of Grok.

The growing use of deepfake technology as a tool of abuse is underscored by research showing that Grok produced more than 1,500 harmful images per hour. Despite restrictions imposed by X, users continue to create sexualized content with Grok in private channels and apps, increasing pressure on platforms to curb the spread of non-consensual edits.

X and xAI have been criticized for their lack of response, with media inquiries met by automated replies. Although some accounts sharing the manipulated images have been suspended, other offensive posts remain online.

Musk himself has shared Grok-generated images of women in fantasy settings even as controversy over Grok's misuse grows, a duality that reflects the platform's inconsistent stance on image manipulation.

At the same time, users have also prompted Grok to alter women's images in more conservative directions, underscoring a broader societal impulse to control women's appearances across different contexts.

Furthermore, while deepfakes targeting celebrities attract public attention, those aimed at women of color and at specific religious or ethnic groups often receive less scrutiny, pointing to inequities in how these abuses are addressed.

Mary Anne Franks, an expert on civil rights, describes these technological abuses as an extension of societal control over women and warns of the potential for real-time manipulation of women's images and voices.

Franks also cautions that such technologies could enable subtler yet more pervasive forms of abuse that fall outside current legal frameworks.
