News

Google Revives Controversial AI-Generated Human Images with Stricter Guidelines

After halting its AI-generated image feature earlier this year, Google is bringing back the capability to create AI-generated images of people through its Gemini AI suite. The feature had previously faced sharp criticism for producing controversial and historically inaccurate depictions, prompting the company to pause the tool in February. Now, in a move aimed at restoring trust while balancing innovation, Google has announced that this feature will soon be available again—but with limits.

The rollout will be limited to customers on the Gemini Advanced, Business, and Enterprise tiers, according to an announcement from Dave Citron, Google's senior director of product management for Gemini Experiences. Like other AI image tools such as DALL-E and Stable Diffusion, the updated feature lets users generate images from text prompts. However, it arrives with notable restrictions, following several months of technical refinement and ethical red-teaming.

The controversy erupted when the AI produced images that critics, including Elon Musk, labeled "historically inaccurate" and "biased." One viral example showed Black individuals in World War II-era German uniforms; another depicted a female pope. These depictions drew widespread public backlash, forcing Google to pause the tool and re-evaluate its image-generation capabilities.

In response, CEO Sundar Pichai issued an internal memo acknowledging the issue, stating that the errors were “unacceptable” and had offended users. Google’s stock experienced a temporary dip during the controversy as the company scrambled to address the criticism.

Returning with Key Restrictions

As part of its cautious revival, the updated feature won't generate photorealistic images of identifiable people, depictions of minors, or graphic content involving violence, gore, or sexuality. The system has been designed to prevent the creation of offensive material and now includes additional safety checks to filter its output. Google acknowledged that while the safeguards are much improved, occasional missteps may still occur, though the company says it has made "significant progress" toward responsible AI usage.

The company also shared examples of the AI's image-generation capabilities in its latest blog post, though none of them included human figures. Instead, the post features innocuous, visually striking creations, such as a tiny dragon hatching in a meadow, intended to highlight the tool's creative possibilities without stirring controversy.

The return of this feature will be closely watched, and Google's cautious approach underscores the balancing act tech companies face as they respond to growing public demand for responsible AI while continuing to push the boundaries of what these tools can accomplish.