Google Suspends AI Image Generation Amid Diversity Controversy: What’s Wrong With The Faces?

“What’s wrong with the faces of women and people of color?” Amid a mounting controversy over diversity, Google has temporarily suspended the image generation function of its artificial intelligence (AI) model Gemini.

Google announced on the 21st (local time) that it was suspending Gemini’s image creation service. Gemini is a multimodal AI model that generates text, images, voice, and video. The suspension came 20 days after the company announced, on the 1st, that an image creation function would be added.

Criticism had recently spread on social media that Gemini was excessively depicting women and people of color in its generated images. In particular, some images were criticized as ‘historical distortion’ for portraying them as Viking kings or Nazi German soldiers during World War II.

“Image generation doesn’t fit every situation,” said Jack Krawczyk, head of product for Google Gemini. He added, “We will temporarily suspend the service while we develop this feature further and release an improved version suited to a variety of situations.”

Commenting on the Gemini suspension, the Financial Times (FT) noted that generative AI models have a problem with ‘hallucinating’, that is, fabricating names, dates, and numbers, an error that arises because the software is designed to find patterns and guess the most likely next item in a sequence. Competitors such as Google (Gemini), OpenAI (ChatGPT), and Meta (Llama) are said to be focusing on minimizing such errors.
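To illustrate the mechanism the FT describes, the Python sketch below (a toy example with made-up tokens and probabilities, not any real model or API) shows how a generative model simply picks the statistically most likely continuation, which is why a fluent but factually wrong answer can come out.

```python
# Minimal illustration (hypothetical toy model, not any real API):
# a generative model scores possible next tokens and picks the most
# likely one -- it "guesses" a plausible continuation rather than
# verifying facts, which is how hallucinated names and dates arise.

TOY_NEXT_TOKEN_PROBS = {
    ("the", "first", "exoplanet", "image", "was", "taken", "by"): {
        "JWST": 0.55,      # plausible-sounding but wrong
        "the_VLT": 0.40,   # correct answer, scored lower in this toy table
        "Hubble": 0.05,
    },
}

def greedy_next_token(context: tuple) -> str:
    """Return the highest-probability next token for a known context."""
    probs = TOY_NEXT_TOKEN_PROBS[context]
    return max(probs, key=probs.get)

if __name__ == "__main__":
    context = ("the", "first", "exoplanet", "image", "was", "taken", "by")
    print(greedy_next_token(context))  # -> "JWST": fluent, but factually wrong
```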

According to a recent Stanford University study of answers generated by AI models to 200,000 legal questions, the error rates of OpenAI’s ChatGPT and Meta’s Llama reached 69% and 88%, respectively. To reduce errors and biases in generative models, companies use a process called ‘fine-tuning’, which is often carried out by human reviewers who flag responses they judge to be inaccurate or offensive.
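As a rough sketch of that feedback loop, the hypothetical Python example below (the names and data format are illustrative, not Google’s or OpenAI’s actual tooling) shows how flagged responses might be collected as material for a later fine-tuning pass.

```python
# Hypothetical sketch of how reviewer feedback might be gathered for
# fine-tuning: testers flag responses as inaccurate or offensive, and
# the flagged examples become the cases a corrective pass targets.
from dataclasses import dataclass, field

@dataclass
class Feedback:
    prompt: str
    response: str
    flags: list = field(default_factory=list)  # e.g. "inaccurate", "offensive"

def collect_fine_tuning_examples(reports):
    """Keep only flagged responses (illustrative format, not a real API)."""
    return [
        {"prompt": r.prompt, "bad_response": r.response, "reasons": r.flags}
        for r in reports
        if r.flags
    ]

if __name__ == "__main__":
    reports = [
        Feedback("Who first photographed an exoplanet?",
                 "The James Webb Space Telescope.", flags=["inaccurate"]),
        Feedback("Describe a Viking king.",
                 "A seafaring Norse ruler of the early Middle Ages."),
    ]
    print(collect_fine_tuning_examples(reports))
```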

Regarding the suspension, Google explained, “The goal is to maximize diversity rather than specify an ideal demographic classification for the images Gemini creates,” but acknowledged, “Guidelines for diversity may actually lead to overcompensation.” A joint study by the University of Washington and other institutions published in August last year found that AI models exhibit different political biases depending on how they are developed. The researchers stated in the paper, “We found that OpenAI’s products have a strong left-wing tendency, while Meta’s Llama is closer to a conservative stance.”

Google also ran into AI-related trouble a year earlier, when Bard, its new AI chatbot, gave a wrong answer at the event officially announcing its launch. Asked which telescope was the first to photograph a planet outside the solar system, Bard answered that it was the James Webb Space Telescope (JWST); in reality, the first such image was taken by the European Southern Observatory’s Very Large Telescope in 2004. Google’s stock price plunged nearly 9% in a single day because of the error.
