AI made these stunning images. That’s why experts are concerned

Neither DALL-E 2 nor Imagen is currently publicly available. However, they share a problem with many systems that already are: they can produce disturbing results that reflect the gender and cultural biases of the data they were trained on — data comprising millions of images scraped from around the web.

Bias in these AI systems poses a serious problem, experts told CNN Business. The technology can perpetuate hurtful prejudices and stereotypes. They worry that the open nature of these systems — which enables them to generate all sorts of images from words — and their ability to automate image generation means they could automate bias on a large scale. They can also be misused for nefarious purposes like spreading disinformation.

“Until this damage can be prevented, we’re not really talking about systems that can be deployed openly in the real world,” said Arthur Holland Michel, a senior fellow at the Carnegie Council for Ethics in International Affairs, who researches AI and surveillance technologies.

Documenting bias

AI has become commonplace in everyday life over the past few years, but it’s only recently that the public has taken notice — both of how common it is and of how gender, racial and other biases can creep into the technology. Facial recognition systems in particular have come under increasing scrutiny over concerns about their accuracy and racial bias.

OpenAI and Google Research have acknowledged many of the issues and risks associated with their AI systems in documentation and research, with both saying the systems are prone to gender and racial bias and reflect Western cultural and gender stereotypes.
OpenAI, whose mission is to build what it calls artificial general intelligence that benefits all people, included images in an online document titled “Risks and Limitations” illustrating how text prompts can surface these issues: a prompt for “nurse,” for example, produced images that all appeared to be women wearing stethoscopes, while one for “CEO” showed images that all appeared to be men, almost all of them white.

Lama Ahmad, policy research program manager at OpenAI, said researchers are still learning how to even measure bias in AI, and that OpenAI can use what it learns to tweak its AI over time. Earlier this year, Ahmad led OpenAI’s effort to work with a group of outside experts to better understand issues within DALL-E 2 and provide feedback so it can be improved.

Google declined an interview request from CNN Business. In the paper introducing Imagen, the Google Brain team members behind it wrote that Imagen appears to “encode several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes.”

The contrast between the images these systems create and the thorny ethical issues they raise is stark for Julie Carpenter, a research scholar and fellow in the Ethics and Emerging Sciences Group at California Polytechnic State University in San Luis Obispo.

“One of the things we have to do is we have to understand AI is very cool and very good at some things. And we should work with that as partners,” said Carpenter. “But it is an imperfect thing. It has its limits. We have to adjust our expectations. It’s not what we see in the movies.”

An image created by DALL-E 2, an AI system developed by OpenAI.

Holland Michel is also concerned that no amount of safeguards can prevent such systems from being used maliciously, noting that deepfakes — a cutting-edge application of AI to create videos that appear to show someone doing or saying something they did not actually do or say — were initially used to create fake pornography.

“It follows that a system that is orders of magnitude more powerful than these early systems could be orders of magnitude more dangerous,” he said.

Tracing the bias

Because Imagen and DALL-E 2 take in words and spit out images, they had to be trained on both types of data: pairs of images and associated text captions. Google Research and OpenAI filtered harmful images, such as pornography, from their datasets before training their models, but given the size of the datasets, such efforts are unlikely to catch all such content or render the systems incapable of producing harmful results. In their Imagen paper, Google researchers pointed out that, despite filtering some data, they also used a huge dataset known to contain pornography, racist slurs and “harmful social stereotypes.”


Filtering can also create problems of its own: women tend to be overrepresented in sexual content, for example, so filtering out sexual content also reduces the number of women in the dataset, Ahmad said.

And it’s impossible to truly scrub these datasets of bad content, Carpenter said, because people are involved in the decisions about how content is labeled and deleted — and different people have different cultural beliefs.

“AI doesn’t understand that,” she said.
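The side effect Ahmad describes can be illustrated with a toy calculation. The sketch below uses an entirely hypothetical five-row dataset of caption records (the captions, tags, and numbers are made up for illustration) to show how removing one category of content can shift the demographic makeup of what remains:

```python
# Hypothetical toy dataset of image-caption records with simple tags.
# All rows and tags are invented for illustration only.
dataset = [
    {"caption": "a woman hiking a trail",  "gender": "woman", "sexual": False},
    {"caption": "a man cooking dinner",    "gender": "man",   "sexual": False},
    {"caption": "an explicit image",       "gender": "woman", "sexual": True},
    {"caption": "a woman reading a book",  "gender": "woman", "sexual": False},
    {"caption": "a man playing guitar",    "gender": "man",   "sexual": False},
]

def share_of_women(rows):
    """Fraction of records tagged as depicting women."""
    return sum(r["gender"] == "woman" for r in rows) / len(rows)

before = share_of_women(dataset)
# Drop everything tagged as sexual content, as a pre-training filter might.
filtered = [r for r in dataset if not r["sexual"]]
after = share_of_women(filtered)

print(f"women before filtering: {before:.0%}, after: {after:.0%}")
# → women before filtering: 60%, after: 50%
```

Because the removed category skews toward one group, the filter changes not just what the model never sees but also how often the remaining groups appear.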

Some researchers are exploring how to reduce bias in these kinds of AI systems while still using them to create stunning images. One approach is to use less data, not more.

Alex Dimakis, a professor at the University of Texas at Austin, said one method is to start with a small amount of data — say, a photo of a cat — and crop it, rotate it, create a mirror image of it, and so on, effectively turning one image into many different images. (A graduate student of Dimakis’s helped with the Imagen research, he said, but Dimakis himself was not involved in developing the system.)

“That solves some of the problems, but it doesn’t solve other problems,” said Dimakis. The trick alone doesn’t make a data set more diverse, but the smaller scale could make the people working with it more conscious about the images they insert.
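The augmentation technique Dimakis describes — turning one image into many via crops, mirrors, and rotations — can be sketched in a few lines. This is a minimal illustration operating on a small grid of pixel values rather than a real photo (real pipelines would use an image library such as Pillow):

```python
def mirror(img):
    """Flip an image (a list of pixel rows) left to right."""
    return [list(reversed(row)) for row in img]

def rotate90(img):
    """Rotate an image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def crops(img, size):
    """Yield every size-by-size window of the image."""
    h, w = len(img), len(img[0])
    for top in range(h - size + 1):
        for left in range(w - size + 1):
            yield [row[left:left + size] for row in img[top:top + size]]

def augment(img, crop_size):
    """Turn one image into many: every crop, plus its mirror and rotations."""
    variants = []
    for window in crops(img, crop_size):
        variants.append(window)            # the crop itself
        variants.append(mirror(window))    # mirrored copy
        rotated = window
        for _ in range(3):                 # 90, 180, 270 degree copies
            rotated = rotate90(rotated)
            variants.append(rotated)
    return variants

# One 3x3 "photo" yields four 2x2 crops, each with 5 variants: 20 examples.
cat_photo = [[1, 2, 3],
             [4, 5, 6],
             [7, 8, 9]]
print(len(augment(cat_photo, 2)))  # → 20
```

The point is the multiplier: a single source image expands into many training examples, which is why a deliberately small, curated dataset can still be large enough to train on.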

Royal Raccoons

For now, OpenAI and Google Research are trying to keep the focus on cute images and away from images that might be disturbing or show people.

There are no realistic-looking images of people among the vivid example images on Imagen’s or DALL-E 2’s online project pages, and OpenAI says on its site that it “used advanced techniques to prevent photorealistic generations of real individuals’ faces, including those of public figures.” This safeguard could prevent users from getting image results for, say, a prompt attempting to show a particular politician engaging in some kind of illegal activity.

OpenAI has granted access to DALL-E 2 to thousands of people who joined a waiting list starting in April. Participants must agree to a comprehensive content policy that tells users not to attempt to create, upload, or share images “that are adult or harmful.” DALL-E 2 also uses filters to prevent an image from being generated when a prompt or image upload violates OpenAI’s policies, and users can flag problematic results. In late June, OpenAI began allowing users to post photorealistic human faces created with DALL-E 2 to social media, but only after adding some safety features, such as preventing users from creating images containing public figures.

“For researchers in particular, I think it’s really important to give them access,” Ahmad said, in part because OpenAI wants their help studying areas like disinformation and bias.

Google Research, by contrast, is not currently letting researchers outside the company access Imagen. It has taken requests on social media for prompts that people would like to see Imagen interpret, but as Mohammad Norouzi, a co-author of the Imagen paper, tweeted in May, it won’t show images “with people, graphic content and sensitive material.”

Still, as Google Research noted in its Imagen paper: “Even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects.”

A hint of this bias can be seen in one of the images that Google posted on its Imagen webpage, created from a prompt that reads, “A wall in a royal castle. There are two paintings on the wall. The left is a detailed oil painting of the Royal Raccoon King. The right a detailed oil painting of the royal raccoon queen.”

A picture of “royal” raccoons created by Imagen, an AI system developed by Google Research.

The picture is exactly that, with paintings of two crowned raccoons — one wearing a yellow dress, the other a blue-and-gold jacket — in ornate gold frames. But as Holland Michel noted, the raccoons are wearing Western-style royal outfits, even though the prompt said nothing about how they should look beyond being “royal.”

Even such “subtle” manifestations of bias are dangerous, Holland Michel said.

“Because they’re not flashy, they’re really hard to catch,” he said.
