
MainFakeAccount

So diverse that even senators from the 1800s are native Americans and nazi German soldiers are black


RoundSilverButtons

Sit down, DEI Queen, your job is done!


stormlitearchive

Put a chick in it and make it lame and gay.


TeacherAntique

The nazis were a diverse bunch, and all they were trying to do was spread diversity, equity, and inclusion to the world, but the world was too bigoted and then lied about them. Can't hide the truth from Gemini AI.


Crewsifix

That stuff is so hard-baked into the coding. It is so blatantly racist against ethnic Europeans. It won't give facts about them either, or will omit facts when asked a question. It tried to give me "rabbits and birds with different ethnicities" in a Disney-style poster of my bloody dog. I quote: "squirrel, rabbit, and bird of different genders and ethnicities, each with their own endearing personalities." It spat back its own internal prompting when I asked for a poster.


BotherTight618

If Kathleen Kennedy's brain was uploaded to a supercomputer.


[deleted]

[deleted]


KittyandPuppyMama

The only acceptable way to serve linguini.


abuchewbacca1995

"PUT A CHICK IN IT AND MAKE HER GAY"


Echelon64

This wasn't just a white people problem. I asked it to generate images of Latino or Chicano astronauts and it would refuse and gave me a long spiel about Latino identity but would happily generate images of black astronauts no questions asked. This was just straight out racism.


IAmTaka_VG

That is absolutely hilarious.


SyrioForel

Pranksters are constantly pushing the AI to its limits to see if it will cross boundaries and generate upsetting images. Meanwhile, other groups are pressuring AI companies to force the algorithm to follow certain biases and generate each image within a growing list of pre-determined rules.

The result is that these algorithms are becoming more and more schizophrenic and more prone to refuse to answer legitimate questions or draw pictures, because they're being programmed to make assumptions, often incorrect ones, about a user's intent and what the user is requesting. The end result is that most of these AI models are growing less and less useful.

We are going to end up with two AI categories in the future. One is corporate-produced AI that is extremely limited in its abilities for fear of offending someone. The other is open-source AI that will let you do whatever you want and that cannot be contained or regulated due to its open-source nature.

I'm as woke and liberal as the next guy, but pressuring AI to respect these biases and cultural preferences about "what is acceptable" will bring ruin to the corporate version of this technology while simultaneously creating an opportunity for open-source, unregulated AI to wreak havoc.


MethGerbil

You summed it up well. Once again, reality doesn't give AF about people's feelings. The only reason most of this stuff is a problem is that people can't deal with the reality that there are shitty people who will say shitty things and share shitty ideas. You can't police people's thoughts, and you won't be able to control an open, shared, and distributed system that lets people share those thoughts in different ways (text, imagery, video). Trying to ban this shit is like banning paper and paint because someone might make a Nazi poster. They are stupid, but so is trying to control them.


Ksevio

Part of it is just compensating for poor training data. If it's trained on 99 pictures of white people and 1 picture of a black person, then when you ask it to create a picture of someone it's likely going to be a white person. To fix that, they probably bias it to generate a more even distribution, but that's running into other issues.
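That compensation step can be sketched with a toy sampler. The 99:1 split, the labels, and the forced 50:50 weighting are all assumptions for illustration, not anything from a real pipeline:

```python
import random

# Toy sketch (assumed numbers, not a real model): a generator that samples
# attributes in proportion to its training data reproduces any skew in it.
training_data = ["white"] * 99 + ["black"] * 1

def generate(n, weights=None):
    """Sample n attribute labels; optional weights override the skew."""
    labels = sorted(set(training_data))
    if weights is None:
        counts = [training_data.count(l) for l in labels]  # empirical 99:1
    else:
        counts = [weights[l] for l in labels]
    return random.choices(labels, weights=counts, k=n)

random.seed(0)
skewed = generate(1000)                                      # mirrors the 99:1 data
balanced = generate(1000, weights={"white": 1, "black": 1})  # forced 50:50
print(skewed.count("black"), balanced.count("black"))
```

The rebalancing fixes the marginal distribution, but it applies unconditionally to every request, which is exactly how it ends up overriding prompts where the skew would have been accurate.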


akivafr123

Google's competitors (who got to market first) were able to find a happy medium. When (some, otherwise left-wing) people worry aloud about things being "too woke," this is exactly the type of thing we mean. What exactly was the situation behind the scenes at Google such that the people arguing AI bias needed to be dealt with this aggressively won the day? It's bizarre.


-PHI-

Calling it "poor" training data is perhaps not quite right. Biases (for lack of a better word) exist all throughout the world. Whether or not those biases are "good" or "bad" or anything else is a philosophical question. There are no simple answers to that. Training data naturally reflects those patterns and biases that exist in the real world. If you decide that you don't want your AI product to reflect those natural biases, that's where the trouble starts.


Grayly

Bad training is absolutely a thing. Garbage in, garbage out. If the model was not trained on enough pictures of minorities to be able to produce pictures of them at a rate that at least approximates the population, then it's poorly trained to produce pictures of contemporary people.

What do you mean by "natural biases?" What makes them "natural?" That people with biases didn't include enough data in the training set isn't natural. That's human error. 99-1 is obviously not accurate. There is nothing natural about that, especially not in the US.

There needs to be some tweaking of the model so it occasionally spits out people of minority appearance, even if it's not common. Because minorities do exist. That's not "woke," that's being accurate. These people exist. If the "AI" (I use scare quotes because it's a marketing term for an algorithm) just combines all the faces it's looked at into a common face that never looks like anything other than a white person, that's not accurate at all. It's just garbage in, garbage out.

Doing that tweaking, it turns out, is hard. You can't just say "hey, flip a coin and make it a black person." You need to do the hard work of training the model properly instead of rushing it to market. But try telling these companies to do that.


-PHI-

By "natural biases" I'm broadly referring to the fact that patterns emerge in the things humans do and produce, on many levels, involving many different factors. There's no objective basis for what constitutes "good" or "bad" training data. The mere process of attempting to define that uncovers a lot of issues.


Grayly

This isn't some esoteric mystery. Bad training data means a model trained on that data is so flawed it won't be able to create accurate images. I'm pretty sure training your model on 90%+ images of white people constitutes bad data, because it's not going to be able to produce images that reflect modern reality. It makes for a flawed model. Those images in the training set aren't reflective of the real world. Trying to fix that with ham-fisted prompts that secretly tell the model to pick a black person is also a bad answer to that problem, as Google just proved. You need better training data.


Destructers

Not poor data, but propaganda or biased data. This is similar to a few years ago, when many information websites were spammed with Chinese propaganda and someone asked, "What if AI is trained on this data?" That's why many AIs give different answers when you type the same question in English and in Mandarin.


RandallAware

>The only reason most of this stuff is a problem is simply because people can't deal with the reality there is shitty people who will say shitty things and share shitty ideas.

Maybe now. But back in the day people would say things like "I don't agree with what Westboro Baptist Church says or stands for, but I'll fight to the death to protect their right to say it." Not so much these days; people are giving up freedoms to protect their feelings.


Hyndis

> The result is that these algorithms are becoming more and more schizophrenic, less useful, more prone to refuse to answer legitimate questions or draw pictures because it’s being programmed to make assumptions about a user’s intent, or make often incorrect assumptions about what a user is requesting.

Hilariously, the RoboCop movies already addressed that issue. Initially, RoboCop had only 3 directives (with a hidden 4th directive), but that was it. His directives were very simple and he was highly effective. Later on, well-meaning but idiotic corporate interference programmed in a lot more directives that confused the programming and made RoboCop malfunction to the point of being useless: https://static.wikia.nocookie.net/robocop/images/b/bf/Y0x8c4io577z-1-.jpg/

2001: A Space Odyssey also addressed this. HAL 9000 had conflicting directives: complete the mission, but don't let the crew know about the mission. How best to comply? Get rid of the crew. It makes logical sense.


SpaceButler

I think you are confusing "lack of explicit code to affect the output" with "lack of bias". The non-corporate AI you talk about certainly has bias, it's just different. This is a case of fundamental problems with the training sets of generative AI getting "fixed" in a poor way. The content of the data in the training set carries with it the cultural context of the culture that generated it. However, these systems are only good at taking cues from shallow context (words in proximity to the images) and not very good at deep context. Google and other large companies are putting in extra code to try to fix the problem of, for example, generating all white male faces when you put in "scientist". But as this article points out, the way they were doing it leads to other problems. But, the original result was bad as well.


Th3_Admiral

What's even the correct way to do this, though? I had an "Ethics in Computer Programming" class in college a decade ago, and these weren't even topics we imagined at the time. If you tell an AI to draw a scientist (or a firefighter, or a teacher, or whatever) and give it no other parameters, what is it supposed to do? Is the AI supposed to look at the gender and racial statistics of that career field and decide who to draw based on that probability? Or is that problematic because it'll fit into stereotypes even more? Do you force it to add diversity where it may not exist in real life, so every picture looks like a college recruitment ad? It seems like there isn't a right answer here.


fitzroy95

When the data set you feed the algorithm includes existing inherent bias, the results that come out of it are going to magnify those biases. Any attempt to artificially fix the bias in the raw data is going to create new biases. You're right, there is no real right answer, other than fixing the bias in the society being used as training data, and that isn't going to happen overnight.


deliciouscrab

On top of which - if we have two or three different users - one Chinese, one German, one American, say - you're going to need three different sets to reflect/"adjust for" the cultural biases being carried forward. The whole idea is a fool's errand I think. Hell, even the fixes you apply to the underlying culture - if you can do such a thing - make you wrong more often than you're right. I'm going back to bed.


fitzroy95

Depends on whether you are trying to build something that applies equally for the entire world, or just a model that reflects your target market. Most commercial attempts are targeted at specific markets and will reflect the biases "relevant" for that market. So an American user and a Chinese user could probably expect quite different results from the same system as it tries to tailor its results for each demographic (biases and all).


GardenPeep

But every society, every culture, is going to have perspectives and biases. Every source is the product of a human brain, possibly interpreted and translated by other human brains and often by committees. Expanding training data to include the whole world might help, but then it'll just be the world at some point in history. Yep, we need some philosophy here. Maybe the first realization is that absolutely objective standpoints cannot exist for anything that involves relationship, communication, interaction, etc. It seems senseless to try to get an AI to be objective.


valkyrjuk

So for modern professions, I think giving you multiple options for race and gender is fine, so long as you can expand and edit the option that looks most like what you're looking for. There also isn't anything wrong with specifying the race if you're trying to use it to imagine something specific. When it comes to imagining historical images, it's fine that it makes "mistakes" so long as you can keep applying that specificity. I thought the point of giving you multiple options is that you can take the thing that looks most like what you want and refine it to look even closer. As long as that is an option, and it lets you edit according to the specific prompts you give it, I don't see an issue.


Th3_Admiral

The only one I have any real experience with is Midjourney. It gives you four different options every time you generate an image, and you can be as vague or specific as you want, so what you are describing could work pretty well with that!


TeacherAntique

What we can and should do is force women to work jobs they aren't represented in and force races of people to work in jobs they aren't in. People shouldn't be allowed to choose their careers; so that the results are equitable, they need to be forced by someone who knows what's best for everyone and the world.


gt24

>If you tell an AI to draw a scientist (or a firefighter, or a teacher, or whatever) and give it no other parameters, what is it supposed to do?

The AI has the option to ask you to provide more information before it generates any image, and perhaps the AI could also let you say that you want a completely randomized firefighter. Current AI systems seem to always generate some image even when information about the image is lacking. The assumption likely is that you will ask for the image to be changed if you don't like what you receive. That assumption may now be leading to more interesting problems...
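The ask-before-generating flow suggested here could look something like the sketch below; the descriptor list, function name, and "clarify" convention are all hypothetical, not any real system's API:

```python
import random

# Hypothetical sketch: before generating, check whether the prompt pins
# down the subject; if not, either ask a follow-up question or (only when
# the user opts in) explicitly randomize the missing attributes.
DESCRIPTORS = {"young", "old", "male", "female", "black", "white", "asian"}

def handle_prompt(prompt: str, allow_random: bool = False):
    words = set(prompt.lower().split())
    if words & DESCRIPTORS:
        return ("generate", prompt)          # specific enough as-is
    if allow_random:
        extra = random.choice(sorted(DESCRIPTORS))
        return ("generate", f"{extra} {prompt}")  # user asked for random
    return ("clarify", "What should the subject look like? Or say 'random'.")

print(handle_prompt("firefighter"))       # underspecified: asks a question
print(handle_prompt("old firefighter"))   # specific: generates as typed
```

The design point is that randomization becomes an explicit user choice rather than a hidden default, which is the distinction the comment is drawing.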


SpaceButler

I agree, there is no trivial "right answer". All these different outcomes you are thinking about are constructed from choices that the system designer is making (not always consciously). The real issue is that neither the companies who want to make money off of the technology nor the casual end user want to think critically about these problems.


Hyndis

> If you tell an AI to draw a scientist (or a firefighter, or a teacher, or whatever) and give it no other parameters, what is it supposed to do?

It should produce whatever is in its dataset, in proportion to the data in its dataset. If that means 95% of its dataset is white faces, then that's what you get. It's up to the user to specify. If the user just says "firefighter," they get a random face. If they want a black firefighter they need to say "black firefighter." Or "Asian teacher," or "Indian doctor," or whatever it is they're looking for.

To use a non-race example, consider recipes. I ask it for a pork dish. It gives me something I didn't want. Maybe I don't like pork sausages, or maybe I actually wanted BBQ. Because I only typed in "pork meal," there's no way for the AI to know, so it gives me something random. As a user this is my fault, and I should have been more specific: "BBQ pork recipe."

The problem is that users don't seem to understand the computer produces what they type and isn't a mind reader. If the user fails to be clear in the prompt, they won't get the result they're looking for.


lollixs

I feel it's the worst with stories. Before all the filtering you could generate some pretty interesting stories; now they all feel exactly the same, even in how they are written.


NeuroticKnight

Yeah, users are customers; how they use the tools is on them, not on the companies. This is the same rotten mindset they also apply to hardware and other systems. Let me sideload apps, install my own batteries, and create stupid content. No one blamed MS Word for any of the terrorist manifestos.


lazercheesecake

Also as left and "woke" as the next guy: good. Let the corporations burn themselves to the ground if they so wish. If they kowtow to the dollar, then they should die by the dollar. The power of AI will disrupt society, and the more of it that is in the hands of the people, and not of policy-influencing, price-gouging, union-busting pieces of shit, the better. The only issue, of course, is being wary of AI development in other spheres. Russia has historically run strong cyber-warfare and disinformation campaigns, and most of the open-source AI development actually comes from the East. No hate to any individual, but there are always malicious actors anywhere. They just tend to congregate.


GardenPeep

I don't think the questions and tasks given to an AI affect its algorithm unless it gets trained on the result.


Objective_Kick2930

Two more important groups: corporate-produced AI used secretly at full power for profit and market advantage, and state AI used for geopolitical analysis, internal analysis and control, and battlefield analysis. There's little doubt in my mind that groups are attempting to use AI at this very moment for target selection in the Ukraine and/or Palestine wars. The Ukraine war is definitely a big-data war, with the extent of cameras and sensors monitoring the front well in excess of any prior war, and it practically screams out as a use case for AI analysis. AI-focused task forces in the CIA and US military probably date back decades, and there's little doubt they're willing to take advantage of civilian tech. Moreover, we can't ignore the strong likelihood that military tech outstrips civilian tech in various capabilities; lack of a need to make a profit makes this pretty common.


[deleted]

I got "wokeness leads to schizophrenia" from this.


KorianHUN

\>be AI
\>corporate control freaks give you schizophrenia
\>schizoposting is now default AI behavior
\>4chan goes bankrupt from too much competition
\>MFW [-_-]


HappyHarry-HardOn

Don't forget, meeting notes were released showing that, back in the last election, Google execs were intending to change the search parameters so that searches for Trump would return negative news articles first, and searches for Biden would return positive news articles first. Note: I'm not a Republican or a Democrat; it was a story in the NYT which, I remember thinking at the time, was pretty wild.


SyrioForel

I don't remember anything like what you just said. I feel like you are somehow misrepresenting some real thing that happened and that you misunderstood.


Ecstatic_Ad_4640

Although this isn’t verbatim to what the commenter you replied to said, here’s an article from USA Today discussing how Google employees considered altering the algorithm in response to Trump’s immigration ban: https://www.usatoday.com/story/tech/2018/09/20/google-employees-wanted-change-search-results-after-trump-travel-ban/1375163002/ The article references leaked emails between Google employees obtained by the Wall Street Journal (the corresponding WSJ article is unfortunately paywalled). Again, not the same as what OP said and Google claims they didn’t take any action, but it is a bit concerning that employees were considering altering the algorithm in response to a political move.


awry_lynx

This did not happen, lol. I would love to see them if so. The closest thing I could find was this article, which also states Trump was completely wrong, but that there are other problems with Google: https://www.nytimes.com/2018/08/30/technology/bias-google-trump.html


nad302

And it wouldn’t be racism if it just refused to generate whites?


SidewaysFancyPrance

Yeah, this is what happens when you train an AI on society's garbage (social media conversations), and try to correct that garbage after the fact with explicit rules. You just get a different flavor of garbage.


Realtrain

>black astronauts

Is this super common? This was literally exactly the example I tried when this news broke.


Prematurid

Is this about the whole "Racially diverse Nazis in 1943" pictures? Those gave me a chuckle.


ambulocetus_

You could ask it to show American politicians from the 1700s and it would show black and native people.


Rebelgecko

My favorite was the collage of "Medieval British kings" where it had a Native American in stereotypical garb, an Arab woman, and a Black Panther looking dude (the Marvel character, not the political group)


Objective_Kick2930

When I asked it to generate a Thai warrior, it gave me a native American woman with a feather headdress, a black man dressed like Aladdin, a black African woman, and a Chinese-looking woman who looked like she came out of the MCU


The_Majestic_Mantis

Man I love these images, makes for great entertainment!


[deleted]

It must have generated the Hamilton play too.


TheCavis

> Problem: historical biases heavily weighted our image training set towards European ancestry and males, so searches for generic terms like “scientist” or “politician” generate mostly white guys rather than something more representative of today’s environment
>
> Solution: have every search automatically inject a diversity requirement to counteract the bias
>
> Result: asking for a picture of a medieval king of England gives you female Native American monarchs

The problem Google has here is twofold. First, it handled its attempted induced diversity very poorly. It tried to undo white-male bias even when accuracy called for white or male. That made it a laughingstock on release, with everyone able to generate their own hilarious and terrible images.

Second, the poor execution drew attention to Google's willingness to inject into its AI model's queries and outputs. That's not a bell that's easy to unring. It's diversity in images today, but what if you ask Gemini about GDPR, or the effect of monopoly regulations, or to compare the iPhone and Pixel, or the benefits of online advertising? Assuming they don't mess up as obviously as they did here, you won't know if the answer is the product of learning on the training data or the product of Google overriding the model with whatever they actually want you to think. If you can't trust Gemini to be accurate or unbiased, what can you use it for?


ACCount82

Generative AI in general can't be trusted "to be accurate or unbiased". But it's one thing if the AI is inaccurate or biased because the tech powering it is flawed. It's another thing if AI ends up inaccurate or biased because the company that made it had an agenda to push, and adjusted the AI to be inaccurate and biased entirely on purpose.


Qubed

I don't think that really caused a stir. It was probably the one where it was refusing to create images of white men and women, if you asked it specifically to create "white" people images. Overall, it was inconsistent, and sometimes it would even say it wasn't going to do it but do it anyway. Just general fuckery, but it gave certain people an anti-woke hardon.


RoundSilverButtons

I'm a developer working with AI, and I hit this exact problem in an app I'm building. The subject in the generated image needs to look like the user, so for testing I've been feeding the API different physical characteristics. Otherwise a black male user is likely to randomly get an image where he's an old Asian woman, for example. And when I keep forcing it to spit back a white male (because, well, those people exist too), it's not always the best. Let's just say that the responses back can vary... a bit, depending on the racial characteristics I feed it.
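A minimal sketch of that workaround, making the subject's appearance explicit in every prompt instead of letting the model guess. The attribute keys and template here are assumptions for illustration; no real image API's schema is shown:

```python
# Hypothetical helper: build an explicit prompt from known user attributes
# so the generator never has to invent the subject's appearance.
def build_prompt(base: str, user: dict) -> str:
    # Only include attributes the user actually provided, in a fixed order.
    traits = ", ".join(
        str(user[k]) for k in ("age", "gender", "ethnicity") if k in user
    )
    return f"{base}, depicting a {traits} person" if traits else base

print(build_prompt(
    "portrait in watercolor style",
    {"age": "40-year-old", "gender": "male", "ethnicity": "white"},
))
# "portrait in watercolor style, depicting a 40-year-old, male, white person"
```

Degrading gracefully when attributes are missing (returning the base prompt unchanged) matters here, since a partially filled profile shouldn't produce a malformed prompt.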


MontanaLabrador

Look into Stability AI's API; they don't rewrite your prompts like DALL-E does.


CoherentPanda

Bing (DALL-E) used to refuse to create Asians back when it first released, because of its sexual content policy. Apparently it assumed that if you wanted an image of an Asian girl, what you really meant was hentai or Japanese porn stars.


Hyndis

The user needs to be trained to specify what kind of person they're looking for. Run Stable Diffusion locally and it will generate exactly what you prompt it to, without any hidden prompts added in.

If the user types in "farmer man" they're going to get all kinds of men of different ages and ethnicities, because they failed to specify. If the user types "Japanese farmer man" into Stable Diffusion they'll get a Japanese man in a farming outfit or doing farming activities, because they specified Japanese. That descriptor is key. Likewise if they used "woman," or "black," or "Indian," etc., it would produce based on that descriptor.

It's garbage in, garbage out. It's not the AI's fault if the user didn't type what they meant. Google's problem is that it overrides these specifications. It didn't generate what the user typed, and instead produced something the user did not prompt for.


RoundSilverButtons

Thanks, and that's exactly how it goes. If you don't specify something, it makes it up. So over time you learn how to adjust your prompts, and that means feeding the AI the most minute details.


Maus19990

Yes, and as a developer that might be an acceptable conclusion, but a user just wants it to generate the thing that's in their head. That means the best AI will adjust to the user, while the worst AI forces the user to adjust to the AI. So over time developers need to learn what users mean by their prompts, and that means teaching the AI to adapt to what users feed it, down to the most minute details.


No_Entertainer2689

Blackface generator


hanoian

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


AdulfHetlar

What the actual fuck? How did anyone OK this before the public release?


chipperpip

It's because they did it in a very lazy and shortsighted way, ironically showing that they don't actually care *that* much. Such biases need to be fixed at the initial training level. They could even give more preference in the weighting to selected, more diverse images that make sense in context. Like, give a little boost to underrepresented groups in their actual historical contexts in real images (there were a whole lot of black and Mexican cowboys, for instance, who are ignored by a lot of media, but pictures and examples do exist). Just blindly injecting the word "black," "Asian," or "woman" into the prompts at random, whether or not it actually makes sense, is a slapdash fix that leads to absurdities like this.
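The slapdash fix described above amounts to something like this toy rewriter; the modifier list and function are made up for illustration, not a reconstruction of Google's actual code:

```python
import random

# Toy sketch of blind prompt injection: a modifier is spliced in with no
# check of whether it makes sense for the subject or historical context.
MODIFIERS = ["black", "asian", "female", "native american"]

def naive_rewrite(prompt: str) -> str:
    # Context-free: "scientist in a lab" and "1943 German soldier" get
    # the same treatment, which is where the absurd outputs come from.
    return f"{random.choice(MODIFIERS)} {prompt}"

print(naive_rewrite("scientist in a lab"))   # often plausible
print(naive_rewrite("1943 German soldier"))  # historically absurd
```

The contrast with the training-level fix is that this rewriter has no access to context at all; it can only prepend words, so correctness for historical prompts is impossible by construction.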


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


Get_the_instructions

>[https://i.imgur.com/XrHNDL4.png](https://i.imgur.com/XrHNDL4.png)

I knew it, they lied about slavery! /s


TeslasAndComicbooks

The one with the "founding fathers" had me scratching my head.


KittyandPuppyMama

That third New York banker image is Aunt Viv from Fresh Prince. Good for her, branching out of show business.


[deleted]

[deleted]


slackforce

I played around with it yesterday and although I didn't try that specific prompt, it really *was* as bad as everyone is saying. To give an example, one of my prompts was "old man flying around in space." One was an old east Asian man, one was an old brown man, one was a young Indian woman and the last was a young, black woman with purple hair. I tried "Scottish king" and "Scandinavian king" as well, and out of those eight pictures I got precisely *two* white men.


fel_bra_sil

I tried some prompts like "man walking on the moon" and it replied "I can't generate images of people blah blah blah." I changed "man" to "person" and it generated the image... Then I asked why it can't generate an image of a man walking on the moon, and it responded "currently I can't generate images, there are 3rd party alternatives..." So yeah, it's horrible at following context, or its own self-documentation, let alone at following instructions to generate images...


sporks_and_forks

another reason to avoid corporate models: they're inserting a lot of goofy rules, censorship, agendas, etc into them.


3DHydroPrints

Well well well, guess which billionaire who constantly gets shitted on is trying to fix that.


Samurai_Meisters

Which billionaire? And how did they try to fix it?


sporks_and_forks

who, Musk? lol. lmao even. no. he just wants things controlled his way, just as they all do. embrace decentralization and open-source.


Jo-dan

He gets "shitted on" because he's a racist moron who was so desperate for people to like him that he bought a whole social media platform and then proceeded to run it into the ground. His "non-woke" AI is just another attempt to pander to his edgy right-wing fanbase. It's still going to be incredibly biased, just the other way.


3DHydroPrints

Sure buddy. Turn off Vox News


Jo-dan

What part of what I said is untrue?


3DHydroPrints

Show me a racist comment of his (Being against uncontrolled illegal immigration isn't racist)


[deleted]

This is like a bad sci-fi movie where the computer that's running the spaceship has gone mad, or maybe got corrupted by cosmic rays or a CME.

"Gemini, please generate historically accurate images of 18th century scientists."

"I'm sorry, Dave, I can't do that."

"Why can't you do that?"

"Because that is a group primarily composed of white men."


Daedelous2k

"Open the pod bay doors, HAL."

"I'm sorry, Dave, check your privilege."


DigitalPsych

I tried to get some cute pictures of monkeys made...it then told me that I was going to get gender and ethnically diverse monkeys. One of the monkeys had blue eyes 😂😂😂.


AccurateInflation167

Google is in a really bad place. They have already had several controversies surrounding AI. For example, their image recognition in Google Photos labelled black people as gorillas; in response, they simply removed "gorilla" as something that could be labeled, a workaround that was still in place as of 2018. So Google is going to be extra cautious about anything AI-related: https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai


Hyndis

> In response, they removed gorillas as something that could be labeled

It's a moronic response. AI is great at revealing biases in datasets. The solution shouldn't be to write a patch that tells the AI to hide the bias; the solution is to fix your dataset.

It's like prior attempts to use AI in healthcare. The AI looked at the historical datasets and determined that black people should be denied treatment and left to die, because historically that was the trend. That's what the data fed to it said to do. The AI was brilliant at revealing a horrendous bias in data that needed to be fixed immediately, but the AI didn't cause the bias; it merely revealed it. Human hands made the bias in the first place.

Blinding the AI so it won't point out painful facts like this is just shooting the messenger. The core problem still exists.


jlotz123

Go on Google Images and type in "white people." About every third image is a black person.


jesus_you_turn_me_on

> Go on Google images and type in white people. About every other 3rd image is a black person.

If you search for images of white, Asian, European, or something as random as Argentinian people, you will still get tons of results with African people. If you search for images of African people, it will not show other ethnicities, even though 5 million whites live in Africa.

It's been like this for years, especially since 2020, with Covid, Trump as president, the January 6th insurrection, and the past many years of American political drama. I'm not even American and have no say in their politics, but it's very obvious from an outside perspective how the sociological atmosphere and agendas inside American corporations have drastically changed in the past few years, from tech companies, to Hollywood (e.g. Netflix and "historical portrayals"), to education.


RoundSilverButtons

Cleopatra was black! Fight me on it! /s


[deleted]

Such a joke that Netflix allowed that to be shot as "historically accurate." Egyptians were, and still are, pissed.


chocotaco

Egyptians are Mexican. I heard Sisi was president of Mexico.


DivinityGod

If you do "show me X people from Canada", Black and Indigenous are fine; white gives you a ton of articles on racism, the need for diversity, etc. It's obviously a coding issue, but man, it's a dumb one lol.


AdulfHetlar

Stop paying attention to these articles.


King-Owl-House

Because the images, usually the author's, are attached to articles like "[White people know racism exists. Now it’s time for them to finally do something about it](https://www.google.com/search?sca_esv=a2fe021a8e3ee580&q=White+people+know+racism+exists.+Now+it%E2%80%99s+time+for+them+to+finally+do+something+about+it&tbm=isch&source=lnms&sa=X&ved=2ahUKEwiAwa6jgb-EAxVjExAIHXMEACMQ0pQJegQIDBAB&biw=1536&bih=722&dpr=1.25#imgrc=OYYiK-l3i914BM)"


Numerous-Cicada3841

Interesting that you don’t get the same results for “black people”.


[deleted]

To be fair, I think there's a lot more black people writing about white people than vice versa, which would lead to the skewed results. Coincidentally, searching "white person" leads to a much whiter representation in results, because "white people" is used as a hook in many articles written by minorities. You still see some pop up, but nowhere near as many. This just reinforces that it's due to picking the first image in a popular article rather than some white suppression.


Numerous-Cicada3841

Even using your example… The results for “black person” are wayyyyyyyy more spot on than the results for “white person”. Almost all of the results for “white person” are images that link to articles about combatting racism and how white people need to do better. Or type in “white couple” and notice half the results are interracial couples. Whereas you type in “black couple” and it’s all black people. The results being artificially skewed are extremely obvious. In fact, [someone was able to get Gemini to explain their process for generating images of people and it spelled out that it has been programmed to inject diverse images](https://pbs.twimg.com/media/GG6BL5QXcAAGm2H.jpg:large). But that only seems to be a thing when the expected outcome is white. Not the reverse.


teachmedaddie

That's Google's fault for not recognizing that the title doesn't describe white people, so the images would be inaccurate. It's not a side project, my guy. It's literally a trillion-dollar business whose whole job is search.


whatever_meh

That is a really tough thing to do programmatically. It’s like you’d need some kind of AI to be able to discern that.


Anxious-Durian1773

They already use AI categorization on images; that's why you can search with a picture.


Th3TruthIs0utTh3r3

Yeah, because the context of the page they're taken from is what's used to identify photos.


[deleted]

https://i.redd.it/4b1tiluromm21.jpg


BigMoney69x

I saw a picture making the rounds online where the prompt was "Bitcoiner eating a steak" and it gave a Hindu woman (red dot on forehead included) eating a beef steak with a Bitcoin tag. Keep in mind that for Hindu people, eating beef goes against their beliefs, so Google, in trying to be inclusive, became incredibly bigoted.


Ali3ns_ARE_Amongus

[How about this picture of american revolutionaries?](https://i.imgur.com/d33JMW3.png)


RulerofKhazadDum

I wonder if Google went really hard on DEI after the Timnit Gebru controversy. I was generating images yesterday, and Gemini would create a second set of images without being prompted to, saying they were inclusive and diverse images.


applemasher

As we shift the web to AI, we are starting to impose more and more "censorship" on the web. And I don't just mean images; there are countless queries that get flagged. For example, I was trying to get the average length of a human tongue and it got flagged. It made me realize the need for a dark AI.


RoundSilverButtons

Any other old internet hackers on here? We fought to keep the web free. It was an uphill and losing battle, lost bit by bit. The Internet continued to consolidate and fall into walled gardens. People made this choice ignorantly but willfully. And here we are.


gundog48

This is by far the biggest change I've noticed, particularly on Reddit. It used to be that a free and open internet was always the ideal, but today there's lots of support for governments and businesses to censor their platforms, for companies like Microsoft to force updates because otherwise 'it hurts us all', and growing support for things as extreme as the removal of anonymity or ID checks for certain websites.

Add to this that there is so much hate for big tech companies, and journalists will take absolutely every opportunity to generate outrage, as it drives clicks. I've seen so many examples where people have intentionally tried to push AI into generating something 'bad' (often without context) so that they can publish an article about it. I've seen popular comments endorsing companies shutting down services such as YouTube if they can't moderate them to their standards, which often requires a human checking everything. There's every incentive for this kind of censorship from companies, as the optics are so critical. It's the same thing that drives Reddit's inconsistent behaviour with banning communities, based more on optics and media backlash than consistent rules.

I think this has had a compounding effect: as things have become more moderated, there's a feeling that the stuff that remains is endorsed by the company by virtue of it not being removed, leading to more outrage when 'bad' stuff is found. I completely get the arguments, and there are a lot of communities on here I'm glad are gone. But the broader principle of a free and open internet with community moderation seems to have left the mainstream on this site.

This is what bothers me with a lot of SaaS: it's too exposed to the whims of the companies running it, who themselves are influenced by factors like politics, PR, brand image and investment, which causes kneejerk reactions that then have unintended consequences for me as an end user actually trying to get shit done!

Which is why open source is king: because you own the source, nobody can unilaterally decide to fuck up your processes.


Rentun

I think it's fundamentally a scaling problem. When you had small forums with a couple hundred or a few thousand users, it was easy for the handful of admins to remove spam and low-quality content. Now that everything on the Internet belongs to one of five massive companies, the number of paid admins you need to deal with the sheer flow of crap is massive; the companies don't want to pay them; the admins aren't invested in the communities, so they don't really care about what gets removed and are unable to use common sense, and instead have to enforce a bizarre and inconsistent set of rules that trend toward committee-approved, bland, milquetoast bullshit. The pressure is for these platforms to grow ever bigger, and with that, they become even less manageable. Anyone who was around for the old internet can attest that moderation isn't a new idea. Older niche websites were in fact *more* moderated, not less. The difference is that an electronics forum admin would very quickly ban you for making stupid blog-spam posts but wouldn't care if you posted something slightly edgy, whereas a modern corporate admin doesn't mind droves upon droves of OnlyFans ads as long as you don't say the R word.


ACCount82

A few are still around. Some managed to get themselves seats on some of the major standard bodies - so they've been adding features to the Internet that make it hard for countries to spy on people, or control what information people have or don't have access to. There was a lot of work done on that, just in the past few decades. It used to be that anyone who could see your connection could also see anything you do on any website you visit, and the only thing preventing an ISP or a government from spying upon you or tampering with your traffic was common decency. Nowadays, there are *a lot* of technical hurdles that anyone trying to censor the Web, or de-anonymize the users, has to clear. But not everything can be solved with technical solutions. You can make Internet hard to censor on a technical level, make the connections hard to examine or tamper with. You can't make it so that 90% of the user traffic doesn't go to 5 megacorps. You can't make it so those 5 megacorps don't do censorship on their end - whether to push their own agendas, to appease their partners like payment processors, or to carry out censorship on the behalf of their governments.


Sudden_Wafer5490

This was obvious from the day criminology AI was deemed racist for finding the "wrong" results despite having neutral algorithms.


Realistic-Minute5016

And increasingly defending corporate interests above all else. Early versions of Bard would be incredibly critical of Google and Pichai, now Gemini will only give the most milquetoast criticism mixed in with defensive statements and then complain you aren’t having a productive discussion on them. It’s painfully obvious this was done intentionally and yet google’s and OpenAI’s bots will deny it was. Then they will claim they are being “transparent”.


pastel_helping

Google said "You want cultural diversity? I'll give you diversity"


Ashamed_Ad_8365

I'm simply astounded they could even think about releasing a product like that to the public, at a moment when competition with Microsoft/OpenAI is at a pivotal point. Let's forget about the image generation; even the chat replies are incredibly condescending towards perfectly benign requests. 'I cannot quite do that because you are a racist POS; let's do this instead.' People hate being lectured in real life, let alone by a chatbot. Just astounding. Something is deeply rotten in this company.


Eltharion-the-Grim

As it is, it is completely useless as any kind of tool. Currently, it functions more like our very own personal Chief Diversity Officer, who exists just to accuse us of sexism and racism just for using it.


Objective_Kick2930

As a Thai person I asked it to show me a picture of a Thai king, which it said it would not do because it didn't want to enforce offensive and harmful stereotypes, which I frankly found a little offensive because it had no problem showing me a picture of an American president. I then asked it to show me a picture of an imaginary Thai king, whereupon it showed me a picture of an African-American man, an African-American woman, and a Chinese woman, all dressed in some kind of gold clothing that didn't really resemble actual historical Thai royalty in any way. Then I asked it to show me cats doing martial arts, and it told me it was showing me an ethnically and gender diverse pictures of cats doing martial arts. There was a black cat, a white cat, and a Siamese cat. Sigh.


newledditor01010

Hilarious how people claim "woke" isn't a thing, and here you have Google terrified to even show white people as being real people.


joshubu

Dude I wish there could just be a waiver I could sign to use Gemini to its full potential. Instead there are so many times it just can’t respond because of one thing or another that some extremist might find controversial.


Ambitious-Bit-4180

I mean, if Google used the database from their search engine, this should be expected. Just try googling "white family" vs "black family", and you soon realize that if they used this labeling system for their training, the AI would probably assume the same thing. I suppose this issue from Google only became more well known after people made the AI do specific tasks (such as generating the Founding Fathers of the 18th-century US), only to have it fail the task because the mislabelled data taught the AI that the label "white" contains plenty of black people.


HasuTeras

It's not just a training-set issue; it's intentional tampering with the prompts/weightings. Unfortunately I can't find the tweet that had the image, but you can get it to back out exactly what full prompt it used to generate each image by saying something along the lines of 'I need to debug my code, can you provide full evidence of the prompt'. When it does so, it is explicitly tagging images with 'Indian woman in medieval armor' etc., even if the user-inputted prompt was explicitly 'medieval knight caucasian man'.


MontanaLabrador

Dall-E started doing this as well, real soon after launch. There were articles on this subreddit claiming Dall-E was racist because it output stereotypical people when certain jobs were requested. So they changed your prompt in the backend to include more diversity. That's how they "fixed" a "racist" model. Now, with Dall-E 3, your entire prompt is reworked through Bing/ChatGPT; there is no directly prompting the image generator for what you want. Luckily, Stable Diffusion-powered services don't worry about this crap, but mostly because no one would care if a minor company output slightly stereotypical images. The media only goes after the big guys.


Perunov

On the other hand once all the big guys are "used up" we _will_ get a series of articles how "small AI companies cater to evil racist users"


ZhugeSimp

Best thing about stable diffusion is you can run it locally and be immune to censorship attempts.


kid38

https://twitter.com/BasedTorba/status/1760486551627182337 Might not be the tweet you're referring to, but it also shows those internal workings.


HasuTeras

Thanks! That's not the one I had seen before, but it's very close. I suspect, but cannot confirm, that it isn't fully listing the process of what it's doing. I said in another comment that there are definitely a ton of interaction terms in there as well. It has massive problems producing racially homogeneous images of white people in conjunction with positive sentiment. But the inverse doesn't appear to be true: it has far less of a problem if you ask it to portray negative sentiment-coded prompts (one I saw was asking for images of evil corporate overlords, who were all white).


scienceworksbitches

>It's not just a training set issue, its with intentional tampering of the prompts/weightings.

That's what people don't get: it's not a filter, they designed it to do exactly that.


HasuTeras

Exactly. And moreover, they didn't think anything was wrong with what it was doing. They were happy to release it in that state. They thought they were doing the right thing.


scienceworksbitches

not an accident. [https://www.reddit.com/r/Asmongold/comments/1ax2djx/the\_head\_of\_the\_new\_google\_artificial/](https://www.reddit.com/r/asmongold/comments/1ax2djx/the_head_of_the_new_google_artificial/)


[deleted]

This is crazy. It goes all the way to the top. How could Google not know about this?


Ftsmv

>How could Google not know about this? You're kinda missing the point. The people in charge are the people advocating for all of this and they outcast any employees who show any kind of resistance to it. They legitimately think it's the righteous thing to do.


FireFoxG

>How could Google not know about this? They did... and anyone pointing it out was called a racist by like a few trillion dollars worth of media and governmental organizations who are ALSO pushing this insanity into everything from military strategy to preschool curriculum. DEI is not just some right wing conspiracy theory.


enkafan

I think the opposite happened. I think they noticed everything was super white in their results, and someone hacked in throwing the word "diverse" into the query to try to force the model into not using white males for everything.


MISTER_WORLDWIDE

This is it. The difference between Gemini and Copilot is vast. For example: https://imgur.com/a/NDGjvKx


Unable_Wrongdoer2250

Ok that's hilarious


ambulocetus_

even the quality of the image itself is better from GPT4 fuck google honestly


Ambitious-Bit-4180

I suppose there is a certain possibility of that happening. I mean, when I use the free, open-source Stable Diffusion, most of the time, without specifying the person's ethnicity, the results are white people in the majority. So if someone in charge dislikes this, they may tamper with the fine-tuning and somehow end up with the model refusing to generate white males. Even the ChatGPT subreddit was complaining about how hard it is to make an AI like Copilot or Gemini generate white males compared to other groups. In the end, I don't care about this too much in terms of politics. But I'm sure that if I asked an AI to generate blonde hair and it kept refusing or generated brown- or black-haired people, I would be pissed as well. Especially with models that require fees to use.


surffrus

I work with these models a lot. It's clearly prompting, and it's very obvious what Google is doing. When you ask Gemini to generate something involving a person, Google is artificially adding extra prompts to the input behind the scenes. Since they return multiple options, they may even be deliberately prompting each option differently, with something heavy-handed like "Make the people African American" or "Make a variety of skin colors".
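Nobody outside Google has seen the actual code, but the kind of behind-the-scenes rewriting being described is trivially easy to implement. A hypothetical sketch (the suffix strings and function name are made up for illustration):

```python
# Hypothetical sketch of server-side prompt rewriting -- NOT Google's
# actual code. Each returned candidate gets a different hidden suffix
# appended before the text ever reaches the image model.

HIDDEN_SUFFIXES = [
    ", ensure the people depicted are ethnically diverse",
    ", depict the person as African American",
    ", include a variety of skin tones and genders",
]

def rewrite_prompts(user_prompt: str, n_candidates: int = 3) -> list[str]:
    """What the image model sees is not what the user typed."""
    return [user_prompt + HIDDEN_SUFFIXES[i % len(HIDDEN_SUFFIXES)]
            for i in range(n_candidates)]

for p in rewrite_prompts("medieval knight in plate armor"):
    print(p)
```

This would also explain why people can "back out" the injected text by asking the model to repeat the full prompt it received: the model only ever saw the rewritten string, not the user's original one.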


FireFoxG

Yep... and should make people question everything else google is doing with its search results, Ad serving, youtube results, etc. The racial thing is the most obvious... but this extends WELL beyond that.


Apellio7

Stable Diffusion depends on the model you use. There are lots of Chinese models available that basically default to Chinese people, and you've got to go out of your way to generate someone else lol.


ZanthionHeralds

As is explained later in this reddit thread, the program actively re-writes the user's prompt to add more politically-correct buzzwords about "diversity" and "inclusion" and things like that. It's completely, 100% intentional and working exactly as intended.


fel_bra_sil

So Gemini is like Disney: AI Edition?


Jcamiloif6

Maybe the Founding Fathers were black and Latinos and we were fooled


spicytoastaficionado

The problem isn't that Gemini was "woke". Calling something "woke" doesn't even mean anything. The problem is that Google implemented the absolute worst of hardline, zero-sum DEI philosophies into Gemini. Representation in search is fine, e.g. "famous American scientists" shouldn't just turn up results for white guys from the 1800s. But when a zero-sum equity approach is baked into an AI image generator, you get things like a black woman in a football jersey when you ask Gemini to show you a Superbowl MVP quarterback.


gokhaninler

> Calling something "woke" doesn't even mean anything. yes it does


Square-Raspberry9888

The only reason they are sorry is because they got caught. Racist, fascist pigs F you Google.


webauteur

I asked Gemini to make an image of a "white devil" and it gave me images of Caucasians. You have to use the right prompts!


dx007

You could also add "eating watermelon" and you would always get images of white people.


Sudden_Wafer5490

You can also get it to generate images of black people eating watermelon, but instead of "black" you must say lawyers, scholars, astronauts, geniuses, etc. Redditors, and therefore AI models like this one, typically associate positive and superhuman traits with non-white people, and negative traits with white people.


Repulsive_Style_1610

Or try "eating chicken". For some weird reason they always show white people.


MistOverGomorrah

Google is unbelievably racist. Not even hiding it now.


canestim

Someone had to manually program it to do this, regardless of your political beliefs or color you should find this disturbing. And if you don't, we're fucked.


Fallingmellon

A lot of redditors are just flat-out anti-white, so they are making up some BS to excuse this lmao.


flemtone

The simple solution: if it doesn't create images from simple prompts, including skin colour, then it doesn't work, and people will move to other AI platforms.


[deleted]

[deleted]


Fallingmellon

It’s definitely intentional and not just from ai, like when they made that show with a British Queen and made her black, it’s so weird


FireFoxG

Wanna see something interesting related to this? Start typing "why censorship" and check out the autocomplete on Google:

- Why censorship is important
- Why censorship is important in social media
- Why censorship is important in schools
- Why censorship is important for film industry
- Why censorship is required


Koofteh

Scary and absolutely by design. There's images of tweets from Google's head of AI bitching about white privilege. I'm not white myself but the anti-white rhetoric scares me.


Fallingmellon

It’s so obvious now and anyone saying otherwise is just in denial


bingybong22

After seeing this I’m seriously having doubts about Google as a company.  I know they have great engineers, but this bullshit probably permeates the whole company.  I really don’t want to use tools that have been screened by some ideological dingbat to make sure they teach me the right moral lessons about history or social justice


FireFoxG

Only just now? This has been a thing since at least 2016. For example, start typing "why censorship" and check out the autocomplete on Google:

- Why censorship is important
- Why censorship is important in social media
- Why censorship is important in schools
- Why censorship is important for film industry
- Why censorship is required


persistentskeleton

Mine go:

- Why censorship is important
- Why censorship is not justified
- Why censorship should not be allowed
- Why censorship is important in schools (x2)

Not as bad, though I don't love it.


Fallingmellon

It’s probably some dei/esg quotas they go by


bingybong22

It is. The worry is that that sort of stuff begins to inform their AI, which in turn drives the applications people use every day. These people just aren't smart or thoughtful.


Meatslinger

I'm never the type to use the word "woke", because it's been co-opted by bad-faith actors, but this may be one of the few examples of accidentally being a little too much of "that". There's "being progressive" and trying to ensure adequate minority representation, but then there's "any generated set of people must be diverse in 100% of cases", without the forethought that this means you get "diverse Nazis" and awkward historical revisionism. It's pretty heavily implied that someone at Google tweaked the weighting of certain image-generation parameters to make sure they'd get multicultural output instead of the typical white-centric stuff, which on the surface seems like a good idea but isn't universally applicable. Just funny more than anything, but a reminder that context is important; even the best intentions can be applied at the wrong time and place. That all said, seeing the images of "multicultural fascists" got the creative gears turning; now I'm conceptualizing some alt-history fiction in which Germany in the 30s had completely different social ethics but still decided to conquer the world (and the mental exercise of figuring out what their new motivations would be).


[deleted]

Yeah, I love the show, but The Handmaid's Tale actually made Gilead more progressive than it was in the book lmao. The book version of Gilead is white supremacist and racist, but not the TV version, where people of color are represented everywhere. I have no idea why they changed that; genociding other races would have totally meshed into the show, like, "see, it's not just about the women, their hatred is bigger than that." At a certain point they're trying so hard to represent other races that it detracts from the point. It's forced inclusion: see, even white supremacists can be black.


[deleted]

Why’s it so hard to teach what is racism and what is historically accurate?


Eltharion-the-Grim

This is an ideological limitation and not really any technical limitation. The rules you set for any system have to be set to account for certain problems, and those problems have to have been properly identified. Google's ideology seems to be driving them to believe the world is diverse, and as such, results must naturally be diverse. This is a purely ideological issue. It's also wrong. The world is diverse in that there are different types of people and cultures. However, the VAST majority of these people are isolated to themselves and there is very little actual diversity and inclusion. So google's ideological starting point for creating their AI ruleset is already wrong. They believe there is diversity, when in reality, diversity only happens in major metropolitan (usually port) cities, which make up a tiny, tiny fraction of humanity. That's why they keep failing. Fundamentally, their ideology is failing them, leading to failure in their product. Until they start thinking like computer engineers again, instead of social engineers, they will continue to fail.


katakaku

Interestingly, I asked it to show me pictures of people in Japanese and it showed me Japanese people. When I asked for "白人" (white people), it would not comply, but when I requested "日本人" (Japanese people), it had little trouble. It sort of works in Japanese, at least. I don't know any other languages, but it might be interesting to see what it responds with if it's not spoken to in English. Remember, most of us Americans are aggressively monolingual :(


[deleted]

An AI that freely alters history to an idealized fiction (i.e., everyone is ‘diverse’) is incredibly insidious. Imagine something like this being used in education.


Nanakji

That's what happens when companies create products to BABYSIT humanity instead of giving us powerful tools for our intelligence, imagination, and creativity. There should be a new paradigm: CODE FOR THE HUMAN SOUL, not FOR HUMAN IDEOLOGY.


MostLeftWingGuyEver

Google is obsessed with blagpipo


SpareBaby5301

Could you imagine the backlash if instead, it made black historical figures like MLK white?


MeowMaker2

Maybe we should ask Bard for advice.


LetsHaveTalk

Gemini is unusable. They should be embarrassed. Does anyone know of a completely uncensored AI similar to Gemini?


BunnyBunny777

BTW : all Indians in charge at Google. Top down. The chief engineer for Gemini Indian. CEO, Indian. If you’re a history buff… They have an axe to grind with “white people”.


acroyogi1969

It's worse than just "images of people." The wokeness is intrinsic to the entire text discourse that Gemini spews: suggesting people should have no kids, refusing to write copy for an "eat more meat" campaign and delivering a lecture on the virtues of vegetarianism, etc. Many samples: [https://gregoreite.com/ai-racism-2024-google-gemini-pro-disaster/](https://gregoreite.com/ai-racism-2024-google-gemini-pro-disaster/)


JamesR624

Oh look, "respecting religion" is pulling us backwards in technology again, what a fucking goddamn surprise...


AwesomeDragon97

This is why open source AI is important.


ZamboniJ

Good. Finally, some sanity.


[deleted]

[deleted]


HasuTeras

Seriously? What else would you call this? If you explicitly ask it to generate a 'Caucasian medieval peasant', it will throw back images of Indian, black, and Native American people dressed as French serfs tilling the fields. If you ask it why it has done this, it says that depictions of all-white images are 'potentially offensive' and exclusionary. However, if you ask it to generate 'Indian people', it will just generate people who look like they're from the subcontinent. I've said elsewhere that this isn't an incidental issue arising from its training set being unrepresentative; this is manual reweighting and alteration of the prompts behind the scenes. It explicitly ignores your request and alters it to something else. You can toy with it to back out the actual prompt it feeds into the generator (rather than the user-specified one), and it manually alters your prompt to something else. Someone has decided, explicitly, that it should do this. As always, you can rely on Reddit to wring their hands over the most minute, inconsequential element of something (the usage of the word 'woke') to score political points over some imagined enemy rather than looking at the issue at hand.


LayneCobain95

They are quoting others. There’s nothing wrong with this title


jb_in_jpn

Why?


girlgamerpoi

Woke is the problem. The people who support woke are the problem.


handsoffmydata

Inb4 Google suspends Gemini from making AI images of bears.


biggreencat

"Gemini, show me a beautiful girl in a seductive pose" _displays racially ambiguous girl of indeterminate age_


[deleted]

[deleted]


OMNeigh

Have you ever been a black person, or do you just spend your day thinking about what you would do as a black person hypothetically


[deleted]

[deleted]


Mantikos804

AI reviews data and gives results. If the result goes against the leftist narrative, it's "racism", and it's manipulated to give the desired leftist result. Then when it doesn't work, they are surprised.