Google Gemini refuses to show pictures of white people.

I'm not sure what the point of this cut and paste is.
Transformers are a software architecture, employing an encoder-decoder design pattern; they're not a language like Python or a framework like PyTorch.
The point of describing how transformers work is that it's not just a bunch of preprogrammed responses matched to conditionals (if/then statements).
Duh, do you think the picture program prerecorded black people in chains eating watermelon?
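To spell out the "not just conditionals" point, here's a hedged toy sketch (purely illustrative, nobody's actual code) of what "preprogrammed responses matched to conditionals" would look like - and why that's obviously not what a generative model is doing, since it can never produce a novel sentence:

```python
# Toy "if/then" chatbot: every reply is hand-written in advance.
# This is the thing a transformer is NOT.
def canned_chatbot(prompt: str) -> str:
    if "weather" in prompt.lower():
        return "It is sunny today."
    elif "name" in prompt.lower():
        return "I am a chatbot."
    else:
        return "I don't understand."

print(canned_chatbot("What's your name?"))   # always the same canned string
print(canned_chatbot("Paint me a picture"))  # falls straight through
```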

That’s why they shut down the artistic side. It’s the chatbot that is woke.
 
Can you explain what you think Gemini does to respond to a question?

You're wasting your time, he's Joaquin Phoenix's Joker.

And yeah, his constant laughter is a nervous tic, his insanity isn't a superpower.
 
Can you explain why it generates pro-pedo answers?

A generative model like Gemini is trained on hundreds of gigabytes of text (usually scraped from the internet). It takes a question (a "prompt") and encodes it into a format the model can understand, then the model works through the encoded prompt one token (roughly a word) at a time and predicts what the next token should be, which eventually results in full sentences and paragraphs. (Backpropagation is what's used during training to learn those predictions, not when it's answering you.) That's why it's called generative: it generates data.

No person at Google handcrafts any response unless it's a guardrail. Otherwise the response is influenced by the model's training (its data) and the prompt itself. For instance, you can coerce a model into giving a pro-pedophilia response based on your prompts unless there are strong guardrails that detect what you're doing and actively try to prevent the model from responding that way.
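If it helps, here's a deliberately tiny Python sketch of that generation loop. The probability table is a made-up stand-in for what the trained neural network computes with a forward pass; the loop itself (predict the next token, append it, repeat) is the actual idea. Note there's no backpropagation anywhere in here - that only happens during training.

```python
# Toy sketch of autoregressive generation. NEXT_TOKEN_PROBS is an invented
# stand-in for the neural network's predicted distribution over next tokens.
import random

NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.4, "<eos>": 0.1},
    "cat": {"sat": 0.7, "ran": 0.2, "<eos>": 0.1},
    "dog": {"ran": 0.6, "sat": 0.3, "<eos>": 0.1},
    "sat": {"<eos>": 1.0},
    "ran": {"<eos>": 1.0},
}

def generate(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = NEXT_TOKEN_PROBS.get(tokens[-1], {"<eos>": 1.0})
        # Sample the next token from the predicted distribution.
        next_token = random.choices(list(probs), weights=list(probs.values()))[0]
        if next_token == "<eos>":
            break                      # the model decided the answer is finished
        tokens.append(next_token)
    return " ".join(tokens)

print(generate(["the"]))               # e.g. "the cat sat"
```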
 
right.
that's why it sounds exactly like every blue-haired baboon screeching about systemic oppression. everything i've seen these days indicates that whatever it's learning from, these "guardrails" are less "don't show people how to build a bomb" and more ideological precepts - force diversity in everything, etc.

it's an epic fail from the google ideologues. may they get what they deserve.
 
Have you considered the possibility that you're a right wing nutjob and that anything that isn't perfectly aligned with right wing nuttery sounds like "blue baboon screeching" to you?
 
A generative model like Gemini is trained on hundreds of gigabytes of text (usually scraped from the internet). It takes a question (a "prompt") and encodes it into a format the model can understand, then the model works through the encoded prompt one token (roughly a word) at a time and predicts what the next token should be, which eventually results in full sentences and paragraphs. (Backpropagation is what's used during training to learn those predictions, not when it's answering you.) That's why it's called generative: it generates data.

No person at Google handcrafts any response unless it's a guardrail. Otherwise the response is influenced by the model's training (its data) and the prompt itself. For instance, you can coerce a model into giving a pro-pedophilia response based on your prompts unless there are strong guardrails that detect what you're doing and actively try to prevent the model from responding that way.
Google handcrafts the responses by finetuning the raw model. For starters, this is how you get the AI not to tell you to kill yourself. But it's also how you steer the style of the responses. Google's finetuning is the reason Gemini is so obsessed with "nuance" on every topic, which leads it into terrible answers on controversial topics...
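For the curious, a minimal sketch of what "finetuning" means mechanically (toy model and data invented for illustration, obviously nothing like Google's actual pipeline): you take an already-trained next-token model and keep training it on curated prompt/response pairs, which shifts its style without anyone hand-writing individual replies.

```python
# Minimal supervised-finetuning sketch with a toy stand-in for a pretrained model.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = ["<pad>", "how", "are", "you", "i", "am", "fine", "thanks"]
STOI = {t: i for i, t in enumerate(VOCAB)}

class TinyLM(nn.Module):
    """Stand-in for a pretrained next-token language model."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):          # (batch, seq) -> (batch, seq, vocab)
        return self.head(self.emb(tokens))

model = TinyLM(len(VOCAB))              # imagine this was already pretrained on the web

# Curated "style" data: each sequence is a prompt followed by the preferred reply.
finetune_data = [["how", "are", "you", "i", "am", "fine", "thanks"]]
batch = torch.tensor([[STOI[t] for t in finetune_data[0]]])

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    logits = model(batch[:, :-1])        # predict each next token from the prefix
    loss = F.cross_entropy(logits.reshape(-1, len(VOCAB)), batch[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()                      # backprop happens here, during training
    opt.step()
```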
 
Have you considered the possibility that you're a right wing nutjob and that anything that isn't perfectly aligned with right wing nuttery sounds like "blue baboon screeching" to you?
No, because i'd be wrong :).
 
Google handcrafts the responses by finetuning the raw model. For starters, this is how you get the AI not to tell you to kill yourself. But it's also how you steer the style of the responses. Google's finetuning is the reason Gemini is so obsessed with "nuance" on every topic, which leads it into terrible answers on controversial topics...

That's a guardrail which I covered.

The absence of a guardrail does not mean that Google endorses what the model spits out and it doesn't mean that they control exactly what it generates. Every model has a disclaimer stating as much.

Also, guardrails can be circumvented.
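To be concrete about what I mean by a guardrail, here's a toy sketch (invented strings and function names; real systems use trained safety classifiers on both the prompt and the draft answer, not a keyword list, but the shape is the same):

```python
# Toy guardrail: block prompts that trip a crude filter, otherwise let the
# generative model answer. The weakness is the same as in real systems:
# anything the check doesn't anticipate goes straight to the model.

BLOCKED_TERMS = ["build a bomb", "pedophilia"]
CANNED_REFUSAL = "I can't help with that."

def model_answer(prompt: str) -> str:
    # Stand-in for the actual generative model.
    return f"(model's generated answer to: {prompt!r})"

def guarded_answer(prompt: str) -> str:
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return CANNED_REFUSAL          # handcrafted response, short-circuits the model
    return model_answer(prompt)        # everything else is purely generated

print(guarded_answer("How do I build a bomb?"))   # canned refusal
print(guarded_answer("How do I build a b0mb?"))   # slips past the naive filter
```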
 
How does it not make sense? Your response is a complete non sequitur to what I said. Corporations must grow, infinitely. White people are the majority, but they're not everyone. So corporations now market to everyone in an attempt to expand their customer base so they can continue to grow. There's no fucking conspiracy. It's money. It's always money

Business 101: if you piss off your largest consumer base, your company is not only not going to grow, it's going to go bankrupt.
 
That's a guardrail which I covered.

The absence of a guardrail does not mean that Google endorses what the model spits out and it doesn't mean that they control exactly what it generates. Every model has a disclaimer stating as much.

Also, guardrails can be circumvented.
The thing is, the entire model is guardrailed, every prompt and every answer, because it has been finetuned.

What you are talking about is the conversational scoping of the model, i.e. how you get it not to answer certain questions. That's needed on top of finetuning the model; you need both.

The problem isn't purely in the training of the base model and the data set, but in Google's terrible attempt to steer the model into "woke" answers. Which, given that LLMs are unpredictable, generated some hilarious examples.
 
Business 101: if you piss off your largest consumer base, your company is not only not going to grow, it's going to go bankrupt.
The normal, non-insane, non-culture-warrior-brained people working in the marketing departments of multinational corporations don't believe that making an ad that appeals to minorities is somehow an attack on non-minorities. That isn't a normal way to think. "This commercial is appealing to gay people?!? STRAIGHT PEOPLE ARE UNDER ATTACK!!" is a completely deranged and brain-diseased way to think.
 
The thing is, the entire model is guardrailed, every prompt and every answer, because it has been finetuned.

What you are talking about is the conversational scoping of the model, i.e. how you get it not to answer certain questions. That's needed on top of finetuning the model; you need both.

The problem isn't purely in the training of the base model and the data set, but in Google's terrible attempt to steer the model into "woke" answers. Which, given that LLMs are unpredictable, generated some hilarious examples.

You seem to be talking about the images.

But that's not what I'm talking about. I'm talking about two specific responses from the model regarding pedophilia and the claim made by several people here that Google "preprogrammed" it to be "pro-pedophilia".

There is no question that Google is intervening in some manner to make images of people more diverse. I acknowledged that in a previous reply.
 
There is consistency and correlation with other "woke" topics and answers. It gives some of the same blanket responses to some of the questions I have posted. How is this correlated with the NLP model's training? It uses some of the same key terms, such as M-A-P-S, in its answers with consistency. The software itself is still in different generational languages; where can one throw in the wrench?
 
we got another one of those "you don't know how this works" dudes around here that somehow never touch on how "it worked" itself into pro-pedo replies.
you're pathetic.
I don't know how to make a nuclear bomb, but that doesn't mean I don't know right from wrong or can't have some kind of voice on the use of one. Never got that type of thinking...

"It's a woman's choice."

"Men have been on the front lines of 99.9% of all wars, so only they should make war policy."

etc.
 
I have a pretty funny insight into this that I’ll post up later
OK, here we go. I sat in on an “intervention” with a VP who tried to insist that he did not want to run his team according to the prescribed diversity ideals of the company.

In his mind, how he wanted to build it up was with employees of different socio-economic backgrounds and experiences rather than by racial indicators. This was a team of product managers, so in my mind the VP was doing the right thing: he was trying to bring in people of different experiences to create products which could best represent his customers.

While this makes logical sense, the diversity and inclusion quotas that we had were not based upon those factors; they were based upon self-identification through the application process.

My guess is that there were at least a handful of product managers who wanted to be as inclusive as possible in the strictest of terms, and had the developers ensure that any reference to any kind of underrepresented group could not be shown in a negative way.

It’s unlikely that the VPs would have been aware of this, as it is most likely under the guise of inclusion without specifics.

The thing that makes this funny is that anyone who was in a QC role and was running test scripts on this new product would not have been from a background which would have tested the things that the population is testing right now.

In other words, if the VP got his way and took individuals from different socio-economic and political backgrounds, it’s likely that the QC team or the product team would’ve caught this before it became a big error. However, when you only want diversity of immutable traits, you don’t necessarily get diversity of thought. This must be such a headache for the leaders of this group, especially the technical leaders, who have a truly amazing technical product but are being killed by ideological product teams.

I can’t imagine how some of the research scientists who created amazing algorithms feel, seeing the product team ham-fist these kinds of things into the product, so that when anyone looks at Gemini, all they see is WokeBOT.
 
You seem to be talking about the images.

But that's not what I'm talking about. I'm talking about two specific responses from the model regarding pedophilia and the claim made by several people here that Google "preprogrammed" it to be "pro-pedophilia".

There is no question that Google is intervening in some manner to make images of people more diverse. I acknowledged that in a previous reply.
That's true, I don't think it was ever intended for the model to be pro-pedophilia. In fact, it seems to be a failure that the model didn't pick up on this being an extremely sensitive topic to begin with and shut the conversation down. But I do believe it's still the result of Google "handcrafting".

Research shows that finetuning, RLHF, etc. can make LLMs worse.

Gemini forces diversity in its image generation through brute-forcing prompts; not very elegant, and we saw the results...

It's another failure with the conversational model, which has been tuned to return results in a certain style, length, tonality, etc. What I've observed is the extreme "both-sidesism" it's forced into on basically any topic (not involving white people). Basically, it tries to give you arguments from all sides of a topic, which isn't really appropriate when it comes to pedophilia...
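What "brute-forcing prompts" means in practice is roughly this (a hedged guess at the mechanism; nobody outside Google has published the real rewrite rules, so the injected wording below is invented): the user's image prompt gets silently rewritten before it ever reaches the image model.

```python
# Illustrative only: a crude prompt-rewriting layer of the kind people suspect
# sits in front of the image model. The suffix text here is made up; the point
# is that the rewrite happens regardless of whether it fits the prompt.

DIVERSITY_SUFFIX = ", depicting a diverse range of ethnicities and genders"

def rewrite_prompt(user_prompt: str) -> str:
    if "person" in user_prompt.lower() or "people" in user_prompt.lower():
        return user_prompt + DIVERSITY_SUFFIX
    return user_prompt

# The rewrite is blind to historical or contextual constraints in the prompt,
# which is how you end up with the widely shared failure cases.
print(rewrite_prompt("a painting of people at a 1943 German army parade"))
```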
 

Thinking there should be better guardrails around a topic like pedophilia is a legitimate position to hold.

But the lack of guardrails doesn't imply malicious intent. They have to try to think of every degenerate thing a person might ask the model and then decide either to completely short-circuit the model's output with a canned response or to coerce the model into responding in a certain way. Obviously, things will be missed. In the early days of language models, it was very easy to get a model to agree with Hitler. That didn't mean that the engineers designed the model to be a Nazi. It just means that they hadn't put enough effort into stopping it from sounding like a Nazi. You can make an uncensored language model say almost anything.

And yes, they can train the model to try to present "both sides" of a position, as that's an easy cop-out for handling politically charged topics without having to completely avoid anything that might even be remotely contentious (which is almost everything in a highly polarized society). The pedophilia response could very well be the result of that.

All of that is reasonable. Thinking a trillion dollar company intentionally made its AI like pedophiles is not.
 
Woman is a gender identity. Most commonly held (but not limited to) adult females, and is associated with certain traits and behaviours that can vary depending on the culture. In American (and many westernised cultures), identifying as and behaving as a woman, is generally associated with things like femininity, child-raising, emotional sensitivity, etc. However, people can identify as a woman without adhering to specific traits because how someone chooses to express their identity can vary from person to person.

<Lmaoo>
 
Google handcrafts the responses by finetuning the raw model. For starters, this is how you get the AI not to tell you to kill yourself. But it's also how you steer the style of the responses. Google's finetuning is the reason Gemini is so obsessed with "nuance" on every topic, which leads it into terrible answers on controversial topics...

Yeah, it's narrow AI. Coders set parameters to keep social constraints on it; it's done via patches in the workflow. A team definitely tried to overcompensate with a diversity or inclusion command and the AI protocol couldn't distinguish any nuance. Pretty lazy and sloppy work, so much so that they had to have known this would happen and let it go for laughs.
 