Social AI generated Taylor Swift Images spark outrage

That cracks me up almost as much as people being sucked into defending Russian propaganda as the way foreigners really view the US, just to pretend to themselves it pwns the libz.
Huh?
 
They're not killing themselves directly over the image; it's the bullying and tormenting it would cause that would do the trick.
weak-ass parents

tell the kid to ignore it and get off social media

then go after the dad. Wait, they probably don't have one, single mother. See it clearly now?
 
Dunno if you have kids, but in 2024, even if you tell them to get off social media, it doesn't get every other kid off social media.
 
But the creators don't make an affirmative statement regarding it. That's not the reality of the internet or how these images reach the public. They go viral, and the reposts aren't all citing who originally distributed them. Unless you are arguing the creators need to be legally compelled to claim ownership and intent... then OK, that might work.

The number of people who view a fake picture they are led to believe could be real directly impacts how damaging it can be to the reputation, mental health, or safety of the person it's faking. So obviously that matters. When these images are put on the internet, there is no excuse of "I didn't know there was a risk a lot of people could see them."

AI can, with extreme ease, create images the public reasonably mistakes for real. So this one example alone won't account for new laws, but the overall threat will.
If there is no evidence that the creator claimed the pictures as genuine, there is no crime.

As far as forcing creators to identify themselves and declare intention upfront, I would strongly disagree with a measure like that.

This country was founded on principles of anonymous communication. The Federalist Papers, which led to many of the articles and amendments of the Constitution, are one of our country's most fundamental pillars of liberty, and they were all anonymous communication.

Maybe people just shouldn't assume every picture or video they see is real. Not everything on television or the Internet is reality. That doesn't mean it should be a crime.
 
You keep repeating this scenario where the AI image creators make claims about whether an image is genuine or not, where AI images go viral with their creators' personal signatures and explanations of whether they're real, or watermarks that say "fake." They don't; that's the point. You're making up a scenario that doesn't exist in reality as a solution, so it's not a solution.

Maybe people should just let go of the assumption that they are entitled to free use of AI tools regardless of the threat they pose to people.
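As a concrete aside on the watermark point: even if a creator did embed a "this is AI" label, it would typically live in the image's metadata, and metadata does not survive a screenshot, crop, or re-encode. A minimal sketch of what that looks like, assuming Python with a recent Pillow (the file names are placeholders, not anything from this story):

```python
from PIL import Image  # assumes the Pillow library is installed

# Open a (placeholder) AI-generated image and stamp a disclosure
# label into the EXIF "Software" tag (0x0131).
img = Image.open("generated.png").convert("RGB")
exif = img.getexif()
exif[0x0131] = "AI-generated / synthetic image"
img.save("labeled.jpg", exif=exif)

# The label can be read back only if that exact file is passed along.
print(Image.open("labeled.jpg").getexif().get(0x0131))  # prints the label

# A plain re-save without the exif argument (roughly what a screenshot
# or a repost pipeline does) produces a copy with no label at all.
Image.open("labeled.jpg").save("reposted.jpg")
print(Image.open("reposted.jpg").getexif().get(0x0131))  # prints None
```

Which is the point: by the time an image has gone viral through reposts and screenshots, any disclosure the original creator attached is usually gone.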
 
I agree. My scenario isn't a problem in the real world. There isn't this widespread issue of people claiming that AI generated content is actually real, which is precisely why no solution is necessary, because there is no problem.

As far as threats go, if someone is using AI to communicate threats, there are already laws on the books for that. But that's not what's going on here.

What's going on here is celebrity worshippers trying to get new laws passed because they feel their idols are being disrespected.

Disrespect is a form of free expression. You don't have to like it, but it's still constitutionally protected and requires no new laws to address it.
 
The problem is real; your proposed solution isn't a real solution.

Celebs have already sued AI companies over their image and likeness being faked in videos. Last time I checked, Scarlett is a real person in the real world, right?

https://www.theguardian.com/film/2023/nov/01/scarlett-johansson-artificial-intelligence-ad
 
Did you actually read the article you sent?

This company was using her likeness to commit fraud. They were pretending that she was a spokesperson for their brand without her signing a contract with them.

This is already covered under existing fraud and likeness rights laws. Once again, no new laws are necessary.

I have constantly stated in this thread that intent is key, and then you grab an example where the intent is clearly to defraud. She is suing under already existing civil law.

You're kind of making my case for me.
 
Yes I read it, after I read your post where you said people creating AI content that passed as real wasn't an issue. Then I showed you a specific example of it being an issue.

Any user, even a young child, could with shocking ease very convincingly use her likeness for anything with that program, including things that could directly harm her reputation, mental health, and personal safety. This is nothing like owning Photoshop.

You have stated intent is key, then admitted it's nearly impossible to determine when these images are going viral and made by randoms on the internet.

All things that will necessitate a special approach. Just like social media platforms are getting their own laws after a period, so will AI.
 
1. Yes I read it, after I read your post where you said people creating AI content that passed as real wasn't an issue. Then I showed you a specific example of it being an issue.

2. Any user, even a young child, could with shocking ease very convincingly use her likeness for anything with that program, including things that could directly harm her reputation, mental health, and personal safety. This is nothing like owning Photoshop.

3. You have stated intent is key, then admitted it's nearly impossible to determine when these images are going viral and made by randoms on the internet.

4. All things that will necessitate a special approach. Just like social media platforms are getting their own laws after a period, so will AI.
(I added numbers to the quote in order to address each point individually. I have not otherwise made any changes. I haven't learned how to break up posts into paragraphs in the new system yet.)

1. You left out the operative phrasing: a widespread issue. I am sure you can find a handful of examples affecting the tiniest sliver of the population; likely less than 0.0000001% of people will have their likeness abused in this manner. Their options for legal remedy already exist.

2. Ease of use has no bearing on legality. Fake pictures don't harm reputations unless they are believed to be real. And what is this "mental safety" garbage? A nation's civil liberties are far more important than a celebrity's fee-fees getting hurt.

3. It's on a prosecutor to prove the elements of a crime. If there is no evidence of illegal intent, which can be logically deduced from the circumstances and uses of the content (as can easily be done in your Scarlett example), then there is no crime, nor should there be.

4. There is no special approach required. Everything that can be illegally done with AI could have been done with Photoshop or existing video editing software. The laws already on the books are sufficient to handle this new technology. Notice how Scarlett was able to seek remedy under already existing laws. Criminal uses of AI, or uses that expose the creator to civil liability, would be no different. No new solution is necessary. If the laws that are already on the books are enforced, all illegal uses of AI are accounted for.
 
1) Yes, you can find multiple examples of serious legal issues popping up around this novel technology, and they are growing in number and scope. Which is why I predicted it will be regulated with specific laws in the near future.

2) Ease of use does have a bearing. If companies are creating tools designed for anyone to be able to create these fake images that are indiscernible from real ones with a few keystrokes, they will bear legal responsibility along with the person using them.

Defamation has been addressed with laws before; this is a new frontier of it that will take new laws to fight. Whining about celebs is erroneous and deflecting. You don't have to be famous for this to pose a threat. And folks who make music or movies still have rights.

3) If it's too difficult for prosecutors to prove the elements, then instead of just giving up, they will likely make companies make that process easier to investigate.

4) The number of people who could create that Scarlett AI ad without AI programs is incredibly small, and thus not a big threat to the general public. The number of people who can create a believable fake Scarlett video with AI now is pretty much anyone who can type a couple of prompts. Thus an exponentially larger threat that calls for a different response.
 
It is different. In your scenario you are creating the images yourself, not asking a program created by a company to create them via its intelligence and then send them to you. In your scenario you have sourced the photos to alter; with AI, they are media the programmers (who likely don't own the rights) gave the program to copy. Also, using Photoshop takes some skill and an investment of resources and time, and creating fake sex scenes or videos of people committing crimes convincingly is beyond the capabilities of most of the public; with AI, a six-year-old could type a request into a program and create an image that is indistinguishable from the real thing.

It's different.
You completely missed the whole point of "intent." The why is what matters, not the how.

Tell me you understand "intent" before we get into anything else.
 
I didn't miss the point.

The ridiculous example you tried to compare AI to may have a similar intent but, as you described in detail, there are a ton of other completely different factors at play that need to be addressed.

The ability of some Photoshop expert to print a stack of photos and post them next to a bus stop does not pose remotely the same level of threat as an AI program that anyone over the age of 6 can download with a few clicks and use to create convincing fake videos of sex acts, crimes, or other damaging content, then distribute them to the entire world, where they will exist forever, with less effort than it took to write this post.

Like, sure, if we pretend AI just printed stacks of photos and the internet is just a bus stop somewhere, you have a point, but that's a lot of pretending.
 