AI-generated Taylor Swift images spark outrage

1) Yes, you can find multiple examples of serious legal issues popping up around this novel technology, and they are growing in number and scope. That is why I predict it will be regulated with specific laws in the near future.

2) Ease of use does have a bearing. If companies are creating tools designed so that anyone can create these convincing fake images with a few keystrokes, they will bear legal responsibility along with the person using them.

Defamation has been addressed with laws before; this is a new frontier of it that will be met with new laws. Whining about celebs is erroneous and deflecting: you don't have to be famous for this to pose a threat to you. And people who make music or movies still have rights.

3) If it's too difficult for prosecutors to prove the elements of a crime, they won't just give up; they will likely make companies make that process easier for investigators.

4) The number of people who could create that Scarlett AI ad without AI programs is incredibly small, and thus not a big threat to the general public. The number of people who can create a believable fake Scarlett video with AI now is pretty much anyone who can type a couple of prompts. That is an exponentially larger threat, and it calls for a different response.
1. These issues are already addressed under existing law. Stealing a likeness for brand advertising has already been addressed. Parody, lookalike, and photoshopped porn has also already been addressed. There is no new legal frontier to be found here.

2. This is just flat-out wrong, legally speaking. If a company creates products and/or services with many legal uses, and a user abuses that product or service for illegal purposes, the company is not responsible at all. If I buy a new set of kitchen knives and use one of them to stab my wife, the knife company cannot be held criminally or civilly responsible. There is a caveat: the illegal uses can't be advertised by the company. So as long as the knife company doesn't explicitly state "our new knives are excellent for stabbing your wife," it won't be culpable. I know this is an extreme example, but the idea is the same. Brands are generally smart enough not to put illegal uses in their advertisements.

Unnumbered paragraph: Defamation has been addressed, and it doesn't cover parody, satire, lookalikes, or photoshopped content that doesn't involve a specific intent to defraud. That all falls under protected free expression, even if people don't like it. AI content should be no different.

3. Prosecutors don't make laws; legislators do. Unless a law is changed, which would likely be tested at least at a state supreme court level, in-depth analysis of proprietary AI deep learning models could be protected under trade secrets. While the legal system in general is skewed toward the benefit of prosecutors, specific laws are generally designed to balance protecting individual liberties with the public interest in deterring societal harm, which in all likelihood would not extend to AI-generated parody porn absent an intent to defraud.

4. Legally speaking, that doesn't matter. Whether one person is capable of committing a crime or opening themselves to civil liability, or a million people are, it is the conduct itself that is illegal. The number of people able to commit a crime has little bearing on who actually does it.

Everybody who has a gun is capable of easily killing someone. That doesn't make guns illegal or gun manufacturers civilly liable for illegal actions committed with their products. More people own guns than own AI software.
 
1. AI programs will make this too easy to get away with, and thus they will be regulated.

2. This is not wrong. Napster was ordered to halt operations by a federal judge because it was facilitating piracy for the masses with a few clicks of a button.

3. Legislators regulate industries and make laws that allow them to be investigated.

4. Some weapons can kill a few people. Other weapons pose a completely different threat level and are much more heavily regulated. This was not a good example for you to pick.
 
1. That doesn't matter at all with regard to legality.

2. And LimeWire was there two months later, facilitating piracy for the masses with a few clicks of a button, only now with a pop-up warning reminding customers that piracy was illegal. Peer-to-peer file transfers are still around today.

3. There's nothing to investigate. There are thousands of legal uses for AI. If someone is abusing it for illegal purposes, that's on the end user, not the company or industry.

4. It was a perfect example, but here's another similar one: a shovel manufacturer advertises the durability of a new shovel model and its sharp edge for digging. A customer uses the sharp edge to decapitate his wife, then uses the shovel to bury the body and hide the evidence. The shovel company is not responsible.

It's also important to note that neither shovels nor AI software are considered weapons.
 

1) Yes, it does. See Napster/LimeWire.

2) And then, using the legal precedent set by Napster, LimeWire was hit with a $100 million lawsuit, shut down, and forced to disable older versions. That's really the example you went with?

3) If the company is filling its data banks with copyrighted images, easily facilitating copying those into indiscernible fake videos that include nudity or crimes, and claiming that an intelligence it owns created these images, then yes, it shares responsibility for them.

4) OK, let's use a garden tool as an example, though it's dumb and doesn't really apply to media/technology. The same company makes a high-tech shovel that anyone, even a child, can use to inflict harm on a large number of people, from family members to political enemies, with a few button strokes from anywhere, and anyone with access to the internet can get one. It would be regulated more than the normal shovel.
 
1. Illegal activities relating to artificial media creation are almost exclusively intent-based. It doesn't matter how the content is made, just whether or not it violates the law.

How easy it is to create the media also doesn't matter. A skilled porn photoshopper passing off his content as authentic is breaking the law. An end user who uses AI software to make parody porn of a celebrity without claiming it is authentic is not. No matter whose media looks more real, only one of them is committing a crime under this scenario.

2. Piracy and originally created content are not similar in this scenario, because theft of intellectual property is an implicit factor in media piracy. You were the one who brought up Napster, which is irrelevant to the subject at hand: AI-generated explicit content.

3. Public figures likely have thousands of pictures to source from that wouldn't fall under any sort of copyright. Fair use allows even copyrighted material to be used for commentary, satire, parody, and many other purposes, as long as the use isn't just a direct copy of the original media with little to no new contribution. Pornographic parody would obviously be protected under fair use, as the original content is significantly altered.

4. You are literally using an unrelated example of something causing real-life physical harm. AI-generated content has thousands of legal uses. Even illegal uses of AI don't cause any actual physical harm.

Laws can be made to protect people from real-world physical harm, not to protect celebrities and their fans from being offended by expression they find offensive.

Something being in poor taste doesn't make it illegal, nor should it. There are millions of offensive things said and created every day on the internet. We don't need new laws to address each and every one of them. Celebrity worshippers clutching their pearls over parody porn need to take their emotions out of the equation and think about the broader implications of restricting free expression.

If you don't personally like parody porn, you don't have to watch it or create it. Telling everyone else what they can and can't do, when it has fallen under free expression for over half a century, is tyrannical.
 
Haters with TDS will say it is fake, but they are deranged!

This is quite obviously satire and can be done with video editing software very easily.

People stupid enough to believe that this is authentic should not be the reason everyone else's rights to expression get curtailed.

The government's proper role is not to babysit the public or act as arbiters of truth.
 
I didn't miss the point.

The ridiculous example you tried to compare AI to may have a similar intent but, as you described in detail, there are a ton of other completely different factors at play that need to be addressed.

The ability of some Photoshop expert to print a stack of photos next to a bus stop does not pose remotely the same level of threat as an AI program that anyone over the age of six can download with a few clicks and use to create convincing fake videos of sex acts, crimes, or other damaging content, and distribute them to the entire world, where they will exist forever, with less effort than it took to write this post.

Like, sure, if we pretend AI just printed stacks of photos and the internet is just a bus stop somewhere, you have a point. But that's a lot of pretending.

Again, you ignore the entire point of "intent."
 