Here’s What To Do If You’re Deepfaked On YouTube

Artificial intelligence is remarkable in so many ways, especially when it comes to imitating someone. It’s not very hard to simulate someone’s image, likeness, and even voice and music these days, and in most cases that’s a real problem. It’s true that imitation is a form of flattery, but not if it’s causing you reputational harm and/or taking your royalties. That said, most social platforms now recognize that this is an issue, and YouTube now has a policy for exactly what to do when you’re deepfaked.

Deepfaked

While it’s one thing to train on content that you own, it’s quite another to try to replicate the sound of your music or the voice of a singer. If you find a video on YouTube that uses AI to replicate you without your consent, what to do is fairly straightforward.

Where To Start

Begin with YouTube’s Privacy Complaint Process, and be sure to follow the steps all the way through.

According to YouTube, “When evaluating these requests, we’ll consider a variety of factors before removal, like whether the content is altered or synthetic and could be mistaken for real, whether the person making the request is identifiable, or whether the content is parody or satire when it involves well-known figures.”

YouTube goes on to say in its Privacy Guidelines:

  • Report AI-generated or other synthetic content that looks or sounds like you: If someone has used AI to alter or create synthetic content that looks or sounds like you, you can ask for it to be removed. In order to qualify for removal, the content should depict a realistic altered or synthetic version of your likeness. We will consider a variety of factors when evaluating the complaint, such as:
    • Whether the content is altered or synthetic
    • Whether the content is disclosed to viewers as altered or synthetic
    • Whether the person can be uniquely identified
    • Whether the content is realistic
    • Whether the content contains parody, satire or other public interest value
    • Whether the content features a public figure or well-known individual engaging in a sensitive behavior such as criminal activity, violence, or endorsing a product or political candidate

Reading Between The Lines

What all this means is that filing a complaint doesn’t compel YouTube to take action. YouTube also doesn’t indicate how long it will take to hear back about any pending measures, a wait that has been dishearteningly long in the past.

The real problem here is that it’s unclear how YouTube will actually determine whether the video you reported is deepfaked. There’s no reliable detection technology available for this yet, although the platform does indicate that it’s working on one. Plus there’s no specific federal law regulating AI deepfakes yet, and AI copyright is in such a nebulous state that it can’t be relied upon to provide clear direction either.

That means there’s no guarantee that you’ll get the result that you wanted by reporting deepfaked content, but at least YouTube realizes there’s a problem and gives you a place to start.
