YouTube unveils new tools to safeguard creators against AI-generated deepfakes
YouTube is developing tools to detect the use of AI-generated versions of users’ voices or physical likenesses in videos without their consent. In an announcement today, the Google-owned platform said these tools will protect creators on its platform.
The announcement is a welcome development, as the tools address major concerns around the proliferation of deepfakes, which can lead to financial and reputational losses. Many hope it will make the process of identifying and reporting deepfakes more seamless.
Two tools focus on protecting musicians and public figures
The company explained that it is working on two tools. The first is a singing-voice identification tool that would be part of the YouTube Content ID system. This feature would automatically detect AI-generated content that simulates the singing voices of other people on the platform and remove it.
For the second tool, YouTube is developing technology that enables public figures such as creators, athletes, musicians, and artists to detect and remove deepfakes of their faces on the platform. The company noted that these tools, along with its privacy updates and Terms of Service, would be enough to safeguard creators.
However, YouTube’s goal is not to stop people from using AI on its platform; it only seeks to ensure the responsible use of AI and empower creators to control the use of their likeness.
It said:
“We believe AI should enhance human creativity, not replace it. We’re committed to working with our partners to ensure future advancements amplify their voices, and we’ll continue to develop guardrails to address concerns and achieve our common goals.”
Meanwhile, YouTube did not give a timeline for when these features would be available to users, noting only that the singing-voice detection tool still needs some fine-tuning and that a pilot program is planned for early 2025.
YouTube teases technology to prevent AI data scraping
YouTube also hinted that it is working on a tool to prevent AI companies from using content on its platform to train their models without the owners' permission. Describing this as a major issue, the platform said it is investing in systems that will detect and block third parties from scraping data from its platform in violation of its Terms of Service.
However, it acknowledged that creators might want to collaborate with AI companies. Therefore, it is working on ways to allow YouTube creators to control how third parties use their content on the platform.
Despite YouTube's tough stance against AI companies scraping its content to train their models, the platform and its parent company, Google, use that same content for the same purpose, in compliance with their terms of service. YouTube said it plans to keep using the content to develop AI tools responsibly.