Google has prohibited deepfake AI training on Google Colaboratory. BleepingComputer observed the modified terms of use over the weekend.
Colab evolved out of a Google Research initiative in late 2017. It allows anybody to write and execute Python code in a web browser, and is especially suited to machine learning, teaching, and data analysis. Google gives both free and paid Colab users access to GPUs and AI-accelerating tensor processing units (TPUs).
Within the AI research community, Colab has become the de facto platform for demos in recent years. Researchers who publish code often post links to Colab notebooks in their GitHub repositories. Google hasn't policed Colab content very rigorously, which may have opened the door to unscrupulous actors.
Last week, DeepFaceLab users received an error notice while trying to execute the program in Colab: "You may be executing code that is disallowed, and this may restrict your ability to use Colab in the future."
The warning isn't triggered consistently. This reporter ran one of the popular deepfake Colab experiments without difficulty, and Reddit users reported that FaceSwap still works. This suggests enforcement is blacklist-based rather than keyword-based, and that the onus is on the Colab community to report code that violates the new rule.
"We monitor Colab for abuses that go against Google's AI principles, while balancing supporting our mission to give users access to TPUs and GPUs," a Google spokesperson said, confirming that deepfakes were added to the list of activities disallowed from Colab runtimes last month. "Deterring misuse is a dynamic game, and we cannot disclose specific methods, as counterparties could use them to evade detection systems. We have automated mechanisms that detect and prevent misuse."
Archive.org snapshots show that Google updated the Colab terms in mid-May. Earlier prohibitions on denial-of-service attacks, password cracking, and torrenting remained unchanged.
Deepfakes come in various forms, but one of the most prevalent involves one person's face overlaid onto another's body. In some circumstances, AI-generated deepfakes can replicate body movements, microexpressions, and skin tones better than Hollywood CGI.
Viral videos show that deepfakes can be harmless and even entertaining. But hackers exploit them to extort and scam social media users. They've also been used for political propaganda, such as a fabricated video of Ukrainian President Volodymyr Zelenskyy delivering a speech about the war in Ukraine.
One source says the number of deepfakes online climbed from roughly 14,000 to 145,000 between 2019 and 2021. Forrester Research estimated that deepfake fraud scams would cost $250 million by the end of 2020.
The most crucial problem with deepfakes is their dual use: the same technology can be a boon and a bane at the same time. There are no industry-wide ethical standards for machine learning and AI, so it makes sense for Google to establish its own conventions governing access to tools that create deepfakes, especially since they're often used to spread disinformation and fake news, a problem that is bad and getting worse.
Os Keyes, an adjunct professor at Seattle University, backed Google's decision to bar deepfake projects from Colab, but said more must be done to prevent their creation and spread.