Google has unveiled enhancements to its search algorithm aimed at better handling AI-generated deepfakes, motivated by a "concerning increase in generated images and videos that depict individuals in sexually explicit contexts without their consent."
To safeguard individuals, Google has streamlined the process for removing non-consensual images. From now on, when a user requests the removal of such an image, Google will also filter out all explicit results in related searches. Additionally, the company will scan for duplicates of the image across the web and remove them.
Furthermore, the Alphabet Inc. (GOOG)-owned company is refining its ranking systems for queries that have a heightened risk of surfacing explicit fake content.
“The updates we’ve implemented this year have reduced exposure to explicit image results in these types of queries by over 70 percent. With these changes, people can read about the impact deepfakes are having on society rather than encounter pages featuring actual non-consensual fake images,” stated Emma Higham, Google Product Manager, in a blog post.
The company also plans to demote websites that have a high number of removal requests for fake explicit imagery.
“These changes are significant updates to our protections in Search, but there’s more work to be done to address this issue. We will continue developing new solutions to help those affected by this content,” Higham added.
“Given that this challenge extends beyond search engines, we will continue to invest in industry-wide partnerships and expert collaborations to tackle this issue as a society.”