Cleaning up the gigantic landfill
Posted: Sat Dec 28, 2024 9:18 am
Earlier this year, sexually explicit images of Taylor Swift spread like wildfire across the social network X. These AI-generated images highlighted the astonishing ease with which trendy technology can be put to malicious use on the internet. The Taylor Swift "deepfakes" are by no means an isolated case. Over the past year we have seen plenty of misdeeds committed on the Internet with the complicity of AI (the fake photos of former President Donald Trump being arrested, or the fake images in which the former tenant of the White House receives the support of Black voters, for example).
The media are keen to put these kinds of images in the spotlight because the technology that makes them possible, generative AI, is still largely unknown to the general public and is also far from reaching its peak (which is why it will probably take a giant leap forward in the coming years).
However, the ultimate reason AI-spawned "deepfakes" become relevant is that this type of content roams freely across social networks (which are, in a way, its natural playground).
Facebook, Instagram, TikTok, X, YouTube and Google determine how billions of people experience the internet on a daily basis (often for the worse). And that is not changing with the arrival of generative AI. If anything, these platforms' responsibility as "gatekeepers" only grows as it becomes easier to generate synthetic text, images and video. To stop the fake content being furiously pumped through 2.0 platforms, social networks must act as "curators" more than ever, argues Nathaniel Lubin in an article for The Atlantic.
The Internet has been a real cesspool for years, but its stench has become more noticeable with the arrival of AI on the scene.
Online platforms are, at bottom, marketplaces where users' individual attention is the commodity being traded (and where money is made in abundance). On these channels, users are exposed to far more content than their inevitably limited time allows them to take in. On Instagram, for example, Meta's algorithms select, from countless pieces of content, the posts that ultimately make their way into the user's feed. With the emergence of generative AI, capable of producing content in torrents, the already fierce competition for user attention is only intensifying.
And if the content circulating on the network of networks multiplies exponentially, we are inevitably bound to see cases like the Taylor Swift deepfakes mushroom across the Internet. With AI at their side, content creators can produce material faster and more cheaply, but they must also contend with more competition to get their content (synthetic or not) in front of users' eyes. The media face a similar situation: with the help of AI they may speed up production and cut costs, but that will not stop the content they generate from commanding far less space online.
On TikTok and YouTube, the majority of views are gobbled up by a tiny percentage of videos. And generative AI will only widen the gap.
To address the (by no means trivial) problems facing the Internet, online platforms could change their algorithms to specifically favor content produced by real human beings. However, this may not be feasible after all, because the big online platforms are already in the eye of the storm (in the United States, at least) for assuming the power to decide which content deserves the user's attention and which does not. That power clashes with the notion of "free access" that social networks are supposed to champion (even though the algorithms that govern them are anything but neutral).
The responses 2.0 platforms have offered to similar problems in the past are not very encouraging either. Last year Elon Musk replaced the old verification system of Twitter (now X) with a paid blue "check" available to any user willing to pay (and with no merit required beyond their wallet). The results were entirely predictable: Twitter was overrun with impersonation overnight.
Facebook is not doing much better at stopping users who, despite committing all kinds of misdeeds, are still privileged by its algorithm. And TikTok, similarly, gives more weight to the viral "engagement" of specific videos than to the (far from immaculate) track record of the accounts behind them.
Is it possible, then, to clean up the filthy dump the Internet is becoming as AI grows ever stronger?
First, Lubin says, social networks should tone down their (by all accounts unhealthy) obsession with engagement when deciding which content gets visibility over which. Spam, low-quality content, and deepfakes would then lose prominence on 2.0 platforms. Social networks would also do well to listen more closely to what their users say about specific content and factor those ratings into how it is ranked, and to lean on ratings from reputable outside creators (media outlets, for example) to curb the influence of abusive users in their domains. Another way to keep spam and deepfakes on a tight leash is to impose more restrictions on new accounts joining 2.0 platforms.
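None of the platforms mentioned here publish their ranking code, so the following is only a minimal sketch of what Lubin's recipe could look like in practice. The weights, the Post fields and the probation window for new accounts are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Post:
    engagement: float       # likes/shares/clicks, normalized to 0..1
    user_rating: float      # aggregated user feedback on the content, 0..1
    publisher_score: float  # reputation signal from outside raters (e.g. media outlets), 0..1
    account_age_days: int   # age of the posting account

def quality_score(post: Post,
                  w_engagement: float = 0.3,  # engagement deliberately down-weighted
                  w_users: float = 0.4,
                  w_publishers: float = 0.3,
                  probation_days: int = 30) -> float:
    """Blend raw engagement with explicit quality signals."""
    score = (w_engagement * post.engagement
             + w_users * post.user_rating
             + w_publishers * post.publisher_score)
    # New accounts get throttled reach until they build a track record,
    # which keeps freshly created spam farms on a tight leash.
    if post.account_age_days < probation_days:
        score *= post.account_age_days / probation_days
    return score

posts = [
    Post(engagement=0.9, user_rating=0.2, publisher_score=0.1, account_age_days=2),    # viral spam
    Post(engagement=0.4, user_rating=0.8, publisher_score=0.7, account_age_days=400),  # solid report
]
feed = sorted(posts, key=quality_score, reverse=True)  # the report now outranks the spam
```

With engagement counting for less than half of the blended score, the brand-new viral account ends up at roughly 0.03 here while the established, well-rated post scores 0.65 and tops the feed.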
Social networks should also rely on public-health-style tools that regularly examine how digital platforms harm at-risk groups such as teenagers, and implement changes when the damage those platforms inflict is particularly noticeable. This would, however, require greater transparency around the design experiments run by Facebook, TikTok and company.
Another measure social networks can take to keep a tight rein on "deepfakes" is labeling content likely to have been generated by AI. However commendable, though, this measure (which Meta is already preparing to adopt) is doomed to fall short, because the more the large models behind generative AI advance, the harder it will be to tell real content from synthetic content. What's more, the definition of what is "real" may shift over time, as happened with Photoshop: the ubiquity of that image-editing tool means that clearly retouched photos are nowadays accepted as real.
What does seem clear is that in the future, social networks (at least some of them) will probably require their users to provide validated provenance for their content before it can appear in their domains. And that will only reinforce the big online platforms' role as "gatekeepers", even if it comes at the expense of the end user, whose attention is ultimately what is at stake, concludes Lubin.
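What such a provenance gate might look like can be illustrated with a deliberately simplified sketch. Everything below is hypothetical: the TRUSTED_KEYS registry, the on_upload handler and the shared-secret scheme are inventions for this example, and real provenance standards such as C2PA use public-key certificates and signed manifests rather than shared secrets:

```python
import hashlib
import hmac

# Hypothetical registry of keys issued to verified capture devices or
# publishing tools. A real scheme (e.g. C2PA) would use public-key
# certificates instead of shared secrets; this is only a toy stand-in.
TRUSTED_KEYS = {"camera-vendor-123": b"key-issued-at-registration"}

def verify_provenance(content: bytes, issuer: str, signature: str) -> bool:
    """Check that the upload carries a valid signature from a known issuer."""
    key = TRUSTED_KEYS.get(issuer)
    if key is None:
        return False  # unknown issuer: no validated provenance
    expected = hmac.new(key, hashlib.sha256(content).digest(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def on_upload(content: bytes, issuer: str, signature: str) -> str:
    # A platform could reject unverified content outright, or (less
    # drastically) publish it behind a "possibly AI-generated" label.
    if verify_provenance(content, issuer, signature):
        return "published"
    return "published-with-ai-warning-label"
```

Either branch illustrates the trade-off Lubin describes: it is the platform, not the user, that ends up deciding what counts as verified.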