At first glance, it looks like a photograph taken in the 1930s showing the capture of the famous beast of Loch Hoganfield in Scotland: a kind of enormous serpent with a shark's head, of dimensions that would astonish anyone. Hunters and curious onlookers gather around the fallen giant. The picture is striking and seemingly undeniable. But of course it is fake, created by generative AI in a matter of minutes.
The year 2023 has been a genuine breakthrough for generative artificial intelligence technologies capable of producing content, whether images, audio, text or even video, at impressive speed and with a quality that now approaches unprecedented realism. In general, every technology is born neutral, and it is its specific use that determines whether it does good or harm. The case of the Hoganfield Monster is a clear example of a worrying and, in the long run, fundamentally dangerous trend: AI can create fake content in just a few seconds and with alarming realism.
Images, photographs and videos like the false story of the Hoganfield Monster circulate at the speed of light across social networks and websites and, in many cases, reach media outlets that present them as authentic, unfiltered and unverified. "These images spread false information and flood search engines," The Row House noted a few days ago.
It is not a simple problem. Generative artificial intelligence lacks proper regulation, which leads to rights issues (these neural networks are trained on material created by other authors) and job losses (some media outlets have already fired their human staff and replaced them with AI). Above all, it generates confusion that can become very dangerous in certain situations and conflicts.
"Fake news" has invaded the present and is capable of changing the course of political elections, armed conflicts, and social and economic negotiations. Yet the most serious and fundamental danger remains the collapse of our confidence in what we see. The phrase "I saw it with my own eyes" no longer holds, and this will have profound consequences.
"The most pressing problem (with artificial intelligence) is not that it will take our jobs, nor that it will change warfare, but that it will destroy human trust. It will take us into a world where we won't know how to tell truth from lies, where you don't know who to trust." …
– Pablo Mallo (@pitiklinov) October 6, 2023
Daniel Dennett, one of the most prominent philosophers of our time, puts it plainly: "With artificial intelligence we are creating mental viruses, large-scale memes that undermine civilization by destroying trust, testimony and evidence. We won't know what to trust." We are witnessing a new, worrying and accelerating step in the future of disinformation.
The landscape we live in is very delicate. The world changes daily, and a slight push in one direction or another can lead to a problem we did not anticipate yesterday. A piece of shocking news, a video of a bombing, an image of slaughtered victims in a small town can influence, sway or even determine social, political and economic decisions.
In just minutes, anyone could produce, from the couch at home, one of these videos or photographs indistinguishable from reality, and their creation would spread like wildfire across the Internet, reach politicians, be shown on television and land on the front pages… Days later, with luck, the correction would arrive along with other news explaining that it was all a lie, fabricated by an artificial intelligence that falsified reality. But the correction never reaches everyone, and it does not spread as fast as fake news. The image invented by the neural network will keep circulating, leaving a large part of society with a blurred idea of the truth.
The Hoganfield Monster is just a big shark posed with fishermen in the 1930s, but we could soon find AI-generated videos of prime ministers threatening missile launches, false statements from ministers announcing vote-swaying economic measures days before an election, and photographs serving as evidence to provoke an attack. The end point would be to believe nothing and no one, which is as bad as believing everything…
References and more information:
Taylor MacNeil, "Daniel Dennett has been thinking about reasoning and artificial intelligence", TuftsNow