In the future, machine-learning systems could increasingly be used to create and distribute deceptively real images for fake profiles, ransomware campaigns and fake news. The quality of the results keeps improving, and recognizing them is becoming a real challenge.
Unmasking fake profiles with a reverse image search will not work for much longer.
While a profile on Facebook or Instagram that uses stock photos is exposed relatively quickly by a reverse image search, it is now possible to automatically generate images that are not based on any real person or face.
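Reverse image search engines typically compare compact fingerprints of images rather than raw pixels, which is why they catch reused stock photos but fail on freshly generated faces that exist in no index. As a rough illustration only (not the algorithm any real search engine uses), the sketch below implements a simple "average hash" on hypothetical grayscale pixel grids and compares images by Hamming distance:

```python
# Toy image fingerprint ("average hash"): a simplified stand-in for the
# kind of perceptual hashing behind reverse image search.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each pixel brighter than the average contributes a 1 bit, else 0.
    return ''.join('1' if p > avg else '0' for p in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# Two hypothetical 4x4 "images": a slightly brightened copy and an
# inverted (completely different-looking) one.
img = [[10, 200, 30, 220], [15, 210, 25, 215],
       [12, 205, 35, 225], [11, 198, 28, 218]]
brighter = [[p + 5 for p in row] for row in img]
inverted = [[255 - p for p in row] for row in img]

print(hamming(average_hash(img), average_hash(brighter)))  # small: near-duplicate
print(hamming(average_hash(img), average_hash(inverted)))  # large: different image
```

A reused stock photo produces a near-identical fingerprint and is flagged; a newly generated face simply matches nothing in the index.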
Even convincing texts can now be generated with little effort. We want to know what this means for the future.
Profile pictures for fake profiles and hard-to-detect fake news at the click of a mouse?
Using a Generative Adversarial Network (GAN) developed by Nvidia, originally built to model faces for applications such as video games, anyone can create deceptively real portraits of people who do not exist at thispersondoesnotexist.com.
The page generates a new picture each time it is reloaded, and at first sight it looks like a real photo. A closer look reveals artefacts that give the fake away: when two people appear in the picture, or when jewellery or backgrounds look strange, these are signs that the profile photo is not real. However, reload the page a few times and you will find a suitable, sufficiently convincing photo for every need. Fake pictures of children are generated as well.
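The adversarial training behind such generators can be summarized by the standard GAN objective (Goodfellow et al., 2014), which the Nvidia approach builds on. A generator $G$ and a discriminator $D$ play a minimax game:

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

The discriminator $D$ learns to tell real photos $x$ from generated ones $G(z)$, while the generator $G$ learns to fool it. As training progresses, the generated faces become statistically harder and harder to distinguish from real ones, which is exactly why the remaining artefacts keep shrinking.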
Even the automated creation of (almost) human-looking texts is no longer science fiction. Researchers at OpenAI (https://blog.openai.com/better-language-models/) have developed a language model that expands a short paragraph into a complete text deceptively similar to a novel, blog post or online article written by a human. Here, too, a few subtleties let the attentive reader recognize that the text was artificially created.
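Models like this generate text one token at a time, each choice conditioned on what came before. As a drastically simplified stand-in for such a language model (nothing like OpenAI's actual system), the sketch below builds a character-level bigram model from a tiny made-up corpus and samples a continuation from a seed:

```python
import random
from collections import defaultdict

# Toy character-level bigram "language model": illustrates the idea of
# extending a prompt token by token, nothing more.

def train_bigrams(corpus):
    """Record which character follows which in the training text."""
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)
    return follows

def generate(follows, seed, length, rng):
    """Extend `seed` by repeatedly sampling a successor character."""
    out = list(seed)
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:  # dead end: this character was never followed by anything
            break
        out.append(rng.choice(choices))
    return ''.join(out)

corpus = "fake news spreads fast and fake faces fool feeds"
model = train_bigrams(corpus)
print(generate(model, "fa", 30, random.Random(0)))
```

Real language models condition on far more context than a single character, which is why their output reads fluently instead of like the babble this toy produces; the token-by-token sampling loop is the common core.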
What awaits us in the area of fake profiles and fake news?
These technologies are still in their infancy and have not yet reached their full potential. Even so, they can already produce counterfeits that are convincing at first glance and can only be identified as fakes on closer inspection. Further development will probably eliminate these remaining flaws.
It will therefore become increasingly difficult to expose fake profiles in the future. In particular, the automation of tasks previously done by human hands makes these tools attractive to motivated actors pursuing their own purposes.
The danger that such fake profiles and fake news pose on the Internet could therefore grow even larger than it is today, and exposing false profiles and news for what they are will only get harder. Everyone should therefore learn verification techniques and apply them regularly: false friends on Facebook, Xing and LinkedIn can lead to real problems.
Sevencast – the IT security podcast
Listen on the go, in the office or at home and stay up to date!