April 26, 2024

On the trail of audio and video manipulation

BSI theme page on deepfakes

Faked voices or even faked faces can appear very lifelike and realistic. The topic page of the Federal Office for Information Security (BSI) provides information on this subject. Photo: BSI/dpa-tmn


Deceptively real fakes are created with the help of artificial intelligence (AI) and, on casual viewing or listening, are indistinguishable from genuine videos or audio recordings, warns the Federal Office for Information Security (BSI), which explains the technology, processes, risks and countermeasures on a new topic page.

Born from neural networks

Such fakes are known as deepfakes because the methods used to create them are based on deep neural networks.

And how do you avoid falling for a deepfake? Simply knowing that deepfakes exist, and what artificial intelligence is capable of, already helps: you should not trust the authenticity of any video or audio recording per se. Instead, it is important to critically question its content and plausibility.


Technically not always perfect

There can also be technical clues that give fakes away. According to the BSI, these include artifacts at facial transitions, blurred contours, limited facial expressions, or inconsistent lighting in videos. A metallic or monotonous-sounding voice, wrong or unnatural pronunciation, and unnatural background noise or delays are typical flaws in faked voices.


In the future, however, there could also be cryptographic methods that verifiably tie the source of video or audio material to an identity. The BSI provides an overview of such approaches for preventing deepfakes, which also include the many methods for automated detection of manipulated data that researchers are working on.
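The idea of cryptographically tying material to its source can be illustrated with a minimal sketch. The example below is purely hypothetical and not the BSI's proposal: it uses a keyed hash (HMAC) from Python's standard library to bind the exact bytes of a recording to a key holder, so that any manipulation of the content invalidates the tag. Real provenance schemes use public-key signatures so that anyone can verify the source without sharing a secret, but the principle is the same.

```python
import hashlib
import hmac

# Hypothetical illustration: bind a media file's content to a source
# via a keyed hash. Any change to the bytes breaks verification.

def sign_media(media_bytes: bytes, source_key: bytes) -> str:
    """Return a tag tying these exact media bytes to the key holder."""
    return hmac.new(source_key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, source_key: bytes, tag: str) -> bool:
    """True only if the media is unchanged since it was signed."""
    expected = sign_media(media_bytes, source_key)
    return hmac.compare_digest(expected, tag)

key = b"broadcaster-secret-key"          # held by the original source
video = b"...original video bytes..."
tag = sign_media(video, key)

print(verify_media(video, key, tag))                       # True
print(verify_media(b"...manipulated bytes...", key, tag))  # False
```

A manipulated copy of the recording no longer matches the tag issued by the source, which is exactly the property such provenance methods aim for.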

BSI’s Deepfake theme page