It's a fascinating paradox of the tech space that specialized technologies built to solve real problems become better known for the potential threat they pose. For instance, the data analytics tools that marketers use to personalize and enhance consumer experiences are tainted by privacy concerns caused by companies that mishandle data. This is also true of deepfake technology.
On the one hand, the technology's power to transform audio and video makes it a boon for everyone from filmmakers, animation producers, and sound engineers to game developers, marketers, and advertising agencies. On the other, that same capability poses a threat, as others could use it to deceive or misrepresent individuals.
While the story being told around the dangers of deepfake or synthetic media might be the more entertaining one, the true story is that this technology's potential could transform how we create digital media.
It's on those building the technology to make sure the work is done to prevent malicious use.
Synthetic media – also known as deepfakes – is technology that uses machine learning to create artificial video or audio that is indistinguishable from the real thing. In a video, this can be used to swap one face for another or to replace a speaker's voice with a totally different one.
In the simplest terms, the machine learns the traits of a face or voice and transposes those elements onto the subject that was originally recorded. In the entertainment industry, deepfake technology is used to do everything from inserting deceased stars into modern films to de-aging actors, swapping faces, and producing a range of other audio-visual effects.
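To make that mechanic concrete, here is a minimal PyTorch sketch of the shared-encoder, per-identity-decoder design that classic face-swap deepfakes are built on. Everything in it – the layer sizes, the 64x64 input, the variable names – is an illustrative assumption of mine, not a description of any production system.

```python
# Illustrative sketch: one shared encoder learns pose and expression,
# while each identity gets its own decoder that renders that person's face.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_a = torch.rand(1, 3, 64, 64)        # stand-in for a frame of person A
recon_a = decoder_a(encoder(face_a))     # training objective: rebuild A
swapped = decoder_b(encoder(face_a))     # the swap: A's pose, B's face
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```

The key design choice is that the encoder is shared across identities, so it is forced to capture only what the faces have in common (pose, lighting, expression), leaving identity to the decoders – which is what makes the swap possible.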
But if we're talking about something more altruistic, the potential for synthetic speech and video is endless in this respect too. For example, using speech-to-speech technology, David Beckham was able to synthesize his voice in nine different languages for his anti-malaria campaign.
To educate people and inspire support, UNICEF's Deep Empathy project used deepfake technology to recreate the levels of destruction seen in Syria in major Western cities like London and Boston. The list goes on and on.
Used properly, voice conversion technology and video deepfakes can play a crucial role in many sectors. From education to entertainment, we're already seeing various projects dedicated to using the technology for good and to disrupting the way we do things – such as resurrecting JFK's voice to deliver a speech he never gave, or allowing visitors to the Illinois Holocaust Museum to interact with survivors' holograms.
In marketing and advertising alone, it represents an opportunity to vastly reduce the costs associated with bringing in major talent to film or record, because you can use just a few recorded snippets to craft your entire campaign.
Unfortunately, as you likely know from reading the news or scrolling through your feed, there is a lot of negative press around deepfake technology. Bad actors have used it to create revenge porn, craft believable fake news, or as a form of blackmail.
There are two primary ways that synthetic speech and video are being misused. First, criminals have adopted this technology as a way to deceive people. While that may sound all-encompassing, this approach includes any instance where synthetic media is used to mislead people and/or make them do something they wouldn't otherwise do.
The second ethical threat posed by the misuse of deepfake technology is when people synthesize someone's voice or appearance without their permission. This can be particularly harmful when the recording is then used to compromise that person's reputation, income, or wellbeing.
These threats may seem overwhelming to an industry that's still establishing itself – with not much of a regulatory framework to speak of – but I believe that it's the responsibility of synthetic media developers, distributors, and early adopters to lay the groundwork for promoting ethical use. But what does that look like in practice?
Setting our sector up for success requires crafting clear principles to guide how we use the technology. Companies need to do everything they can to ensure that their technology is not used for deceptive purposes, and that can mean restricting who has access to their solution and how it's used.
This also includes being discerning about the projects they take on and the clients they work with. Companies also need to establish actionable parameters to avoid using anyone's voice or appearance without permission, such as actively engaging with all stakeholders and gathering written consent from the subject.
As we continue to build and refine deepfake technologies, our sector also carries the onus to develop (or advise on the development of) detection algorithms that can help identify synthetic media, even when it's mixed with other audio or video. It's also our responsibility to educate the public about this technology so that they can understand how it might be used against them.
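As a rough illustration of what frame-level detection might look like, here is a hedged PyTorch sketch of a binary classifier that scores a single video frame as real or synthetic. Production detectors rely on far deeper backbones plus temporal and audio-visual cues; the architecture, input size, and interpretation of the score below are all assumptions for illustration.

```python
# Illustrative sketch: a tiny binary CNN that assigns a frame a
# probability of being synthetic. Untrained here, so the score is ~0.5.
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.AdaptiveAvgPool2d(1),   # global average over the feature map
        )
        self.head = nn.Linear(32, 1)   # one logit: >0 leans "synthetic"

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h)

detector = FrameDetector()
frame = torch.rand(1, 3, 128, 128)     # stand-in for a preprocessed frame
prob_fake = torch.sigmoid(detector(frame)).item()
print(f"P(synthetic) = {prob_fake:.2f}")
```

In practice, such a model would be trained on labeled pairs of genuine and generated media, and its per-frame scores would be aggregated across a whole clip before any content is flagged.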
With deepfake technology becoming more popular every day – and being published on personal and corporate accounts – social media and content platforms also have a role to play. Just as they review content for copyright infringement, they may also have to protect their users by reviewing posts for malicious deepfake manipulation.
Above all, it will be the role of regulators and policymakers to determine how this technology is used and monitored in the future. This is still very much a blank space in discussions around synthetic media.
There is so much good we can do with deepfake technology. However, if we want to make the most of it, it's vital that the industry work together to mitigate the effect of its misuse.
Governments are already looking at it from a regulatory perspective. Given the success of GDPR in the European Union, the AI sector – which includes deepfakes – is already being regulated to some extent.
We're already seeing so much progress being made in this regard – and I'm excited to see what the future has in store.
Author
Grant Reaber, Chief Research Officer, Respeecher