McAfee, a global security solutions firm, has introduced Project Mockingbird, an AI-powered technology designed to detect deepfake audio.
McAfee Labs has developed an advanced AI model using contextual, behavioral, and categorical detection models to identify AI-generated audio in videos. With a current accuracy rate of 90%, McAfee aims to protect against malicious ‘cheapfakes’ or deepfakes. Steve Grobman, CTO at McAfee, stated that they are still developing and testing the technology, with plans to integrate it into McAfee+ at no additional cost to end-users.
The technology’s applications include combating AI-generated scams and disinformation, providing consumers with the ability to distinguish between real and fake content. The first public demos of Project Mockingbird will be available at CES 2024.
“With McAfee’s latest AI detection capabilities, we will provide customers a tool that operates at more than 90% accuracy to help people understand their digital world and assess the likelihood of content being different than it seems,” Steve Grobman, Chief Technology Officer, McAfee, said.
Why Project Mockingbird
Mockingbirds are a group of birds primarily known for mimicking, or “mocking,” the songs of other birds. While there is no proven explanation for why mockingbirds mimic, one theory is that females may prefer males who sing more songs, so males imitate other birds to expand their repertoires. Similarly, cybercriminals leverage generative AI to “mock” or clone the voices of celebrities, influencers, and even loved ones in order to defraud consumers.
Deep Concerns about Deepfake Technology
Consumers are increasingly concerned about the sophisticated nature of these scams, as they no longer trust that their senses and experiences are enough to determine whether what they’re seeing or hearing is real or fake. Results from McAfee’s December 2023 Deepfakes Survey revealed the following:
Deepfake experiences and perspectives
- The vast majority (84%) of Americans are concerned about how deepfakes will be used in 2024.
- 68% of Americans are more concerned about deepfakes now than they were one year ago.
- Over a third (33%) of Americans said they (16%) or someone they know (17%) have seen or experienced a deepfake scam; this is most prominent among 18–34 year olds, at 40%.
Top concerns for deepfake usage in 2024:
- More than half (52%) of Americans are concerned that the rise in deepfakes will influence elections, while 49% worry deepfakes will be used to impersonate public figures and 48% that they will undermine public trust in the media.
- Worry about the proliferation of scams enabled by AI and deepfakes is also considerable, at 57%.
- The use of deepfakes for cyberbullying is concerning for 44% of Americans, with more than a third (37%) of people also concerned about deepfakes being used to create sexually explicit content.
For over a decade, McAfee has used AI to safeguard millions of global customers from online privacy and identity threats. By running multiple models in parallel, McAfee can analyze a problem comprehensively from multiple angles. For example, structural models are used to understand threat types, behavioral models to understand what a threat does, and contextual models to trace the origin of the data underpinning a particular threat. Utilizing multiple models concurrently allows McAfee to provide customers with the most effective information and recommendations, and reinforces the company’s commitment to protecting people’s privacy, identity, and personal information.
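The multi-model approach described above can be illustrated with a minimal sketch. Everything here is hypothetical: the detector functions, their scoring heuristics, and the simple averaging step are stand-ins invented for illustration, not McAfee's actual models or APIs. The sketch just shows the general pattern of running several independent detectors concurrently and combining their scores into one assessment.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for contextual, behavioral, and categorical
# detectors; each returns a suspicion score in [0, 1] for an audio clip
# described by a dict of (invented) feature flags.
def contextual_score(audio):
    # e.g. does the clip's claimed origin match its metadata?
    return 0.8 if audio.get("origin_mismatch") else 0.1

def behavioral_score(audio):
    # e.g. are cadence and breathing patterns consistent with human speech?
    return 0.9 if audio.get("unnatural_cadence") else 0.2

def categorical_score(audio):
    # e.g. does a classifier flag known AI-synthesis artifacts?
    return 0.7 if audio.get("synthesis_artifacts") else 0.1

def assess(audio, threshold=0.5):
    """Run all detectors concurrently and flag the clip if the
    average score exceeds the threshold."""
    detectors = [contextual_score, behavioral_score, categorical_score]
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda detect: detect(audio), detectors))
    average = sum(scores) / len(scores)
    return {"scores": scores, "likely_fake": average > threshold}

suspicious = {"origin_mismatch": True, "unnatural_cadence": True,
              "synthesis_artifacts": True}
print(assess(suspicious))   # all three detectors score high
print(assess({}))           # a clean clip scores low on every model
```

A real system would replace the heuristic scorers with trained models and likely use a learned combination rule rather than a plain average, but the structure, independent detectors evaluated in parallel and then fused, matches the parallel-analysis approach the passage describes.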