Deepfakes: New Path to Biometric Hacking?

By Deborah L. O’Mara | Jun 15, 2019
Hacker Image by Pete Linforth from Pixabay
Web Exclusive Content

Deepfakes are the latest development in video manipulation. The technique uses artificial intelligence (AI) and deep-learning algorithms, typically implemented in the Python programming language, to generate a video of a person from as little as a single profile photo. It’s a danger in many respects, and according to Hany Farid, a Dartmouth researcher who specializes in media forensics to root out deepfakes, the technique “creates risks of misinformation, election tampering and fraud.”

Deepfakes came under public scrutiny when an altered video of U.S. Speaker of the House Nancy Pelosi, edited so that she appeared to slur her words and be under the influence, went viral, bringing into focus an emerging form of mass deception that will only spread as AI grows more sophisticated, becoming increasingly difficult to debunk.

Software for creating deepfakes formerly required large image data sets of a subject to create a convincing counterpart. Recently, Samsung developed an AI system that can generate a phony clip from a single photo, and the fear is that this breakthrough research will enable bad actors to develop their own tools that replicate the technique. In June, amid growing concern over political campaign ramifications and government security, the Pentagon, through the Defense Advanced Research Projects Agency (DARPA), announced a collaboration with several of the country’s biggest research institutions to get ahead of the deepfake phenomenon.
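For readers curious what sits under the hood, the classic data-hungry face-swap approach trains one shared encoder with a separate decoder per person. The sketch below illustrates the idea; PyTorch is this article's assumption (the research only implies Python), and the layer sizes are illustrative, not any vendor's actual tool.

    # Minimal sketch of the shared-encoder / dual-decoder design behind
    # classic face-swap deepfakes. PyTorch is an assumption; layer sizes are
    # illustrative and far too small for real use.
    import torch
    import torch.nn as nn

    # One encoder shared by both subjects, so both faces map into a common
    # latent space.
    encoder = nn.Sequential(
        nn.Flatten(),
        nn.Linear(64 * 64 * 3, 512),
        nn.ReLU(),
    )

    def make_decoder() -> nn.Sequential:
        # Each subject gets a private decoder, trained only on that face.
        return nn.Sequential(
            nn.Linear(512, 64 * 64 * 3),
            nn.Sigmoid(),
            nn.Unflatten(1, (3, 64, 64)),
        )

    decoder_a = make_decoder()  # learns to reconstruct person A
    decoder_b = make_decoder()  # learns to reconstruct person B

    # Training (omitted) minimizes reconstruction loss for each pair
    # (encoder, decoder_a) and (encoder, decoder_b) on its own subject.
    # The swap itself: encode a frame of person A, decode with B's decoder.
    frame_of_a = torch.rand(1, 3, 64, 64)   # stand-in for a real video frame
    fake_b = decoder_b(encoder(frame_of_a))
    print(fake_b.shape)                      # torch.Size([1, 3, 64, 64])

Single-photo systems like Samsung's replace the per-subject training step with a network pretrained on many faces, which is precisely why they lower the bar for bad actors.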

Security landscape and deepfakes

In video surveillance applications, the AI software used to create a deepfake would likely run on a separate device external to the camera. Even though today’s surveillance cameras have become powerful minicomputers with greater processing power and memory, they would probably not be an effective platform for deepfake activity; the work would most likely be executed on another device or platform, perhaps using some of the video originally generated by a camera.

The real threat vector from deepfakes, especially as identity and access management converges with physical security, could be the biometric algorithms in facial recognition.
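That matching step is often little more than a distance check between face encodings. The sketch below, using the open-source face_recognition library with assumed file names and the library's default tolerance, shows how thin that check can be when no liveness detection backs it up.

    # A naive face-match check of the kind a deepfake might target, using the
    # open-source face_recognition library. File names and threshold are
    # assumptions for illustration.
    import face_recognition

    # Enrollment: compute a 128-dimension encoding from a reference photo.
    enrolled = face_recognition.face_encodings(
        face_recognition.load_image_file("employee.jpg")
    )[0]

    # Authentication attempt: encode the face presented at the door or phone.
    candidates = face_recognition.face_encodings(
        face_recognition.load_image_file("capture.jpg")
    )

    for candidate in candidates:
        distance = face_recognition.face_distance([enrolled], candidate)[0]
        # With no liveness check, a sufficiently realistic synthetic face
        # that lands under the tolerance is treated the same as the person.
        if distance < 0.6:
            print(f"Access granted (distance {distance:.3f})")
        else:
            print(f"Access denied (distance {distance:.3f})")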

With IP-based infrastructures encompassing physical security, IT and operational technology (OT) in a connected environment, new risks from these open landscapes continue to emerge. Adding to the quandary is the use of smartphones for biometrics and multifactor authentication: a compromised smartphone can now provide access to the protected premises as well as to computer, building and automation systems.

Pierre Bourgeix, president of ESI Convergent, a management consulting firm in Cleveland, Ohio, focused on physical security and cybersecurity, said the convergence of physical security with logical (IT) processes and OT, coupled with the ongoing development of new algorithms, could result in a surge in biometrics hacking if safeguards are not designed into security devices, such as facial recognition access control, from the start. Putting credentials on smartphones, which combines physical and logical security, then becomes a “slippery slope,” according to Bourgeix.

“Today, a person can go from street to secure office without ever touching a key or card," he said. "Using an app on their phone or facial recognition technology, they enter the building through a security revolving door or turnstile. An integration with the elevator system automatically brings a car to the lobby floor, opens the doors and whisks the employee to their floor. Once at their desk, if they have authority as a facility manager, they can log in to the building’s HVAC system, make adjustments and read logs directly from their laptop or computer.”

When a device or smartphone is hacked, he said, compromised biometric credentials become the key.

“If I capture the sequence that allowed me to get the key [biometrics], I can get it again," he said. "If we can replicate that key [facial biometrics] that’s being used on a mobile phone to unlock and get physical access, then malicious activity branches into IT and OT. Once we replicate a person, how many times can we create a new identity with a biometric device?”
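The attack Bourgeix describes is, in security terms, a replay: capture one successful unlock sequence and send it again. A standard countermeasure is a challenge-response exchange, sketched below with Python’s standard library; the device key handling and message layout are hypothetical.

    # Sketch of a challenge-response exchange that blunts the replay attack
    # Bourgeix describes: capturing one successful unlock "sequence" is
    # useless because the server's random challenge differs every time.
    # The device key and message format here are hypothetical.
    import hashlib
    import hmac
    import secrets

    DEVICE_KEY = secrets.token_bytes(32)  # provisioned once, kept in secure storage

    def server_issue_challenge() -> bytes:
        # A fresh, unpredictable nonce per authentication attempt.
        return secrets.token_bytes(16)

    def device_respond(challenge: bytes, biometric_ok: bool) -> bytes:
        # The phone only signs the challenge after a *local* biometric
        # match; the biometric template itself never leaves the device.
        if not biometric_ok:
            raise PermissionError("local biometric match failed")
        return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

    def server_verify(challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = server_issue_challenge()
    response = device_respond(challenge, biometric_ok=True)
    print(server_verify(challenge, response))                 # True
    print(server_verify(server_issue_challenge(), response))  # replayed: False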

In the case of deepfakes, it’s not the picture itself but the biometric algorithm that requires purpose-designed safeguards.

“Biometrics need to be safe to use and executed in ways that can’t be hacked," Bourgeix said. "We can use encryption, mask the replication keys and other methods. But we need to build algorithms with security in mind—ones that are hardened and protected, in a container that can’t be accessed or compromised.”
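One of the safeguards Bourgeix names, keeping the stored template encrypted rather than in the clear, might look like the sketch below using the third-party cryptography package; in a hardened design the key would live in a secure enclave or TPM rather than in application memory.

    # Sketch of one safeguard Bourgeix names: never store the biometric
    # template in the clear. Uses the cryptography package; in a hardened
    # design the key would sit in a secure enclave or TPM, not app memory.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice: sealed in hardware-backed storage
    vault = Fernet(key)

    template = b"128-float face encoding serialized to bytes"  # placeholder

    token = vault.encrypt(template)       # what actually gets written to disk
    assert vault.decrypt(token) == template

    # An attacker who copies the stored token cannot reconstruct the face
    # encoding without the key, so the "replication key" Bourgeix warns
    # about stays masked.
    print(token[:24])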

Today’s IP-enabled, connected world has created new vulnerabilities and threats that continue to emerge and magnify as software, AI and deep learning advance. Deepfakes, and their potential effect on physical and logical access control as well as on systems and operations, are the latest threat to connected technologies.

About The Author

O’MARA writes about security, life safety and systems integration and is managing director of DLO Communications. She can be reached at [email protected] or 773.414.3573.
