
Artificial Intelligence and the issues of identity theft in today’s technological age


While identity theft is far from a new concept, perpetrators who steal and impersonate another person’s identity now have an expanded arsenal of tools at their disposal, chief among them the deepfake. Just as businesses have enhanced their operations with the expansion of the internet and the introduction of artificial intelligence (AI), identity thieves have leveraged AI to diversify and advance their efforts.
While many deepfakes are shared for their humor, precisely because they are so unbelievable, hackers have used AI-based technology to create deepfakes for malicious purposes. By using AI to manipulate a person’s voice and appearance to fit their intentions, deepfakes let attackers make their fraudulent requests far more plausible.
The first notable case of an AI-generated deepfake used in large-scale identity theft occurred in March 2019. According to the Wall Street Journal, the perpetrators used AI-based voice software to impersonate the chief executive of a German parent company. Posing as the executive on a phone call, they persuaded the head of the firm’s UK subsidiary to wire $243,000 to a fraudulent offshore account.

Deepfake and AI-assisted fraud tactics
With the rise of deepfakes and generative AI, fraudsters can create synthetic biometric data, including facial features, to deceive biometric systems. This can enable unauthorized access to devices, secure areas or sensitive information, compromising the integrity of biometric-based identity verification.
Deepfakes have the potential to undermine the integrity of personal relationships and professional interactions. Imagine receiving a video purportedly from a friend or colleague, only to discover that it is a meticulously crafted deepfake designed to deceive and manipulate. Such instances can erode trust and sow discord in both personal and professional spheres. Moreover, the financial consequences of deepfake-enabled identity theft can be devastating. From fraudulent financial transactions to extortion schemes and blackmail, the possibilities for malicious exploitation are virtually limitless. Victims may incur significant financial losses and endure lasting reputational damage as a result of deepfake-induced identity theft.

Tactics of deepfake-enabled identity theft
Deepfake tactics used to circumvent biometric authentication and identity verification include:
Manipulating facial recognition systems: Facial recognition systems are widely used for identity verification, but they can be vulnerable to deepfake attacks. Fraudsters can use AI-generated deepfake images or videos to trick facial recognition algorithms into recognizing them as legitimate individuals. This can allow them to gain unauthorized access to accounts, bypass security measures or even gain entry into secure premises.
Exploiting voice cloning technology: Voice cloning, another application of AI, allows fraudsters to imitate someone’s voice with remarkable accuracy. By combining deepfake technology with voice cloning, fraudsters can gain access to user accounts that are protected with voice authentication and use it to authorize fraudulent transactions and conduct other malicious activities.
Creating authentic-looking identity documents: One way fraudsters exploit AI and deepfakes is by creating counterfeit identity documents that appear genuine. With AI algorithms capable of generating highly realistic images, fraudsters can produce forged passports, driver’s licenses or other identification papers that pass visual inspection. These counterfeit documents can then be used to establish false identities and deceive identity verification systems (a defensive sketch of one automated document check follows this list).
Impersonating individuals with deepfake videos: Deepfake videos, which replace a person’s face with someone else’s using AI algorithms, give fraudsters a powerful tool for impersonation. Using deepfake technology, fraudsters can create videos in which they appear to be someone else, potentially targeting individuals’ personal or professional relationships. Beyond social engineering, this technique can be used for financial fraud or even blackmail.
Evading fraud detection systems: Traditional fraud detection systems often rely on rule-based algorithms or pattern-recognition techniques. However, AI-powered fraudsters can employ deepfakes to evade these systems. By generating counterfeit data or manipulating the patterns that AI models have learned from, a technique known as an adversarial attack, fraudsters can trick algorithms into classifying fraudulent activities as legitimate. This complicates fraud detection and increases the risk of undetected identity fraud.
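
On the defensive side, some document forgeries can be caught with simple consistency checks long before a human reviews them. The Python sketch below validates a check digit from a travel document’s machine-readable zone (MRZ) using the ICAO Doc 9303 algorithm; it is a minimal illustration of one such automated check, not a complete verification pipeline, and the sample passport number is the specimen value published in the ICAO specification.

```python
# Minimal sketch: validating an ICAO Doc 9303 machine-readable zone
# (MRZ) check digit, one consistency check an automated document
# verification system can run on a scanned passport or ID card.

def mrz_char_value(ch: str) -> int:
    """Map an MRZ character to its numeric value per ICAO Doc 9303."""
    if ch.isdigit():
        return int(ch)
    if ch.isalpha():
        return ord(ch.upper()) - ord("A") + 10  # A=10 ... Z=35
    return 0  # the filler character '<' counts as zero

def mrz_check_digit(field: str) -> int:
    """Weighted sum of character values, weights 7, 3, 1 repeating, mod 10."""
    weights = (7, 3, 1)
    return sum(mrz_char_value(ch) * weights[i % 3]
               for i, ch in enumerate(field)) % 10

def field_is_consistent(field: str, printed_check_digit: str) -> bool:
    """True if the computed check digit matches the one printed on the document."""
    return mrz_check_digit(field) == int(printed_check_digit)

if __name__ == "__main__":
    # Specimen passport number and check digit from the ICAO 9303 samples.
    print(field_is_consistent("L898902C3", "6"))  # True: digits agree
    print(field_is_consistent("L898902C8", "6"))  # False: altered number
```

A failed check digit does not prove forgery, and a passing one does not prove authenticity; real systems layer many such checks with physical security features and, where available, cryptographic chip authentication.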

Leveraging AI for cybersecurity
Most security solution providers already use AI for cybersecurity. Artificial intelligence is used to automate repetitive tasks such as data collection and analysis, system management, attack surface monitoring and vulnerability detection. AI also broadens situational awareness to enable better decision-making: cybersecurity systems powered by AI can present context for the security information they display, along with suggested responses.
Notably, AI makes cybersecurity systems more effective in the following areas:
Detection of malicious activities – Artificial intelligence can analyze networks to establish benchmarks of safe or regular activity and spot instances that may be anomalous or potentially harmful. AI is a key technology in solutions such as user and entity behavior analytics (UEBA), which can detect threats continuously and in real time (a minimal sketch of this approach follows this list).
Malware detection – AI does not supplant threat intelligence or the identification of malware based on threat signatures. Instead, it examines factors such as file characteristics, code patterns and behavior to determine whether a file or script introduced to the system is safe or malicious.
Handling of zero-day attacks – Because AI can detect malicious activities and malware by behavior rather than by signature alone, it helps cybersecurity systems deal better with zero-day threats that are still unknown.
Threat intelligence – AI is also useful in significantly improving threat intelligence as it can automatically gather security-related information from various sources, including the dark web. Cybersecurity solutions that integrate AI can identify emerging threats, correlate indicators of compromise, and present actionable insights.
Threat management – Another important benefit of artificial intelligence is its ability to ease the workload of human cybersecurity analysts. It helps address alert fatigue brought about by the deluge of security alerts and event information, which usually includes excessive amounts of false positives. AI can correlate data across multiple sources to accurately determine threats and prioritize the most urgent alerts, so they can be addressed in a timely manner.
Security analytics – AI can go through heaps of security logs and incident data to identify trends, detect malicious activities, and examine other metrics that may be missed if organizations solely rely on human security analysts.
Proactive threat hunting – AI can automate the process of finding vulnerabilities and potential threats. With machine learning algorithms, it is possible to continuously monitor network traffic, logs and other data, applying cybersecurity rules and decisions so that threats are found and resolved before they can cause problems.
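
As a concrete illustration of the anomaly detection described in the first item above, the following Python sketch trains scikit-learn’s IsolationForest on a synthetic baseline of routine user activity and flags an off-hours bulk transfer as anomalous. The features, distributions and contamination rate are illustrative assumptions, not a production UEBA design.

```python
# Minimal sketch of UEBA-style anomaly detection with an Isolation Forest.
# All feature names and numbers below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Assumed per-session features: [login hour, MB transferred, failed logins].
# The baseline simulates routine behavior: office-hours logins, modest
# transfer volumes and rare authentication failures.
baseline = np.column_stack([
    rng.normal(10, 2, 1000),   # logins cluster around mid-morning
    rng.normal(50, 15, 1000),  # roughly 50 MB moved per session
    rng.poisson(0.2, 1000),    # failed attempts are rare
])

# Learn what "regular activity" looks like; contamination sets the
# expected share of outliers in the training data.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# Score new events: one ordinary session, and one 3 a.m. bulk transfer
# preceded by repeated failed logins.
events = np.array([
    [11.0, 55.0, 0.0],
    [3.0, 900.0, 6.0],
])
for event, label in zip(events, model.predict(events)):
    status = "anomalous" if label == -1 else "normal"  # predict: -1 = outlier
    print(f"session {event.tolist()} -> {status}")
```

In practice the baseline would be built from real authentication and network logs, retrained as behavior drifts, and the model’s scores fed into alert triage rather than acted on directly.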

The way ahead
AI and deepfakes have given fraudsters unprecedented tools to conduct identity fraud, and with the rise of generative AI that is accessible to consumers and easy to use, these attacks pose a bigger threat than ever. The increasing risks associated with these techniques are significant and require organizations, individuals and security professionals to remain vigilant and adapt their strategies accordingly.
Strengthening identity verification processes, educating users about the risks and employing advanced detection technologies are essential in combating the evolving threats of AI-powered identity fraud. By staying informed and proactive, we can strive to stay one step ahead of fraudsters and protect ourselves from these emerging risks.

Dr. S. Krishnan is an Associate Professor at the Seedling School of Law and Governance, Jaipur National University, Jaipur.
Mr. Anuj Shah is an Assistant Director at the School of Education, Jaipur National University, Jaipur.
