Artificial intelligence and cyber crime: Facing new threats and challenges

by Dr. S. Krishnan & Dr. Vandita Chahar - June 21, 2024, 6:03 am

Cyber-criminals are upping their game, getting more creative and technically advanced in their approach to duping people. In their latest modus operandi, they extort money using coercive techniques like “digital arrest” or issue threats of kidnapping and/or arresting their victims’ children, especially those studying abroad. These criminals have taken their game up a notch and started using Artificial Intelligence software to imitate the voice of a victim’s child in the background during calls, or to send morphed pictures showing the children tied up, to convince parents that their kin is in fact in captivity.
Recently, a retired senior Navy officer was kept under digital arrest at his home by cyber-criminals posing as senior officers of the Mumbai police and was duped of around Rs 68.49 lakh. Later, police found that Dubai-based cyber-criminals were involved in the case.
If one looks at the digital arrest modus operandi, under this type of fraud the cyber-criminals contact victims over VoIP, WhatsApp or Skype calls, mostly from international numbers. They pose as senior police officers or investigation agency officials and claim that a parcel containing drugs was booked with an international courier agency using the victim’s Aadhaar, PAN card or another identity document. They then falsely accuse the victims of money laundering and keep them under “digital arrest” at home through continuous online calls. Victims are warned against sharing this information with anybody during the fraudulent arrest and are coerced into transferring funds into bank accounts provided by the criminals, who claim the accounts belong to a government agency. The victims are told that the money-laundering allegations will be verified: if found true, the money will be seized; if false, it will be returned. This coercion is so intense that some people have even taken loans to transfer the money.
Despite being a retired Commodore-level officer, the victim was kept under digital arrest at his home by the fraudsters for more than two days. The accused sent a fake RBI letter asking him to transfer money into a bank account, stating that it would be held as a security deposit during the investigation. The retired officer withdrew money from his fixed deposits and transferred it into the fraudsters’ bank accounts. Police found that the money went into accounts taken on “rent” by the cyber-criminals in Kerala and Rajasthan, from which it was withdrawn and moved into other accounts held by Dubai-based criminals. Police are now contacting Interpol through the CBI to nab the Dubai-based accused.
Recent security breaches, such as the generative AI-based phishing attacks of 2023 and the 2018 hack of Facebook’s user data, have brought the relationship between artificial intelligence and cybercrime back into mainstream conversation. Particularly in light of the recent rapid growth of AI use across industries and even in our personal lives, it has become more important than ever for the general public to trust an organization’s ability to properly manage its AI systems and secure consumer data against advanced criminal activity.
Digital ecosystems continue to grow and multiply at record levels as organizations and governments seek to provide remote access and services to meet consumer and workforce demand. However, this growth’s unintended side effect is an ever-expanding attack surface that, coupled with the availability of easily accessible and criminally weaponized generative AI tools, has increased the need for highly secure remote identity verification.

Cybercriminals are using advanced AI tools
Nowadays, bad actors are using advanced AI tools, such as convincing face swaps, in tandem with emulators and other metadata manipulation methodologies (traditional cyberattack tools) to create new and largely unmapped threat vectors. Face swaps, created with generative AI tools, present a serious challenge to identity verification systems because they manipulate key traits of an image or video. A face swap can easily be generated with off-the-shelf video face-swapping software and deployed by feeding the manipulated or synthetic output to a virtual camera. Unlike the human eye, advanced biometric systems can be made resilient to this type of attack.
However, in 2023, malicious actors exploited a loophole in some systems by using cyber tools, such as emulators, to conceal the existence of virtual cameras, making these attacks harder for biometric solution providers to detect. This created the perfect storm, with attackers making face swaps and emulators their preferred tools for perpetrating identity fraud.

How GenAI technology is redefining cybercrime
As GenAI continues to increase in its capabilities, so too does its potential for abuse by cybercriminals. AI cybercrime attacks are becoming more common; they are faster, more effective, and much harder to detect than traditional cyber-attacks. This is because GenAI algorithms enable criminals to conduct large-scale campaigns quickly and efficiently by automating tasks that would otherwise require manual effort. They can also assess the weaknesses of potential targets more accurately and develop strategies accordingly. These characteristics make GenAI cybercrime attacks particularly dangerous.
GenAI has been used to create more sophisticated malware and phishing emails. Using machine learning algorithms, attackers can generate highly convincing phishing emails that are difficult to detect and even harder to defend against, as they mimic the spoofed sender almost perfectly. These emails can then be sent out en masse, increasing the chances of someone falling victim to the attack. GenAI chatbots such as GPT-4 add to this risk: they fast-track knowledge mining, surpassing the comprehension rate of humans. Such a system can synthesize and reframe information from many sources quickly and persistently; even when it fails, it can fail and learn faster than we can.
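To see why AI-written phishing is so hard to catch, it helps to look at what older defenses actually check. The sketch below is a deliberately simple, hypothetical heuristic (not any real filter's logic) that scores an email on a few classic red flags; AI-generated phishing is crafted precisely to avoid crude cues like these, which is why real defenses layer ML classifiers, sender authentication (SPF/DKIM/DMARC) and URL reputation on top.

```python
# Toy phishing heuristic: scores an email on a few classic red flags.
# Purely illustrative -- modern AI-written phishing avoids these cues.
import re

URGENCY_WORDS = ("urgent", "immediately", "verify", "suspended", "account locked")

def phishing_score(sender: str, body: str) -> int:
    score = 0
    text = body.lower()
    # Red flag 1: urgency language pressuring quick action
    score += sum(1 for word in URGENCY_WORDS if word in text)
    # Red flag 2: links pointing at a raw IP address instead of a domain
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", body):
        score += 2
    # Red flag 3: free-mail sender writing about your "bank"
    if sender.lower().endswith(("@gmail.com", "@yahoo.com")) and "bank" in text:
        score += 2
    return score

print(phishing_score("support@gmail.com",
                     "URGENT: your bank account locked, "
                     "verify immediately at http://192.0.2.5/login"))  # 8
print(phishing_score("alice@example.com", "Lunch tomorrow?"))          # 0
```

A GenAI-drafted lure with a calm tone, a plausible look-alike domain and a correctly spoofed corporate sender would score zero here, which is exactly the problem the paragraph above describes.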

Deepfake Concern
One of the most concerning developments is the use of deepfake technology, a blend of machine learning and media manipulation that allows cybercriminals to create convincingly realistic synthetic media content. Criminals then use deepfakes to spread misinformation, perpetrate financial fraud, and tarnish reputations, exploiting the trust we place in digital media. In a 2024 incident reported by Hong Kong police, a company lost $25 million after an employee fell victim to deepfake impersonations of his colleagues. The individual participated in a video call in which deepfake versions of the company’s United Kingdom-based CFO and other team members were present. According to authorities, the scammers engineered these deepfakes from publicly accessible video content.

AI algorithms, including machine learning and deep learning, enable systems to identify patterns and make predictions from vast datasets. For example, PassGAN, an AI-driven password-cracking tool, harnesses machine learning algorithms operating within a neural network framework. The tool seems to work: a study on the effectiveness of PassGAN in password cracking, published by Home Security Heroes, found that 51% of passwords were cracked in less than a minute, 65% in less than an hour, 71% within a day, and 81% within a month.
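The PassGAN figures above become less surprising with simple keyspace arithmetic. The sketch below uses an assumed guess rate of 10^10 guesses per second, an illustrative ballpark for fast, unsalted hashes on a modern GPU rig rather than a figure from the study; slow hashes such as bcrypt cut this rate by many orders of magnitude.

```python
# Back-of-the-envelope worst-case brute-force time for an offline attack.
GUESSES_PER_SECOND = 1e10  # assumed rate, not a measured figure

def crack_time_seconds(alphabet_size: int, length: int) -> float:
    """Seconds to exhaust every password of the given length and alphabet."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# 8 lowercase letters: 26**8 ≈ 2.1e11 guesses -> roughly 21 seconds
# 12 mixed-case letters and digits: 62**12 ≈ 3.2e21 -> thousands of years
for alphabet, length in [(26, 8), (62, 12)]:
    print(f"{alphabet}-char alphabet, length {length}: "
          f"{crack_time_seconds(alphabet, length):.3g} s")
```

Tools like PassGAN do far better than this worst case by guessing likely human-chosen passwords first, which is why so many real passwords fall within a minute even though the full keyspace would take far longer.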
To deceive people for financial gain, cyber criminals misuse deepfake technologies. Recently, the Tamil Nadu Police Cyber Wing issued an advisory on rising deepfake scams: fraudsters are creating highly convincing images, videos or voice clones to defraud innocent people and make them victims of financial fraud. The advisory recommends limiting the personal data you share online and adjusting privacy settings, and urges the public to promptly report any suspicious activity or cyber crime to 1930 or the National Cyber Crime Reporting Portal.
In another case, a network of criminals launched an app solely to promote easy loans and money-doubling schemes to trap victims. The involvement of shell companies and cryptocurrency exchanges, with proceeds ultimately feeding Chinese coffers, was among the national losses law enforcement agencies uncovered in this chain of cases; it is a serious threat India needs to address sincerely in the future as well. Human weakness, rather than technology, drives a second category of cases: honey traps and sextortion, which police have also successfully solved. In such cases, the individual’s weakness plays the primary role, and the technology is meaningless absent lustful desires or the wish for companionship. Next to greed, this is the human frailty cyber offenders most prefer to cash in on.
To conclude, Artificial Intelligence is changing the game when it comes to cybercrime. While it makes it easier for cybercriminals to launch attacks, it is also being used to prevent them. As AI technology becomes more advanced, we can expect both more sophisticated cyber attacks and more powerful cybersecurity solutions. To stay ahead of the curve, organizations need to take cybersecurity seriously and invest in the latest AI-powered technologies. By doing so, they can protect themselves from the ever-evolving threat of cybercrime and stay ahead of cybercriminals.

Dr. S. Krishnan is an Associate Professor at the Seedling School of Law and Governance, Jaipur National University, Jaipur.
Dr. Vandita Chahar is an Assistant Professor at the Seedling School of Law and Governance, Jaipur National University, Jaipur.