In recent years, Artificial Intelligence (AI) has become an increasingly popular tool in the world of cyber crime. Cyber criminals can use the powerful capabilities of these technologies to exploit critical business and financial data. In this article, we explore the security risks that are associated with AI and how to combat them.
AI is on the rise and becoming increasingly popular across a range of industries. The AI-powered assistant Microsoft Copilot is due for release later this year, and we expect AI to revolutionise the way we live, work, and interact with technology. AI can be an incredibly useful tool when used appropriately, but in the wrong hands it could pose some serious security risks.
Large language models (LLMs), such as ChatGPT, make it far easier for cyber criminals to write malicious code or generate highly convincing, difficult-to-detect phishing emails. A range of other AI technologies beyond ChatGPT can also be misused. These tools enable criminals to target vulnerabilities, automate attacks, and develop more advanced methods of breaching security systems.
AI technologies are being used by cyber criminals. This poses a serious threat to sensitive information and emphasises the need for strong cyber security measures.
AI technologies, including ChatGPT, have the potential to create and spread malware. Although ChatGPT is designed to detect and reject requests to write malware code, as it does with many other requests that appear harmful or criminal, cyber criminals can bypass these safeguards. By describing the desired behaviour as a series of individual coding steps rather than making a direct request, an attacker can prevent ChatGPT from recognising the request as malware, and it may write the code regardless.
Cyber criminals with limited coding knowledge can use this AI platform to generate sophisticated malware for stealing sensitive data or attacking computer systems. Being aware of these threats and implementing the necessary steps to protect your organisation’s IT assets from malicious code is essential.
Generative AI technologies such as ChatGPT and Google Bard can produce human-like content in a conversational way, generating text that reads as if it were written by a real person. Although this is an extremely powerful capability, it also opens the door for potential misuse, including criminal activity.
One of the most common signs of a phishing email is poor spelling and grammar. However, cyber criminals are now using AI technologies to create phishing emails that appear far more convincing. By using AI, attackers can generate sophisticated emails that closely mimic genuine language, increasing their chances of obtaining sensitive contact and financial information.
Just as AI can create convincing imitations of text, it can also generate realistic text-to-speech and clone voices.
Vishing is a type of phishing attack conducted over the telephone or VoIP systems rather than email. AI voice cloning technology such as ElevenLabs can clone a person’s voice by analysing a recording of it and then generating new speech that sounds exactly like that person, which can then be used in a vishing attack.
The AI is trained to learn the characteristics of someone’s voice, such as their pitch, tone, and accent, and can then create new audio that sounds exactly like that person talking.
There are legitimate uses for this technology, such as customer service or TV and film. When misused, however, it allows an attacker to impersonate someone in order to trick a victim into giving away sensitive information.
The password-cracking tool PassGAN uses AI to compromise passwords in a matter of seconds. It uses a Generative Adversarial Network (GAN) to learn from real leaked passwords and work out how they are constructed, without the need for humans to analyse them manually. This makes password cracking faster and more efficient, and it is a serious threat to your online security: cyber criminals can crack your passwords and gain access to your personal information.
A study published by Home Security Heroes shows that PassGAN can crack 51% of common passwords in less than a minute, 65% within an hour, 71% within a day, and 81% within a month. PassGAN can also crack any 7-character password in under 6 minutes, even one containing symbols.
To significantly reduce the risk of your password being cracked, use passwords of more than 18 characters, which are generally safe against AI password crackers. According to Home Security Heroes, an 18-character password takes PassGAN at least 10 months to crack if it contains only numbers, and 6 quintillion years if it contains symbols, numbers, and lowercase and uppercase letters.
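The scale of these numbers comes down to simple arithmetic: every extra character multiplies the search space by the size of the character pool. The short sketch below is a rough back-of-the-envelope estimate, not a model of PassGAN itself; the guess rate is an assumption for illustration, and real-world rates vary enormously with hardware and hashing algorithm.

```python
import math

def search_space(length: int, pool_size: int) -> int:
    """Total number of candidate passwords for a given length and character pool."""
    return pool_size ** length

def worst_case_seconds(length: int, pool_size: int, guesses_per_second: float) -> float:
    """Worst-case time to exhaust the search space at a given guess rate."""
    return search_space(length, pool_size) / guesses_per_second

DIGITS = 10                  # number-only passwords
FULL = 26 + 26 + 10 + 32     # lower + upper + digits + common symbols = 94 characters

RATE = 1e10  # assumed guesses per second -- an illustrative figure only

SECONDS_PER_YEAR = 365 * 24 * 3600
for length in (7, 12, 18):
    years = worst_case_seconds(length, FULL, RATE) / SECONDS_PER_YEAR
    print(f"{length} chars, full 94-character pool: ~{years:.3g} years worst case")
```

Even under this crude model, an 18-character password drawn from the full character pool yields a search space on the order of 10^35 candidates, which is why estimated cracking times leap from minutes to quintillions of years as length and character variety increase.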
It is important to take preventive measures to protect yourself and your business, as this reduces the risk of being targeted by malicious activity. Below are several steps you can take: