Cyber threats and risks emanating from ChatGPT.

New Cybersecurity Risks and Threats emanating from ChatGPT.

As ChatGPT gains popularity by the day, one question keeps being asked: does ChatGPT pose any cyber security risks and threats? Cyber security experts describe ChatGPT as both a ‘blessing’ and a ‘curse’ at the same time. As with most powerful tools, OpenAI’s chatbot can be abused by malicious actors for malicious purposes.

The bot has built-in guardrails that refuse overtly malicious requests, but some users have discovered workarounds.

Black hat hackers are using ChatGPT to produce software code and scripts despite the restrictions of the AI bot. In addition, malicious content such as email body text is being generated for use in phishing scams distributed via email. Far more convincing phishing emails are being crafted with such AI tools to trick recipients into clicking dubious links or downloading malware and ransomware.

In general, cyber security professionals agree that deepfake audio, video and images, synthetic identities and AI-generated malware will soon be among the biggest cyber threats on the Internet, alongside issues related to copyright and intellectual property. Authenticating the original creator will be extremely difficult going forward.

ChatGPT is being used to create polymorphic malware – malicious software that alters its own code in order to bypass malware detection, making avoidance and removal much more difficult and complex. Because ChatGPT can produce different code for the same request each time (through the re-generate function), it eases the creation of polymorphic malware that evades detectors.
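To see why polymorphism defeats signature-based scanners, consider a deliberately benign sketch: two code snippets that do exactly the same thing but differ superficially (renamed variables, rewritten statements), the kind of variation each re-generate produces. A scanner that matches on file hashes treats them as two unrelated artifacts.

```python
import hashlib

# Two functionally identical snippets with superficial differences
# (renamed variables, restated arithmetic). Hypothetical benign code
# used purely to illustrate the detection problem -- not malware.
variant_a = b"total = 0\nfor n in range(10):\n    total += n\nprint(total)\n"
variant_b = b"s = 0\nfor i in range(10):\n    s = s + i\nprint(s)\n"

# A hash-signature scanner sees two unrelated files, even though the
# behaviour of both snippets is identical.
sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()
print(sig_a == sig_b)  # False: each variant slips past the other's signature
```

This is why defenders increasingly rely on behavioural analysis rather than static signatures when polymorphic code is in play.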

In addition, because ChatGPT is consumed through a public API, all the security concerns that apply to third-party APIs – data exposure, key management and abuse of the endpoint – come into play.

On the other hand, ChatGPT has a lot of potential to enhance threat detection and improve response actions. Although it is not specifically designed as a cyber security tool, it can help identify patterns and anomalies in network traffic and analyse data and user behaviour, all of which are useful for defence. As a powerful natural language processing tool, ChatGPT can be both a protection enabler and a risk to cyber security. The ability to impersonate others and create flawless text can be used to develop malware and ransomware, craft convincing phishing email messages, impersonate individuals without their consent, and spread misinformation.
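The defensive side of "identifying anomalies in network traffic" can be as simple as statistical outlier detection. The sketch below (illustrative only; the data, function name and threshold are assumptions, and a real pipeline would feed in parsed flow logs rather than a hard-coded list) flags samples whose z-score deviates sharply from the baseline:

```python
import statistics

def flag_anomalies(counts, threshold=2.5):
    """Return indices of samples whose z-score exceeds the threshold.

    `counts` is a list of per-minute request counts. Illustrative
    sketch of baseline-deviation detection, not a production detector.
    """
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts) or 1.0  # avoid division by zero
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Minute 6 shows a sudden spike against an otherwise steady baseline.
traffic = [120, 115, 130, 118, 122, 119, 980, 121]
print(flag_anomalies(traffic))  # → [6]
```

Tools like ChatGPT can assist analysts in writing, explaining and tuning exactly this kind of triage script.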

AI-generated Phishing Scams

For hackers, ChatGPT was indeed a game changer, since it generates content free from spelling and grammatical errors – as if a real person were on the other side answering prompts and commands, conversing seamlessly without verb-tense mistakes. Traditionally, phishing emails were betrayed by poor grammar, awkward phrasing and clumsy message flow (even more so when originating from territories where English is not the main language). ChatGPT has raised the standard of these hackers’ English-language email messages. AI-text detectors and regular training (and re-training) of employees are crucial to build the cyber security awareness and prevention skills needed to detect phishing scams and avoid falling victim to them.
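With flawless wording no longer a reliable tell, awareness training shifts to checks that AI cannot fake, such as whether a link's visible text matches where it actually points. A minimal sketch (the class name and example domains are hypothetical) of that classic mismatch check:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Flag anchors whose visible text shows a different domain than the
    actual href target -- a classic phishing tell. Illustrative sketch."""

    def __init__(self):
        super().__init__()
        self._href = None
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")

    def handle_data(self, data):
        # Only inspect text that sits inside an anchor and looks like a URL.
        if self._href and "." in data:
            shown = data.strip().removeprefix("https://").split("/")[0]
            actual = urlparse(self._href).netloc
            if shown and actual and shown != actual:
                self.suspicious.append((shown, actual))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<p>Log in at <a href="https://evil.example.net/login">'
             'https://bank.example.com</a> today.</p>')
print(auditor.suspicious)  # → [('bank.example.com', 'evil.example.net')]
```

The same "hover before you click" habit this automates is exactly what employee training should reinforce, since polished prose alone no longer signals legitimacy.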

Duping ChatGPT into Malware/Ransomware Code

ChatGPT is very proficient at generating code and is designed to reject malicious code generation requests. It is an ethical NLP bot intended to “assist with useful and ethical tasks while adhering to ethical guidelines and policies.”

Malicious actors can be creative enough to trick the AI into generating hacking code that is eventually used in real-life attacks. Forums on the dark web host a number of resources on ChatGPT and how it can be duped into generating malicious code in the form of malware and ransomware.

ChatGPT is hugely popular, registering 100 million users within just 60 days of launch. A technique commonly referred to as ‘insisting and demanding’ during the prompt request has made it possible to coax out executable code. ChatGPT has also come in handy for bad actors generating multiple iterations of the same code (intent) to create polymorphic malware.

“From a quick scan of hackers’ forums it transpired that a good number of newbie hackers are using ChatGPT to create basic low-level code that is capable enough to be utilised in basic-level cyber attacks.” - Francesco Mifsud

    Francesco Mifsud

    I live and breathe cyber security and everything else in the discipline. With around a decade of experience in the industry I have had the opportunity to develop skills in penetration testing, cloud security, reverse engineering & exploit development, application security engineering, management and organisation-wide cyber security strategy. I hold a well-rounded set of security certifications such as OSCP, eWPTX and CISSP and have delivered training & workshops at some of the most prestigious hacking conferences such as DEF CON, BRU CON, BSides London and BSides Manchester.