GenAI makes hacking and scamming child’s play!
Chucky is here!

GenAI makes hacking and scamming child’s play!

GenAI or LLMs or conversational assistants (as we know them and love them) are making their home in our world already. At a mind-boggling pace of change, every week there is a new app, a new research paper, a new innovation being developed. It is gaining widespread popularity across public domains, be it in the form of specialized assistants to answer frequently asked questions, doing speech recognition, creating multiple language translations, text to speech functionality, content and collateral creation, video and image to text conversion, and more. Whether it is the ability to write emails or computer code, we can expect to have daily interaction with GenAI.

This pace of growth should make us apprehensive about data leakage, its effect on data privacy and data compliance. Data is paramount to any AI initiative and without valid data, these AI models are just good-looking toys. It has become time critical for us individuals, the businesses we work with and our governments to collaborate and secure our personal information. Even as the race is on to develop suitable AI regulations, what becomes the most important issue with the advent of GenAI in each aspect of our lives – classrooms, hospitals, online shopping, insurance, customer experience? Data protection and data loss prevention of course! 

Some unexpected instances have already happened due to lack of structure, guidance, and governance. There was the accidental sensitive data leak via ChatGPT by company employees, which included the source code of their prize software. For them, ChatGPT’s usefulness likely overrode security concerns. Such incidents are leading companies to try to get the chatbot shut down, and/ or figure out a stringent approval process. But an iron fist approach may never really work, because we will always look for a shorter path to keep going forward and appear efficient. These security concerns are real and this is the beginning of Shadow AI in businesses. Building out security frameworks, procedures and policies around GenAI is urgent and crucial.

Apart from proper access controls, there is an equal need to educate the users. Turns out that LLMs such as ChatGPT, Gemini, Claude, and Clyde are very hackable! Already, we see enough loopholes for cyber criminals using prompt engineering to poison the models. Take a look at some examples of entry points for hackers, and raise awareness about the best practices we need to adopt:

Article content
Phishing

90% of malicious attempts by criminals start with email (BEC and phishing). With GenAI, it is easy to create AI generated emails and AI generated QR codes posing as lucrative offers. LLMs allow accurate translation, no more spelling mistakes, bad grammar, a hacked appearance – which served as giveaways in the past. Criminals can have a very elegant appearance with legit emails, e-commerce websites, interactive apps and inviting bargains. We need to be far more vigilant because cyber criminals are much better equipped now.

Article content
Prompt injection

Simple questions asked by a good hacker to a customer service bot on a website can reveal internal workings of the model and the CRM data. This can lead to a spill of confidential customer information and PII/ PCI data. A prompt injection can further hijack the model’s output for malicious purposes – tricking the LLM into generating harmful outputs. Data extraction is much easier than ever before, and we need better data loss prevention techniques

Article content
Jailbreak

Chatbots (Gemini, ChatGPT) are subject to jailbreaking and can evade endpoint detection and response (EDR) – which means you can bypass the safety and moderation features to turn around to create social engineering attacks such as phishing and vishing (voice phishing). While LLM developers regularly update their rules to work against known techniques, the attackers always come up with novel approaches. This is especially true for multimodal LLMs which combine images, videos, audio and text. Deepfake videos and voice cloning is a piece of cake, so what you see/ hear is not what you get!

Let’s have the all-important conversations about governance, procedures, policies and security frameworks. And where is law enforcement in this?

They are mobilizing

They are knowledgeable

They are active

They care about victims and they care about sharing their knowledge!

Follow Operation Shamrock as it celebrates it’s one year anniversary next month

To view or add a comment, sign in

More articles by Kiran Khanna

  • Are we done playing defensive yet?

    Unsurprisingly, when we think of cybersecurity, most only think of defensive measures. Especially, at the edge and for…

    1 Comment
  • Making it mainstream #cybercrimedisruption

    Enough talk, it’s time for action! Pig butchering is the crime of our age. It causes massive damage to victims who…

    3 Comments
  • Disrupt Cybercrime

    Our highly digital, rapidly evolving society demands a more intentional, more coordinated, and better resourced…

    1 Comment
  • Hey Mom,Hey Dad!

    Let me share a story - it is about Ana, who has been converted into a digital version. Amazing what AI technology can…

    3 Comments
  • Stay Motivated about Trust&Safety

    Let’s lay it out bluntly – Cyber criminals are all that: masters of technology in the online environment running slick…

    1 Comment
  • Think like the attacker

    Cybercrime is not new, dating back three decades. As the world gets increasingly digital, financial cybercrime has…

    5 Comments
  • The Security Mindset

    It is Cyber Awareness Month and this year there is an increased focus on users as the attack surface. With the bad…

  • Partner Up ! - your key is strategic alliances

    I come from the disciplined world of product marketing having gained experience in messaging, differentiated…

    1 Comment
  • Take a bow, Cloud Infrastructure and Services

    What exciting times! For years now, cloud infrastructure and services have been table-stakes. Nobody really paid…

    1 Comment

Insights from the community

Others also viewed

Explore topics