GenAI makes hacking and scamming child’s play!
GenAI or LLMs or conversational assistants (as we know them and love them) are making their home in our world already. At a mind-boggling pace of change, every week there is a new app, a new research paper, a new innovation being developed. It is gaining widespread popularity across public domains, be it in the form of specialized assistants to answer frequently asked questions, doing speech recognition, creating multiple language translations, text to speech functionality, content and collateral creation, video and image to text conversion, and more. Whether it is the ability to write emails or computer code, we can expect to have daily interaction with GenAI.
This pace of growth should make us apprehensive about data leakage, its effect on data privacy and data compliance. Data is paramount to any AI initiative and without valid data, these AI models are just good-looking toys. It has become time critical for us individuals, the businesses we work with and our governments to collaborate and secure our personal information. Even as the race is on to develop suitable AI regulations, what becomes the most important issue with the advent of GenAI in each aspect of our lives – classrooms, hospitals, online shopping, insurance, customer experience? Data protection and data loss prevention of course!
Some unexpected instances have already happened due to lack of structure, guidance, and governance. There was the accidental sensitive data leak via ChatGPT by company employees, which included the source code of their prize software. For them, ChatGPT’s usefulness likely overrode security concerns. Such incidents are leading companies to try to get the chatbot shut down, and/ or figure out a stringent approval process. But an iron fist approach may never really work, because we will always look for a shorter path to keep going forward and appear efficient. These security concerns are real and this is the beginning of Shadow AI in businesses. Building out security frameworks, procedures and policies around GenAI is urgent and crucial.
Apart from proper access controls, there is an equal need to educate the users. Turns out that LLMs such as ChatGPT, Gemini, Claude, and Clyde are very hackable! Already, we see enough loopholes for cyber criminals using prompt engineering to poison the models. Take a look at some examples of entry points for hackers, and raise awareness about the best practices we need to adopt:
90% of malicious attempts by criminals start with email (BEC and phishing). With GenAI, it is easy to create AI generated emails and AI generated QR codes posing as lucrative offers. LLMs allow accurate translation, no more spelling mistakes, bad grammar, a hacked appearance – which served as giveaways in the past. Criminals can have a very elegant appearance with legit emails, e-commerce websites, interactive apps and inviting bargains. We need to be far more vigilant because cyber criminals are much better equipped now.
Simple questions asked by a good hacker to a customer service bot on a website can reveal internal workings of the model and the CRM data. This can lead to a spill of confidential customer information and PII/ PCI data. A prompt injection can further hijack the model’s output for malicious purposes – tricking the LLM into generating harmful outputs. Data extraction is much easier than ever before, and we need better data loss prevention techniques
Recommended by LinkedIn
Chatbots (Gemini, ChatGPT) are subject to jailbreaking and can evade endpoint detection and response (EDR) – which means you can bypass the safety and moderation features to turn around to create social engineering attacks such as phishing and vishing (voice phishing). While LLM developers regularly update their rules to work against known techniques, the attackers always come up with novel approaches. This is especially true for multimodal LLMs which combine images, videos, audio and text. Deepfake videos and voice cloning is a piece of cake, so what you see/ hear is not what you get!
Let’s have the all-important conversations about governance, procedures, policies and security frameworks. And where is law enforcement in this?
They are mobilizing
They are knowledgeable
They are active
They care about victims and they care about sharing their knowledge!
Follow Operation Shamrock as it celebrates it’s one year anniversary next month