top of page

Practical cyber security issues arising from the meteoric rise of business use of Generative AI

Crossword's Software Architect and AI Practitioner Jan Broniowski explores what the rise of LLMs means for businesses as they start to explore the cybersecurity implications.


GenAI, particularly LLMs, stand out as one of the most remarkable and transformative technologies I have witnessed in IT. This perspective is widely shared within the business, leading to the extensive and rapid adoption of GenAI. Yet, with this technological leap comes a unique set of risks and challenges. So, let’s explore the GenAI impact on the cybersecurity landscape.


LLMs in IT Products


To grasp the profound impact of LLMs on cybersecurity, let’s go back in time. SQL databases were invented in the early 1970s, and to this day are the most common way to store and interact with data in IT systems. These databases provide a well-defined structure, predictable behaviour, and embedded security features. Despite these attributes, it has taken decades to establish secure best practices, refine testing suites, and mitigate risks associated with attacks such as SQL Injection. Presently, LLMs are gaining traction and frequently hold a role in product design as pivotal as that of SQL databases. Given their infancy, inherent vulnerabilities, and unpredictable behaviour, LLMs pose a substantial cybersecurity challenge.


However, one can question making comparisons between SQL databases and LLM systems - given their stark differences in technology, purpose, and the half-century that separates their inception  -  it is this comparison that offers perspective into the magnitude of the challenge we are facing.

SQL

LLMs

In production for ~ 50 years

In production for ~ 1 year

Precisely defined language (Structured Query Language)

Uses human, ambiguous language

Deterministic behaviour

Non-deterministic behaviour by design

Knowledge is stored in defined, easily controlled structures

Knowledge is represented by learned weights in matrices and neural networks

Data sources are known, sources are stored in accessible form in SQL DB

Data sources can be unknown, impossible to learn about the sources from LLM

Data permissions (RBAC) model is available as a core concept

No data-level permission model exists

Attack detection techniques for SQL DBs are well known

Attack detection techniques are not yet established

Vulnerabilities come from software defects

Vulnerabilities may come directly from LLM, as hallucinations and emergent properties

Pentesting tools well established

No tools, very early days

SQL DBs and LLMs are completely different tools, but comparisons can be useful from a cybersecurity perspective.


One of the firsts attempts in structuring LLM vulnerabilities was performed by OWASP foundation in their OWASP Top 10 for LLMs document. OWASP defines ten vulnerability categories, among which several stand out as particularly noteworthy:

  • Prompt Injection  - Crafty inputs leading to undetected manipulations. The impact ranges from data exposure to unauthorised actions, serving attackers goals.

  • Training Data Poisoning  - LLMs learn from diverse text but risk training data poisoning, leading to user misinformation. Over-reliance on AI is a concern.

  • Supply Chain  - LLM supply chains risk integrity due to vulnerabilities, leading to biases, security breaches, or system failures. Issues arise from pre-trained models, crowdsourced data, and plugin extensions.

  • Data Leakage  - Data leakage in LLMs can expose sensitive information or proprietary details, leading to privacy and security breaches. Proper data sanitisation, and clear terms of use are crucial for prevention.

  • Excessive Agency - When LLMs interface with other systems, unrestricted agency may lead to undesirable operations and actions. Like web-apps, LLMs should not self-police; controls must be embedded in APIs.

  • Insecure Plugins  - Plugins connecting LLMs to external resources can be exploited if they accept free-form text inputs, enabling malicious requests that could lead to undesired behaviours or remote code execution.

Integrating LLMs into software products is not without its risks and unresolved issues. The adaptation of the OWASP Top 10 is a step towards addressing some of these vulnerabilities. However, the inherent nature of LLMs, their unpredictability, hallucinations, training data, and the lack of a clear permission model call for ongoing diligence and innovation in securing integrations.


Broader cybersecurity implications


As we expand the discussion beyond the integration of LLMs in products, the broader impact of GenAI on the cybersecurity landscape becomes apparent. Capabilities of generating images (Midjourney, Dall-E), and voice (Eleven Labs), combined with LLM Agents ability to plan actions and strategise, open the door to sophisticated attacks that were not possible before.


In the near future, we have to expect the emergence of new wave of phishing and scamming scenarios. Scenarios in which the attacker orchestrates an army of LLM Agents that can mimic another human, react dynamically to victim actions, replicate voice or even prepare images or videos (Agent with similar capabilities demonstrated by JARVIS from Microsoft).

LLMs excel at generating and rewriting code, a capability that extends beyond aiding developers. In fact, this proficiency enables an attacker generating of hundreds of variants of the same malware, each with a unique source code. Consequently, traditional detection techniques such as signature-based antivirus software and rule-based security are facing obsolescence and will have to adapt to these new threats.


An uncomfortable realisation is that LLMs effectively empower any individual to act maliciously if they choose to. An individual might:

  • find ways to bypass security of commercially available models (chatGPT, Bard, Cohere) and use them maliciously

  • bypass security of open source models (LLaMA, Claude, Vicuna) and use them in any way including fine-tuning for specific malicious use

  • use any of the “dark” models created by hacking and pentesting communities (darkBERT, pentestGPT, wormGPT, Evil-GPT)

The potential for every individual to exploit this technology for malicious purposes, coupled with the rise of autonomous agents scraping the web for targets intensifies the cyber threat landscape significantly.


Adapting to the new reality


There is a silver lining, albeit a controversial one, that LLMs could provide comparable benefits to security teams to that of malicious actors.

  • Cryptography still works. Our public-key cryptography is valid, we still have secure connections, secure data at rest, we can still use cryptographic signatures to verify nonrepudiation and source. Foundations are not changing.

  • Automation and lack of resources was always hunting security teams. Teams can leverage GenAI for automation and continuous testing. Microsoft already released Security Copilot and we expect more tools boosting SecOps productivity.

  • Malicious actors, not bounded by legislations, share knowledge and data among themselves efficiently. LLMs are great aggregators of knowledge and can level the field for security teams.

With the release of open source models, the genie is definitely out of the bottle. The transformation brought about by GenAI is both a challenge and an opportunity. Adaptation, vigilance, and forward-thinking are essential as we embrace and shape this new reality.




1 Comment


The rise of Generative AI in business brings incredible opportunities but also significant cybersecurity challenges. Protecting sensitive data and ensuring system integrity are more critical than ever. Using tools like this can help organizations keep track of their digital assets, ensuring that all devices and systems are secure and up-to-date, minimizing vulnerabilities in this evolving landscape.

Like
bottom of page