AI in Cybersecurity: Risks and Opportunities


Alexei Balaganski
Sep 17, 2024

AI is often hailed as the ultimate tool for addressing cybersecurity challenges, but what happens when hype collides with reality? The meteoric rise of generative AI has captured the imagination of the public. From writing essays to producing art, AI can seemingly do anything. But can it really tackle the complex issues of cybersecurity effectively?

Let’s start with the elephant in the room: ChatGPT is not the pinnacle of artificial intelligence that many believe it to be. In fact, what we often mistake for the GenAI model’s competence is just its astonishing ability to instantly generate a response that sounds coherent and plausible, courtesy of billions of digital monkeys with typewriters.

Unfortunately, what these monkeys still lack is the honesty to admit that they don’t know something. Instead, they will happily generate pages of plausible-sounding nonsense (in the industry, this is politely referred to as “hallucinations”). To quote an article I read recently: “For decades, we were promised artificial intelligence. What we got instead is artificial mediocrity.”

Beyond the Hype: The Limits of Large Language Models in Cybersecurity

While ChatGPT may seem like an all-powerful assistant, it is not designed for or particularly good at many of the tasks necessary in cybersecurity. Large language models can write code, analyze texts, and even assist in decision-making, but their potential applications in a high-stakes field like cybersecurity must be approached with careful consideration.

Generative AI thrives on massive datasets. But in cybersecurity, those datasets often contain sensitive, confidential information that you would rather not share with an external model housed in a cloud data center. Add to that the huge computational overhead that these models require, and we are left with an unsustainable approach in the long term. Imagine the environmental costs: running LLMs with cutting-edge encryption, like fully homomorphic encryption, would take us closer to a climate catastrophe than Bitcoin mining ever did.

So, does this mean AI has no role in cybersecurity? Absolutely not. But we need to distinguish between what is hype and what is practical, scalable, and trustworthy.

Practical AI Use Cases in Cybersecurity: What Really Works

Long before ChatGPT was even a concept, machine learning (ML) techniques were already a staple in cybersecurity tools. From anomaly detection to behavioral analytics, AI-driven methods have long been applied to analyze large datasets and identify outliers that might signify a security breach.

The technology behind detecting anomalies, for instance, has been around for decades, well before the GenAI boom. It’s based on statistical methods that have been refined over the years. But here’s where things get tricky - detecting an anomaly is one thing, but determining whether that anomaly poses a real threat is quite another. With traditional methods, you may end up with a flood of anomalies, but with no real insight into which of them demand immediate action.
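To make that concrete, here is a minimal sketch of the kind of classical statistical approach described above, flagging values that deviate sharply from a historical baseline via a robust z-score. The metric (failed logins per hour), the simulated data, and the threshold are illustrative assumptions, not anything from a specific product.

```python
# A minimal sketch of classical statistical anomaly detection: flag data
# points that deviate strongly from a historical baseline.
# The feature (failed logins per hour) and the threshold are illustrative.
import numpy as np

rng = np.random.default_rng(42)

# Simulated baseline: failed-login counts per hour over the past 30 days
baseline = rng.poisson(lam=5, size=24 * 30)

# Robust statistics: median and median absolute deviation (MAD)
median = np.median(baseline)
mad = np.median(np.abs(baseline - median)) or 1.0  # avoid division by zero

def is_anomalous(value: float, threshold: float = 6.0) -> bool:
    """Flag values whose robust z-score exceeds the threshold."""
    robust_z = 0.6745 * (value - median) / mad
    return robust_z > threshold

# New observations: a quiet hour and a burst of failed logins
for observed in (7, 60):
    print(observed, "->", "anomaly" if is_anomalous(observed) else "normal")
```

Note how this flags outliers but says nothing about whether they matter - exactly the gap described above.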

The most advanced AI/ML tools today do more than just identify anomalies. They correlate them with known attack vectors, connect them to a specific threat framework like MITRE ATT&CK®, and even provide detailed threat artifacts that can be used for further analysis. The real challenge is not detection but correlation - figuring out, for example, which vulnerabilities are actually exploitable in your specific environment. All of this makes for a robust threat detection mechanism, but none of it requires the power of generative AI.
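As a rough illustration of that correlation step, the sketch below enriches a raw anomaly with a MITRE ATT&CK® technique so an analyst can prioritize it. The anomaly categories and the mapping table are assumptions made for the example; real products maintain far richer mappings.

```python
# A minimal sketch of the correlation step: enriching a raw anomaly with a
# known attack technique. The mapping below is illustrative only.
from dataclasses import dataclass

ATTACK_MAPPING = {
    "excessive_failed_logins": ("T1110", "Brute Force"),
    "login_from_new_geo": ("T1078", "Valid Accounts"),
    "large_outbound_transfer": ("T1048", "Exfiltration Over Alternative Protocol"),
}

@dataclass
class Detection:
    source: str
    anomaly_type: str
    technique_id: str = "unknown"
    technique_name: str = "unmapped anomaly"

    def enrich(self) -> "Detection":
        if self.anomaly_type in ATTACK_MAPPING:
            self.technique_id, self.technique_name = ATTACK_MAPPING[self.anomaly_type]
        return self

alert = Detection(source="vpn-gateway-01", anomaly_type="excessive_failed_logins").enrich()
print(f"{alert.source}: {alert.technique_name} ({alert.technique_id})")
```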

Behavioral Analytics: The Long Game in Cybersecurity

Another area where AI/ML shines is in behavioral analytics - tracking user and system behavior over extended periods to identify potential security risks. But again, this is not the domain of ChatGPT. Traditional ML methods are more than capable of profiling behaviors, identifying deviations from the norm, and flagging potential threats based on those deviations.

The challenge in behavioral analytics is not the technology itself – it is the data. To be effective, behavioral AI tools need access to large, diverse datasets. This is why the most effective solutions come from vendors who operate massive security clouds, collecting behavioral data from a wide range of users, systems, and geographies.

What’s key to understand here is that this method requires continuous learning over time. Unlike the hype around instant results from LLMs, behavioral analytics relies on consistent, long-term data collection to provide meaningful insights.
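As an illustrative sketch of that long game, the snippet below keeps a per-user baseline with an exponentially weighted moving average and flags sharp deviations from it. The user data, learning rate, and deviation threshold are assumptions invented for the example.

```python
# A minimal sketch of long-term behavioral baselining: each user's activity
# level is learned continuously, and sharp deviations from that personal
# baseline are flagged. All values here are illustrative.
from collections import defaultdict

class BehaviorProfile:
    def __init__(self, alpha: float = 0.05, deviation_factor: float = 3.0):
        self.alpha = alpha                  # slow learning rate: long-term baseline
        self.deviation_factor = deviation_factor
        self.baseline = defaultdict(lambda: None)  # per-user EWMA of daily activity

    def observe(self, user: str, daily_events: float) -> bool:
        """Update the user's baseline and return True if today looks anomalous."""
        prev = self.baseline[user]
        if prev is None:
            self.baseline[user] = daily_events   # first observation: just learn
            return False
        anomalous = daily_events > self.deviation_factor * prev
        # Continuous learning: the baseline keeps adapting, even after an alert
        self.baseline[user] = (1 - self.alpha) * prev + self.alpha * daily_events
        return anomalous

profile = BehaviorProfile()
for day, events in enumerate([120, 110, 130, 125, 900]):  # sudden spike on day 4
    if profile.observe("alice", events):
        print(f"Day {day}: unusual activity for alice ({events} events)")
```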

Threat Intelligence: Where an LLM Can Truly Make a Difference

Knowing your enemy is a major factor in any kind of warfare, not just in cybersecurity. However, in cybersecurity, this struggle is especially unfair – thousands if not millions of malicious actors are out there against us, and somehow, we must collect enough intelligence about them to understand their methods, techniques, and motives.

Unsurprisingly, the Threat Intelligence industry is growing rapidly - both cybersecurity vendors and customers are in constant need of every bit of information that can give them an advantage in defending against the next cyberattack. Unfortunately, a lot of this information is highly unstructured and difficult to quantify. Entire teams of security researchers spend their days trawling the dark web for bits of intelligence about malicious actors.

The natural language processing capabilities of LLMs can dramatically increase these researchers’ productivity. These AI models can directly interpret textual data like threat reports, social media, and forum posts to assess emerging risks, correlate them with data from other sources, and thus provide up-to-date insights into global cyber threats.
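A minimal sketch of the structured front end of such a pipeline is shown below: regular expressions pull simple indicators of compromise out of an unstructured report, while the LLM step that would interpret actor names, motives, and TTPs is left as a hypothetical stub. The report text and patterns are invented for illustration.

```python
# A minimal sketch of structuring unstructured threat intelligence.
# Regex extraction handles simple indicators of compromise (IOCs); the LLM
# step that would read the surrounding prose is a hypothetical placeholder.
import re

REPORT = """
Actor 'CopperFang' observed phishing targets via hxxp://update-portal[.]example
and exfiltrating data to 203.0.113.45. Dropper SHA-256:
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
"""

IOC_PATTERNS = {
    "ipv4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "sha256": r"\b[a-fA-F0-9]{64}\b",
    "defanged_url": r"hxxps?://\S+",
}

def extract_iocs(text: str) -> dict:
    """Return all IOC matches per category."""
    return {name: re.findall(pattern, text) for name, pattern in IOC_PATTERNS.items()}

def summarize_with_llm(text: str) -> str:
    # Hypothetical placeholder: a real pipeline would send the report to an
    # LLM to extract actor names, motives, and TTPs in natural language.
    raise NotImplementedError("LLM integration not shown here")

print(extract_iocs(REPORT))
```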

Can AI Handle Automated Incident Response?

One of the most controversial promises of AI in cybersecurity is the potential for automated incident response. In theory, AI could detect a threat and neutralize it without human intervention. In practice, though, there’s a significant trust gap. Many companies remain wary of handing over control of their incident response processes to an AI, no matter how advanced. A poorly designed AI could do more harm than good: imagine it shutting down critical manufacturing systems because it misinterpreted a benign anomaly as a serious threat.

However, we are seeing a shift in attitudes. The explosion of ChatGPT’s popularity has made organizations more open to the idea of AI taking on more responsibility in their security operations. But it’s a gradual process. Many companies are opting for a phased approach, first using AI in a “dry run” mode, where it identifies threats but does not take action. Only after extensive testing do they move to a more automated setup.
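The sketch below illustrates what such a phased, dry-run setup might look like: the same playbook logic runs in both modes, but only logs its intended actions until trust has been established. The action names and playbook rules are assumptions made for the example, not taken from any specific product.

```python
# A minimal sketch of "dry run" incident response: identical playbook logic,
# but no action is taken until the dry_run flag is switched off.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("response")

class ResponseEngine:
    def __init__(self, dry_run: bool = True):
        self.dry_run = dry_run

    def respond(self, detection: dict) -> None:
        for action in self._playbook(detection):
            if self.dry_run:
                log.info("DRY RUN - would execute: %s", action)
            else:
                log.info("Executing: %s", action)
                # real automation (EDR isolation, account disable, ...) would go here

    @staticmethod
    def _playbook(detection: dict) -> list:
        if detection.get("technique_id") == "T1110":   # brute force
            return [f"lock account {detection['account']}",
                    f"block source IP {detection['source_ip']}"]
        return ["open ticket for analyst review"]

engine = ResponseEngine(dry_run=True)   # start cautious; flip only after extensive testing
engine.respond({"technique_id": "T1110", "account": "alice", "source_ip": "198.51.100.7"})
```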

But even with this cautious approach, the question remains: should we trust AI to make these decisions for us? In most cases, the answer is still no – at least not without significant oversight from human operators.

Finding the Balance Between Technology, Risk, and Trust

AI undoubtedly has a role to play in the future of cybersecurity, but we need to keep our expectations grounded in reality. Generative AI is not the silver bullet that many make it out to be - it’s useful in specific contexts, but far from a game-changer in cybersecurity. Instead, we should focus on leveraging the right kind of AI for the right tasks.

As with any emerging technology, trust is earned, not given. In cybersecurity, where the stakes are high, it’s crucial to proceed with caution, ensuring that AI is used to complement human expertise rather than replace it. After all, AI may help us detect threats faster, but it’s human judgment that ultimately keeps our systems safe.

If you’re interested in learning more about AI applications from real human experts, you might consider attending the upcoming cyberevolution conference that will take place this December in Frankfurt, Germany. AI risks and opportunities will be one of the key topics discussed there.


KuppingerCole Analysts AG
Roles & Responsibilities at KuppingerCole

As KuppingerCole's CTO, Alexei is in charge of the company's IT needs and operations, as well as R&D and strategic planning in the evolving technology space. He oversees the development and operation of KuppingerCole's internal IT projects that support all areas of the company's business. As Lead Analyst, Alexei covers a broad range of cybersecurity topics, focusing on areas such as data protection, application security, and security automation, publishing research papers, hosting webinars, and appearing at KuppingerCole's conferences. He also provides technical expertise for the company's advisory projects and occasionally supports cybersecurity vendors with their product and market strategies.

Background & Education

Alexei holds a Master's degree in applied mathematics and computer science, majoring in statistics and computational methods. He has worked in IT for over 25 years, in roles ranging from writing code himself to managing software development projects to designing security architectures. He has been covering cybersecurity market trends and technologies as an analyst since 2012.

Areas of coverage

Information protection and privacy-enhancing technologies
Application security
Web and API security
Cloud infrastructure and workload security
Security analytics and automation
Zero Trust architectures
AI/ML in cybersecurity and beyond