The Human Vector


Jürgen Schulze
Nov 03, 2023

The side effects of (re)generative AI impacting cyber security

The professionally paranoid can't help but look at ChatGPT and its siblings from a risk perspective. Well, at least initially. We tend to think in risk vectors, threat actors and the like. Leaving all the innovative benefits of the technology aside, there are sensitive elements that require attention. This is how we currently deal with this kind of innovation: impulse-driven …

In a first "impulse", we try to fit what we see into the simple equation where risk equals probability times damage. As a result, we get nervous as, looking at ChatGPT and its siblings, we can do the math neither on probability nor on the potential damage that the use of this technology could bear for us. The deeper we drill, the more questions we get. Scientifically sound answers drip through the tickers only at a slow pace since many of those require research time prior to assessing the effects. And that does not happen over night.

In a second "impulse", we focus on the obvious: the elements, that we are familiar with and which resonate the most in public perception. I.e. those clickbaits which constantly make headlines. This can be the result of a first, publicly prominent traumatic experience in adopting a new technology. It can also be intelligent marketing where our expectations are repetitively framed in order to overshadow those risks, that are not within the "cozy" technology coloured framework of the cyber community.

In a third "impulse", we call for regulation and put the responsibility on the shoulders of the lawmakers as we're lacking all means of dealing with what we can't combat in the very moment. Interestingly enough, that typically does not lead to short term solutions but to a reputation disaster of the cyber community where the classic perception of us hindering innovation is taking over the public discourse. Regulation, on top, is perceived as evil. Even more than the targets of the regulatory efforts. Whoever they are.

In a fourth "impulse", and in absence of any progress in dealing with the tangible and intangible risks, we'll blame what will inevitably go wrong, to those who are putting this new technology to the practice and, in absence of protective measures, awareness and training, unavoidably hit wrong buttons on the way. I call that the "Lieschen Mueller-effect". The blame goes to the weakest link in the chain.

After all those impulses, we'll realize that we wasted time, effort and money fighting a genie that is irreversibly out of the bottle, rather than understanding the various risk factors, what they entail, how they interdepend and which experts to consult in dealing with them and, ultimately, mitigating their impact.

In our workshop, I'll share with you some of the findings from my book and invite you to jointly revisit a selection of those factors that detrimentally impact our cyber security infrastructure, such as: Trust, Credibility, Identity, Authenticity, Context, Value, Bias, Control, Intellectual Property, Confidentiality, Convenience, Liability, Ethics, Integrity.

We'll go beyond the obvious and gain a better understanding of how to consciously and responsibly put new technology into action. All of that while increasing security awareness around the use of (re)gen AI.


Jürgen Schulze has been working in IT since 1984, from 2001 onward primarily in the field of information security. During this time, he has also been involved in the use of natural language understanding (NLU). Among other things, he developed the concept for a method for identity-free targeting of online ads. Based on his concept of contextual classification of unstructured legacy data at rest, PwC Deutschland developed "Smart Information Inventory", an NLU-based solution that identifies critical content in unstructured data for further processing in compliance with regulatory and legal requirements. He is the author of several technical books and is currently working on a narrative technical book that will answer the question of what happens to our digital legacies when we can no longer take care of them. He is also working on "Human Centric Digital Governance" with regard to cybersecurity. To this end, he published a "Point of View" with best-selling author Maike van den Boom at PwC Germany in 2021, titled "Kann Cyber Sicherheit glücklich machen?" (Can Cyber Security Make You Happy?). His book "ChatGPT - Das perfekte Versprechen" (ChatGPT - The Impeccable Vow) has been published on Apple Books and Amazon Kindle in German. In it, Jürgen addresses aspects of the use of chatbot technologies that have so far not been considered in this context.