Hackers use ChatGPT, too

Generative AI can be a blessing for cyber security. But it can become a curse if companies and staff are negligent.

Not even a year and a half has passed since the release of ChatGPT, yet generative AI, with its enormous disruptive potential for many areas of business, has spread like digital wildfire. A Bitkom study1 published just nine months later reported a "noticeable boost" for the German economy: 15 percent of the companies surveyed had already begun to use generative AI in practice, and 68 percent consider AI the most important technology of the future. Cyber security is an area of central importance here: the use of generative AI opens up new use cases but also demands careful examination of the risks. When integrating these technologies, high security standards must be observed so that innovations are implemented securely and protected efficiently against potential threats. Generative AI systems represent a revolutionary development for cyber security because they have the potential to redefine protection mechanisms and adapt them in a user-oriented manner. There is a whole range of areas in which generative AI can offer companies significant added value for cyber security. Some examples at a glance.

Phishing detection and defense

Phishing attacks are considered one of the most common cyber threats because they primarily target the human factor. Thanks to their extensive understanding of language, generative AI models can critically scrutinize emails and recognize potential phishing attempts, including increasingly sophisticated methods that previously slipped past filter mechanisms. Companies can thus better protect their employees against such attacks while minimizing the likelihood of data theft and other security-related incidents.
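As a minimal sketch of how such an email check might be wired up: the prompt assembly mirrors the idea described above, while `llm_classify` is a hypothetical stand-in for a real language-model call (a crude keyword rule keeps the example self-contained).

```python
# Sketch of LLM-based phishing triage. The function names and prompt
# wording are illustrative assumptions, not a specific product's API.

def build_prompt(subject: str, body: str) -> str:
    """Assemble the instruction handed to the model for each email."""
    return (
        "You are an email security analyst. Classify the following "
        "message as PHISHING or LEGITIMATE and give one reason.\n\n"
        f"Subject: {subject}\n\nBody: {body}"
    )

def llm_classify(prompt: str) -> str:
    """Hypothetical stand-in for a language-model call; a crude
    keyword rule keeps the sketch deterministic and runnable."""
    suspicious = ("verify your account", "click here", "urgent", "password")
    text = prompt.lower()
    return "PHISHING" if any(s in text for s in suspicious) else "LEGITIMATE"

def triage(subject: str, body: str) -> str:
    """Route one email through the (mocked) model and return its label."""
    return llm_classify(build_prompt(subject, body))
```

In a real deployment the keyword rule would be replaced by an actual model call, and the resulting label would feed into the mail gateway's quarantine logic.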

Threat intelligence

The ability to filter and interpret relevant information from a flood of security data is crucial to staying ahead of a cyber attack. Generative AI can act as a kind of "intelligent assistant" here, supporting security analysts in summarizing threat intelligence from various sources. Here, too, the AI's comprehensive understanding of language helps to evaluate unstructured data such as social media posts and news reports and integrate it into the overall threat picture.
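A small illustration of the pre-processing step this implies: before a model summarizes the feeds, indicators of compromise can be pulled out of the unstructured text and deduplicated. The regex and function name are simplified assumptions for the sketch.

```python
import re

# Illustrative consolidation of IPv4 indicators from unstructured
# threat feeds (social media posts, news items) prior to summarization.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def extract_iocs(snippets: list[str]) -> set[str]:
    """Collect the distinct IP addresses mentioned across all snippets."""
    found: set[str] = set()
    for text in snippets:
        found.update(IP_RE.findall(text))
    return found
```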

Automating incident response

No company is immune to attacks, but responding quickly and effectively can significantly reduce the extent of the damage. AI can act as an assistant to human analysts, for example by processing and prioritizing many simultaneous security alerts. And thanks to its ability to understand and write source code, AI can help developers close existing security gaps more quickly and efficiently through software patches.
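The alert-prioritization idea can be sketched as follows; the `Alert` fields and the scoring are assumptions, and a production system would weigh far more signals.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str           # e.g. the tool that raised the alert
    severity: int         # 1 (low) .. 5 (critical)
    asset_critical: bool  # does it touch a business-critical asset?

def prioritize(alerts: list[Alert]) -> list[Alert]:
    """Order alerts so analysts see the riskiest first; alerts on
    critical assets break ties between equal severities."""
    return sorted(alerts, key=lambda a: (a.severity, a.asset_critical),
                  reverse=True)
```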

A regular topic for supervisory and advisory boards

The potential of generative AI in cyber security presents companies with significant opportunities to strengthen their security architecture. Supervisory boards are already required to regularly assess the appropriateness and effectiveness of risk management and the internal control system. With appropriate investments and the proper use of AI technologies, risks can be mitigated and competitive advantages achieved. This is why the topic belongs regularly on the agenda of board meetings.

New risks in the area of cyber security

The cat-and-mouse game typical of cyber security continues with AI technology. Attackers are likewise using generative AI, for example to produce ever better phishing emails that appear linguistically almost perfect and no longer just cover the classic "Your parcel cannot be delivered - please click here!" scenario, but take extremely creative approaches. This makes the use of AI on the defense side unavoidable: even the most sophisticated awareness training will not prevent employees from falling for a near-perfect attack email. AI, however, can learn an individual's typical communication patterns and thus flag messages that do not fit the pattern.
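The pattern-matching idea in the last sentence can be caricatured in a few lines; the profile format and the checks are deliberately crude stand-ins for a model that actually learns a sender's style.

```python
# Toy stand-in for AI that learns per-sender communication patterns:
# a message is "in profile" if it uses the sender's usual greeting
# and sign-off. The profile keys are illustrative assumptions.

def fits_profile(message: str, profile: dict) -> bool:
    """Flag messages that deviate from the sender's known habits."""
    text = message.strip().lower()
    return (text.startswith(profile["greeting"])
            and text.endswith(profile["signoff"]))
```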

The Samsung case: secret data made public

Back in April 2023, Samsung discovered that engineers had pasted confidential data into ChatGPT in order to fix errors in source code. This information was unintentionally leaked to the public. Many employees are unaware that, in some situations, data entered into generative AI models is used for training purposes and processed outside the EU. This happens, for example, when employees use their private ChatGPT account to generate emails or texts for their professional work. According to a study by Cisco, 62 percent of respondents have already entered information about internal processes into generative AI tools.2

Security-relevant data can be compromised in this way: an attacker can feed specific inputs to an AI model and draw conclusions about the original training data from its outputs. Nevertheless, it is essential for companies to give employees the opportunity to work with AI. This risk can be averted with an "in-house GPT" approach, which ensures that all data remains within the secure boundaries of the company network and is neither used for external training purposes nor processed outside the EU.

This makes it possible to give employees access to generative AI while avoiding these risks. E.ON, for example, offers its 74,000 employees "E.ON GPT"3, which significantly simplifies complex research processes on energy industry issues, among other things.
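On the technical side, one simple building block of such an in-house approach is a gateway rule that only lets AI requests reach the internal endpoint; the hostname below is a hypothetical placeholder, not any company's actual infrastructure.

```python
from urllib.parse import urlparse

# Allow-list of in-house AI endpoints (hostname is a hypothetical
# example, not a real deployment).
INTERNAL_AI_HOSTS = {"gpt.intra.example.com"}

def is_allowed(endpoint_url: str) -> bool:
    """Permit a request only if it targets an in-house AI endpoint."""
    return urlparse(endpoint_url).hostname in INTERNAL_AI_HOSTS
```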

A look into the future

As AI continues to develop, cyber security will become even more important than it already is. The more we integrate generative AI into our corporate processes, the more we will have to pay attention to the integrity and security of company data.

Both supervisory boards and management boards will have to pay more attention to trends and innovations in the field of AI and assess the associated opportunities and risks. Their responsibility will not only lie in monitoring and supporting the transformation through AI, but also in ensuring that their company's cyber security strategy keeps pace with technological developments.

Management will need to actively help shape a corporate culture that balances innovation and security, making the company both competitive and resilient to cyber threats.

1 Bitkom (2023): Deutsche Wirtschaft drückt bei Künstlicher Intelligenz aufs Tempo.
URL: https://www.bitkom.org/Presse/Presseinformation/Deutsche-Wirtschaft-drueckt-bei-Kuenstlicher-Intelligenz-aufs-Tempo#_ [accessed 31.01.2024]
2 Cisco (2024): Cisco 2024 Data Privacy Benchmark Study.
URL: https://www.cisco.com/c/en/us/about/trust-center/data-privacy-benchmark-study.html [accessed 31.01.2024]
3 E.ON (2023): Helferin mit Energieexpertise: E.ON schaltet eigene Generative Künstliche Intelligenz live.
URL: https://www.eon.com/de/ueber-uns/presse/pressemitteilungen/2023/eon-schaltet-eigene-generative-kuenstliche-intelligenz-live.html [accessed 31.01.2024]