How to safeguard your business against cyber risk and artificial intelligence threats

Artificial Intelligence (AI) is changing the way we live and work. As consumers, we already use and interact with many manifestations of this technology, from AI chatbots that guide us through online processes to smart home devices that respond to our voices.

Mark Hendry
Published: 10 Apr 2024 Updated: 10 Apr 2024

AI adoption gained momentum after OpenAI released ChatGPT for general use in late 2022. This heralded a new era of generative AI and began a broader conversation about the emerging technology.

There’s much more to come. Businesses are using generative AI to create text, audio and visuals, and Machine Learning (ML), which analyses vast data sets to identify patterns beyond human ability.

The technology has already revolutionised industries ranging from healthcare (such as AI-enabled diagnosis) to energy supply (for instance, by optimising green energy generation and storage to meet predicted demand). Increased adoption of automation will ease the burden of simple, repetitive tasks on people, as well as enabling the expansion of more sophisticated autonomous systems such as driverless vehicles.

What are the security risks and opportunities relating to AI?

There are many cyber-security implications relating to AI. Here, we’re focusing on two important cyber-related AI risk areas, and one cyber-related opportunity.

(For additional points of consideration and discussion, including your defence against AI-enabled cyber-attacks, harnessing the power of AI for cyber-defence, and ensuring that other AI tools are properly safeguarded, please contact an Evelyn Partners Advisor.)

1. Risk: Cyber-criminals are adopting AI, lowering the technical barrier to entry for cyber-crime and reducing the cost of finding and attacking victims. This can be done by:

  1. Improving the effectiveness of phishing emails by using generative AI to draft convincing phishing text
  2. Using automation to identify and exploit network vulnerabilities for initial access
  3. Using generative AI to write malicious code
  4. The proliferation of Ransomware as a Service (RaaS), whereby sophisticated threat groups lease attack instructions and infrastructure to less sophisticated groups (affiliates)

2. Risk: Organisations adopting AI need to consider how to protect themselves against cyber-risk and how to manage regulatory compliance issues, such as those arising from the EU AI Act. The threat to AI users is constantly changing, with new actors and attack techniques targeting adopters of the technology. For instance, the Magecart group targets chatbots to install keyloggers and steal data, including credit card details used to commit fraud.

3. Opportunity: Organisations can harness AI for improved cyber-defence and response. Regulations such as the General Data Protection Regulation (GDPR) require organisations to implement safeguards for systems and data that reflect the state of the art. For instance:

  1. Adopting machine learning to inspect, and alert on, a far greater range and volume of systems data than has previously been possible. ML can identify and halt suspicious or malicious behaviour in computer networks and systems (a minimal illustrative sketch follows this list).
  2. Using AI-enabled automation within investigation processes to speed up data collection, and incorporating large language model (LLM) functionality into forensic tools to aid the responder’s speed and judgement.
  3. Using AI-enabled automated tools to discover shadow IT and data, unusual storage locations and access permissions, and to undertake advanced testing, such as that required by the European Union’s Digital Operational Resilience Act (DORA).
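To illustrate the kind of pattern-spotting described in the first point above, the short sketch below trains a simple anomaly detector on a baseline of "normal" activity and flags events that deviate sharply from it. This is purely a hypothetical illustration: the library (scikit-learn), the model (IsolationForest) and the synthetic network-activity features are our assumptions, not a description of any specific product or of Evelyn Partners' tooling.

```python
# Hypothetical sketch: ML-based anomaly detection on network activity data.
# Assumptions (not from the article): scikit-learn's IsolationForest as the model,
# and synthetic "network flow" features (MB transferred, connections/hour, off-hours flag).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated baseline of normal, in-hours activity:
# columns = [MB transferred, connections per hour, off-hours flag (0/1)]
normal = np.column_stack([
    rng.normal(50, 10, 1000),   # typical data volumes
    rng.normal(20, 5, 1000),    # typical connection counts
    np.zeros(1000),             # activity during business hours
])

# A handful of suspicious events: large transfers, many connections, out of hours
suspicious = np.array([
    [900.0, 150.0, 1.0],
    [750.0, 120.0, 1.0],
])

# Fit the detector on the baseline; contamination is the assumed share of outliers
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for points the model flags as anomalous, 1 otherwise
flags = model.predict(np.vstack([normal[:5], suspicious]))
print(flags)  # expected: mostly 1s for the baseline rows, -1 for the suspicious rows
```

In practice, commercial detection and response platforms apply far richer models to telemetry from endpoints, networks and cloud services, but the principle is the same: learn what normal looks like, then alert on deviations from it.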

How can you protect your business from cyber-attacks?

There are practical ways to safeguard against AI threats. For example, Evelyn Partners regularly help clients to:

  • Review their cyber-security to ensure that fundamental cyber-protection, detection and response capabilities are in place across people, processes and technology. Getting the fundamentals right is important if you want to defend against sophisticated, AI-enabled threat actors.
  • Assist with the consideration, selection and implementation of AI-enabled technologies for advanced risk identification, protection, detection and response. There are many tools available, so it is key to select the right ones and ensure they meet specific requirements, including addressing issues of integration, compatibility and coverage.
  • Advise and support clients with proportionate safeguarding of AI tools and integrations. This includes assisting with the design and implementation of appropriate security architectures and providing expertise to ensure that the planned benefits of AI can be achieved without taking on avoidable risks.

If you are concerned by any of these issues, please contact the Evelyn Partners Cyber Advisory Team or your usual Evelyn Partners contact.

Important information

By necessity, this briefing can only provide a short overview and it is essential to seek professional advice before applying the contents of this article. This briefing does not constitute advice nor a recommendation relating to the acquisition or disposal of investments. Details correct at time of writing.