April 21, 2023
3 minute read

Managing AI Security Threats

Artificial Intelligence (AI) has grown significantly over the past few years, revolutionizing various industries and reshaping the way businesses operate. From generative AI breakthroughs to the rise of autonomous AI agents, these advancements have enabled more sophisticated applications and opened new avenues for innovation. Generative AI models, such as GPT-4, now produce strikingly realistic text, images, and audio. Newer models like Google's Gemini Pro have outperformed their predecessors in various benchmarks, showcasing superior capabilities in reasoning, coding, and understanding complex queries.

Leading organizations leverage such advancements in generative AI for product development, risk management, and operational optimization. The technology is increasingly used to create new revenue streams rather than just reducing costs, demonstrating its growing impact on business innovation and efficiency. The development of autonomous AI agents that can perform tasks independently represents a significant shift toward more advanced AI applications. These agents can handle complex tasks such as online shopping and research assistance.

Progress in AI, ML, and LLMs

Significant progress has been made in the development of large language models (LLMs) and other AI technologies. These models, trained on massive datasets of text and code, can perform a variety of tasks including natural language processing, machine translation, and code generation. This progress highlights the growing potential of AI to tackle increasingly complex challenges and deliver innovative solutions across various sectors.

Some of the key players involved in the development of LLMs and other AI technologies include:

  • OpenAI, developer of the GPT series, including GPT-4
  • Google DeepMind, developer of the Gemini family of models
  • Anthropic, developer of the Claude models
  • Meta, developer of the open-weight Llama models
  • Microsoft, a major OpenAI partner that embeds LLMs across its Copilot products

Potential challenges for cybersecurity organizations

The rapid advancements in AI technology have not only brought about significant benefits but also introduced a new array of cybersecurity threats. As AI capabilities expand, so does the scope of potential challenges that cybersecurity organizations must address. This section delves into the emerging threats posed by AI, providing case studies and examples to illustrate the real-world impact on cybersecurity.

AI advancements have created new vulnerabilities that cybercriminals are quick to exploit. The sophistication and capabilities of AI-driven threats are evolving, necessitating more robust and adaptive security measures.

Enhanced Phishing Attacks

According to a recent report by cybersecurity firm SlashNext, the number of phishing email attacks has increased by 856% over the last year. Large Language Models (LLMs) like GPT-4 have improved the sophistication of phishing emails, making them more persuasive and harder to detect. These models can generate highly convincing text, which is increasingly used in spear-phishing campaigns and Business Email Compromise (BEC) attacks.

Today, the use of LLMs in phishing attacks presents a significant challenge for cybersecurity. The ability of AI to produce highly personalized and realistic messages makes these phishing attempts more successful at bypassing traditional security measures and deceiving recipients.

Deepfakes and Voice-fakes

The technology for creating deepfakes and voice-fakes has advanced, leading to more realistic and dangerous applications. These are used in fraud, disinformation campaigns, and even bypassing voice authentication systems in financial institutions.

Today, deepfakes and voice-fakes are becoming more prevalent, posing significant challenges to cybersecurity as they are used to commit fraud, spread disinformation, and defeat voice-based security systems. For instance, Kaushik Hatti, Chief Information Security Officer (CISO) at Pinochle.AI, described a recent incident in which cybercriminals used deepfake technology to impersonate a company’s executive on a video call, resulting in a major financial loss. Incidents like this highlight the need for enhanced verification processes and security measures to detect and counter these sophisticated threats.
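To make "enhanced verification processes" concrete, the sketch below shows one common pattern: an out-of-band challenge for high-risk requests, such as a wire transfer asked for over a video call. This is a minimal illustration, not a reference to any specific product; the function names, code format, and delivery channel are assumptions. The underlying idea is that approval must depend on a channel a deepfake cannot impersonate.

```python
import hmac
import hashlib
import secrets

# Hypothetical sketch: out-of-band verification for high-risk requests.
# A request received over a spoofable channel (video call, email, chat) is
# approved only after the requester echoes a one-time code delivered through
# a separate, pre-registered channel (e.g., an authenticator app).

SECRET_KEY = secrets.token_bytes(32)     # per-deployment key (assumption)
pending_challenges: dict[str, str] = {}  # request_id -> expected HMAC tag

def issue_challenge(request_id: str) -> str:
    """Generate a one-time code bound to this specific request."""
    code = secrets.token_hex(4)  # short one-time code (format is illustrative)
    tag = hmac.new(SECRET_KEY, f"{request_id}:{code}".encode(), hashlib.sha256)
    pending_challenges[request_id] = tag.hexdigest()
    return code  # deliver over the out-of-band channel, never the video call

def verify_response(request_id: str, code: str) -> bool:
    """Approve the request only if the echoed code matches the challenge."""
    expected = pending_challenges.pop(request_id, None)
    if expected is None:
        return False
    tag = hmac.new(SECRET_KEY, f"{request_id}:{code}".encode(), hashlib.sha256)
    return hmac.compare_digest(expected, tag.hexdigest())
```

Pairing a challenge like this with a pre-agreed callback procedure gives employees a simple, teachable rule: no high-value action proceeds on the strength of a voice or a face alone.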

Social Engineering with Chatbots

The use of AI chatbots in social engineering attacks is increasing, presenting a significant challenge to cybersecurity. These chatbots can interact with victims in a manner that closely mimics human behavior, making it difficult to identify the threat.

AI is significantly enhancing the capabilities of threat actors in social engineering. Generative AI (GenAI) can already create highly convincing interactions with victims, including crafting lure documents free from the translation, spelling, and grammatical errors that typically reveal phishing attempts. These capabilities are expected to grow over the next two years as AI models evolve and their adoption spreads.

While AI advancements are beneficial, they pose serious challenges for cybersecurity. Enhanced phishing, deepfakes and voice-fakes, and chatbot-driven social engineering are all examples of AI being weaponized, and cybersecurity practices must advance to counter them.

Recommendations for cybersecurity organizations

To address the challenges posed by LLMs and other AI technologies, cybersecurity organizations should:

  • Invest in research and development of new security technologies that can defend against these threats.
  • Partner with other organizations, such as academic institutions and government agencies, to share information and best practices.
  • Develop and implement training programs that keep security professionals up to date on the latest threats.
  • Implement advanced email filtering systems and educate employees on recognizing phishing attempts (see the sketch below).
  • Adopt advanced verification techniques, robust security protocols, and comprehensive security strategies to combat deepfakes and voice-fakes.
  • Enhance security protocols and employee training, and implement advanced monitoring and verification processes, to counter chatbot-driven social engineering attacks.

By taking these steps, cybersecurity organizations can better ensure they are prepared to defend against the evolving threat landscape.
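The email-filtering recommendation above lends itself to a layered scoring approach. The sketch below is a minimal, hypothetical indicator scorer; the patterns, weights, and threshold are illustrative assumptions, and a production filter would combine far more signals, such as SPF/DKIM/DMARC results, sender reputation, and trained classifiers.

```python
import re

# Hypothetical sketch of a layered phishing-indicator scorer.
# Patterns, weights, and the threshold are illustrative, not tuned values.

INDICATORS = [
    (re.compile(r"urgent|immediately|within 24 hours", re.I), 2.0),       # urgency pressure
    (re.compile(r"verify your (account|password|identity)", re.I), 2.5),  # credential lure
    (re.compile(r"wire transfer|gift card|payment update", re.I), 3.0),   # BEC-style request
]

def score_message(subject: str, body: str, sender_domain: str,
                  trusted_domains: set[str]) -> float:
    """Return a risk score; higher means more phishing-like."""
    text = f"{subject}\n{body}"
    score = sum(weight for pattern, weight in INDICATORS if pattern.search(text))
    if sender_domain not in trusted_domains:
        score += 1.5  # unfamiliar sender adds risk
    return score

def should_quarantine(score: float, threshold: float = 4.0) -> bool:
    """Quarantine messages whose combined score crosses the threshold."""
    return score >= threshold
```

Because LLM-generated phishing is fluent and free of the spelling mistakes that older filters keyed on, content heuristics like these should be treated as one weak signal among many rather than a standalone defense.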

Updating Risk Assessments For The World of AI & LLMs

As AI and Large Language Models (LLMs) evolve, cybersecurity organizations must continuously update their risk assessments to address new challenges and threats. Recent regulatory changes are guiding these assessments, ensuring that AI development and deployment remain safe, secure, and trustworthy.

These regulations not only aim to mitigate risks associated with AI but also provide a framework for organizations to enhance their Governance, Risk Management, and Compliance (GRC) practices. Organizations can better protect their data, maintain customer trust, and avoid potential legal repercussions by staying compliant with these evolving regulations.

Updated [July 26, 2024]:

Key considerations for regulatory updates since 2023

United States

  • Executive Order on Safe, Secure, and Trustworthy AI (October 2023): Sets a national standard for AI safety, mandating stricter governance to prevent misuse and promote AI security.
  • NIST AI Risk Management Framework (January 2023): Provides guidelines emphasizing trustworthiness and accountability to enhance AI reliability and safety. To dive deeper into this framework, read our breakdown here.
  • State-Level Initiatives: Individual states are also advancing their own AI legislation in parallel with federal efforts.

European Union

  • AI Act (Provisional agreement reached in 2023): Categorizes AI systems by risk, imposing stricter requirements on high-risk applications and banning social scoring and manipulative AI.
  • Governance and Enforcement: Establishes an AI Office within the European Commission for consistent enforcement and oversight across member states.

United Kingdom

  • Pro-Innovation Regulatory Framework: Balances innovation with risk management, supporting AI research and creating an AI Safety Institute.
  • AI Safety Summit (November 2023): Promotes international collaboration on AI safety.
  • Labour Government’s AI Regulation Plans:
    • Targeted AI Regulation: Focuses on regulating powerful AI models and banning explicit deepfakes.
    • Regulatory Innovation Office: Assists regulators in updating AI regulations and streamlining approval processes.
    • Support for Data Centers: Speeds up approval processes by designating them as Nationally Significant Infrastructure Projects.
    • National Data Library: Centralizes data resources to support public services.
    • Long-term R&D Funding: Commits to funding research and industry partnerships.

China

  • Algorithm Provisions (Effective since March 2022): Enhances transparency and accountability, requiring companies to disclose algorithmic principles and ensure fair practices.
  • Deep Synthesis Provisions (Effective since January 2023): Targets deepfake technology, requiring safety assessments and imposing strict content standards to prevent misinformation.

Brazil

  • Draft AI Law (In development): Focuses on risk classification, data subject rights, and governance measures for AI systems, aligning closely with the EU's AI Act.

Global Implications

Across jurisdictions, these regulations converge on a common expectation: organizations must understand and manage their exposure to AI-driven threats. To assess an organization's security posture as it relates to LLMs and AI, key questions to consider in security and risk assessments include:

  • What types of sensitive information does the organization collect, store, and transmit, and how could this information be targeted by LLMs and AI-based attacks?
  • How are employees trained to identify and respond to potential threats from LLMs and AI, such as phishing attacks or deepfakes?
  • What technical controls are in place to detect and prevent LLM and AI-based attacks, such as network monitoring and intrusion detection systems?
  • Are there policies and procedures in place for the use of LLMs and AI within the organization, including access controls and monitoring requirements?
  • Are external vendors and partners subject to the same security standards as internal teams, and are they adequately vetted before being granted access to sensitive data?

By including these questions in security and risk assessments, organizations can gain a better understanding of the potential risks posed by LLMs and AI, as well as identify areas where additional controls may be needed to protect against these threats. It is important to consider these risks seriously, as LLMs and AI are increasingly being used by cybercriminals to carry out sophisticated attacks that can be hard to detect or prevent.
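One way to operationalize these questions is to encode them as a structured checklist that yields a simple gap report for each assessment cycle. The sketch below is a minimal illustration; the categories mirror the questions in this section, while the field names and report format are assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch: AI/LLM risk-assessment questions encoded as a
# checklist that produces a gap report. Categories mirror the questions above.

@dataclass
class AssessmentItem:
    category: str
    question: str
    satisfied: bool = False
    notes: str = ""

def gap_report(items: list[AssessmentItem]) -> list[AssessmentItem]:
    """Return the items that still represent open risks."""
    return [item for item in items if not item.satisfied]

checklist = [
    AssessmentItem("Data exposure",
                   "Is sensitive data inventoried, and is its exposure to LLM/AI-based attacks assessed?"),
    AssessmentItem("Training",
                   "Are employees trained to recognize AI-generated phishing and deepfakes?"),
    AssessmentItem("Technical controls",
                   "Are network monitoring and intrusion detection tuned for AI-driven attack patterns?"),
    AssessmentItem("Governance",
                   "Do policies cover internal LLM use, access controls, and monitoring requirements?"),
    AssessmentItem("Third parties",
                   "Are vendors vetted and held to the same security standards as internal teams?"),
]

for item in gap_report(checklist):
    print(f"[GAP] {item.category}: {item.question}")
```

Tracking the same checklist across assessment cycles also gives leadership a simple trend line for how AI-specific risk posture is improving over time.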

Pukar C. Hamal
Chief Executive Officer