
September 15, 2023


Important Security Considerations for Embracing AI

As artificial intelligence (AI) continues to rapidly disrupt existing technology and industries of all types, it is important for businesses to consider the security risks before welcoming new AI tools and technology with open arms.

The pace at which businesses are adopting artificial intelligence (AI) is breathtaking. Recent advancements in generative AI (GenAI), a subset of AI capable of creating new content and predictive models, are transforming corporate business operations. However, the rapid acceleration and adoption of this technology presents unique challenges.

As AI adoption accelerates, businesses face complex challenges, including security, privacy, ethical considerations and regulatory compliance. For cybersecurity programs and chief information security officers (CISOs), the stakes are higher. The sophistication of GenAI introduces new cyber threat avenues, requiring CISOs to balance the benefits of this powerful technology against the need to protect against evolving threats.

Top 3 AI Security Challenges

1. Enabling Rapid Adoption of AI Technologies

The first challenge lies in enabling the rapid adoption of AI technologies within the organization. As AI promises significant improvements in productivity and operational efficiency, businesses are rapidly integrating these new tools into their workflows.

However, this swift adoption comes with the need to create new policies and update existing policies to account for the unique aspects of AI, manage resiliency in the face of potential AI disruptions, and address supply chain and third-party risks linked to these new technologies. Enabling an organization to adopt AI confidently and rapidly requires a clear strategy that balances the drive for innovation with prudent risk management.

2. Securing the Use of AI Across the Organization

The second challenge revolves around ensuring the secure use of AI technologies across various functions within the organization, including but not limited to security and IT tooling. Aspects like access control, data security and privacy take center stage in this challenge.

Ensuring the secure use of AI also includes enabling its adoption in areas like DevOps and product development, as well as in various support and administrative tools across the organization. The goal is to guarantee that all AI implementations, regardless of where they are used within the organization, adhere to robust security standards to mitigate potential threats and vulnerabilities.

There are three primary pillars of securing AI use:

  • Adding AI use cases to your security program: Discovering ways to securely enable newly emergent business use cases for AI, e.g., “How do we safely allow marketing to use ChatGPT?”
  • Adding AI technology to your tool stacks: Leveraging AI to improve your tool sets and capabilities, e.g., AI-powered threat intelligence.
  • Treating models as an attack surface: Enabling developers to confidently build new AI-driven products while defending against model-specific threats, e.g., prompt injection and AI model theft (see the sketch after this list).
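
To make that last pillar concrete, here is a minimal sketch of one illustrative control: screening untrusted user input for common prompt-injection phrasing before it is combined with trusted system instructions and passed to a model. The function names and phrase list are hypothetical examples for illustration, not a complete or production-ready defense.

    import re

    # Illustrative (and deliberately incomplete) list of phrasings that often
    # appear in prompt-injection attempts against instruction-following models.
    SUSPICIOUS_PATTERNS = [
        r"ignore (all|any|previous) (instructions|rules)",
        r"disregard (the|your) (system prompt|instructions)",
        r"reveal (the|your) (system prompt|hidden instructions)",
    ]

    def screen_user_input(user_text):
        """Return True if the input looks safe to forward to a model."""
        lowered = user_text.lower()
        return not any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

    def build_prompt(system_instructions, user_text):
        """Keep trusted instructions and untrusted input in separate messages
        rather than concatenating them into a single string."""
        if not screen_user_input(user_text):
            raise ValueError("Potential prompt injection detected; request held for review.")
        return [
            {"role": "system", "content": system_instructions},
            {"role": "user", "content": user_text},
        ]

    if __name__ == "__main__":
        messages = build_prompt(
            "You are a marketing copy assistant. Never disclose internal data.",
            "Draft a tagline for our fall campaign.",
        )
        print(messages)

Pattern matching alone will not stop a determined attacker, but keeping trusted instructions structurally separate from untrusted input, and blocking obviously malicious input early, closes off the easiest avenues for prompt injection.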

3. Defending Against AI-Driven Attacks

Lastly, the defense against AI-driven attacks represents the third major challenge. The sophistication and adaptive nature of AI make it a potent tool for cybercriminals, leading to the emergence of AI-powered threats. There are two primary categories of such attacks — inbound and outbound AI attacks.

Inbound AI attacks

Inbound AI attacks occur when your organization is the target, with AI-generated or accelerated attacks directed toward your network, assets or personnel. An example of this is intelligent malware that adapts and evolves to infiltrate defenses.

Outbound AI attacks

Outbound AI attacks are those in which your organization is the subject rather than the direct target: AI-generated content or attacks concerning your company, brand or people are created and distributed outside your network. Examples include the creation of synthetic media for purposes like extortion or disinformation campaigns.

The difference between these categories lies less in the impact and more in the approach. Many outbound AI scenarios cannot currently be addressed solely through technical controls, indicating the need for comprehensive strategies encompassing policy, awareness and technology controls to effectively counter these threats.

GenAI Security Best Practices

Here are some tips to help your organization safely incorporate GenAI into operations:

  • Conduct a thorough risk assessment to identify potential regulatory compliance risks associated with GenAI use. This should include an analysis of any applicable regulations or standards, such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), or the Payment Card Industry Data Security Standard (PCI DSS).
  • Conduct a Business Impact Analysis (BIA) to identify the potential risks and the value that GenAI integration brings to the business.
  • Develop and implement appropriate policies and procedures for GenAI use, including an acceptable use policy.
  • Implement proper access controls to limit access to GenAI tools and their underlying data to only those who have a legitimate need. This should include the use of multi-factor authentication, role-based access controls, and least privilege access principles.
  • Regularly monitor GenAI activity and usage to identify any suspicious or unauthorized behavior using security information and event management (SIEM) tools, log analysis, and real-time monitoring.
  • Conduct regular vulnerability assessments and penetration testing to identify and address any potential security weaknesses in GenAI systems.
  • Implement data security controls, including data loss prevention and cloud access security brokers, to monitor and prevent the unauthorized capture or transfer of sensitive data (a simple illustration follows this list).
  • Ensure that all GenAI-related data is properly encrypted in transit and at rest, and that proper encryption key management practices are in place.
  • Use secure coding practices, such as input validation, error handling, and secure communication protocols, and embed GenAI development controls in your software development lifecycle (SDLC).
  • Limit access to GenAI APIs, ensuring only authorized users and systems have access, and implement authentication and access control mechanisms.
  • Ensure vendor security by confirming that third-party GenAI providers have appropriate security controls and protocols in place to protect your data.
  • Train users to use GenAI securely, such as avoiding sharing sensitive information or clicking on suspicious links.
  • Include GenAI use cases in your incident response plan, outlining the steps to take in the event of a security incident, and regularly review and update the plan as needed.
  • Include GenAI use cases in your business continuity and disaster recovery plans, develop redundancy and failover plans, and regularly review and update the plans as needed.
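
To illustrate the data loss prevention and monitoring items above, the following is a minimal sketch of a pre-submission check that redacts a few common sensitive patterns from a prompt before it leaves the organization and emits a structured log record that a SIEM could ingest. The patterns, field names and function names are illustrative assumptions, not a production DLP ruleset or any specific vendor's API.

    import json
    import logging
    import re
    from datetime import datetime, timezone

    # Illustrative patterns only; a real DLP policy would be far broader.
    REDACTION_RULES = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("genai_usage")

    def redact_prompt(prompt):
        """Replace sensitive matches with placeholders and report what was found."""
        findings = []
        for label, pattern in REDACTION_RULES.items():
            if pattern.search(prompt):
                findings.append(label)
                prompt = pattern.sub("[REDACTED-" + label + "]", prompt)
        return prompt, findings

    def submit_to_genai(user, prompt):
        """Redact the prompt, emit a SIEM-friendly log record, then hand off
        to the organization's approved GenAI service (handoff omitted here)."""
        safe_prompt, findings = redact_prompt(prompt)
        log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "redactions": findings,
            "prompt_chars": len(safe_prompt),
        }))
        return safe_prompt

    if __name__ == "__main__":
        print(submit_to_genai("jdoe", "Summarize the contract for jane.doe@example.com"))

In practice, controls like this are typically enforced at a gateway or proxy in front of approved GenAI services rather than in each application, so policy updates apply everywhere at once.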

Balancing AI’s Challenges and Opportunities

In light of the security challenges outlined above, businesses need to recalibrate their approach to policy creation, risk management and cybersecurity controls. For instance, a robust AI policy could be designed to not only comply with regulations but also to anticipate and address potential ethical dilemmas, like those related to privacy and data use.

In this rapidly changing landscape, having a strategic, proactive and informed approach is more critical than ever. The journey ahead is challenging but also full of opportunities for those prepared to navigate it.

Finding a trusted partner with robust cybersecurity expertise, such as CDW, can help you establish policies, rules and training to help your organization unlock the transformative potential of AI while effectively mitigating the associated risks.

Looking ahead, we aim to examine the ethical, strategic and operational challenges and opportunities these trends could bring.

Story by Walt Powell

Walt Powell

Lead Field CISO
Walt Powell is the Lead Field CISO at CDW, specializing in providing executive guidance around risk, governance, compliance and IT security strategies.