MANAGING RISK IN DEPLOYING GENERATIVE AI APPLICATIONS

  • Nebojsa Cukilo and Patrick Mihalcea
  • Published: 09 April 2024

Organizations looking to introduce generative AI (GenAI) use cases must strike a balance between risk and governance without hindering innovation.

To embrace progress, organizations should define governance levels that work for them rather than pursuing a ‘one size fits all’ approach. No universal solution exists; instead, a balanced framework is needed, one that considers GenAI use cases in the context of factors such as end users, risk management practices, mission objectives, and constraints within the AI ecosystem. We will explore how these elements interact in the pursuit of responsible and secure GenAI deployment.


Risk considerations

Below are some of the AI-specific risks that organizations need to assess when tapping into GenAI functionality; a lightweight mitigation sketch follows the list:

  • Operational dependency
  • Hallucinations
  • Harmful content and bias
  • Individual privacy risk and data loss
  • Prompt injections
  • IP infringement. 
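Several of these risks can be partially addressed at the application layer even before a formal governance tier is in place. The sketch below is a minimal illustration, assuming a hypothetical generate() callable that wraps whatever model the organization uses; the keyword patterns and the email regex are placeholder heuristics, not a complete defence against prompt injection or data loss.

```python
import re

# Placeholder heuristics only: real deployments should layer model-side safety
# controls, allow-lists, and human review on top of simple checks like these.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard the system prompt",
    r"reveal your (system|hidden) prompt",
]

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def looks_like_injection(user_input: str) -> bool:
    """Flag input that matches simple prompt-injection phrasings."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)


def redact_pii(text: str) -> str:
    """Mask obvious personal data (here, email addresses) before it reaches the model."""
    return EMAIL_PATTERN.sub("[REDACTED_EMAIL]", text)


def guarded_generate(user_input: str, generate) -> str:
    """Screen the prompt, redact obvious PII, then call the (hypothetical) model."""
    if looks_like_injection(user_input):
        return "Request declined: the input appears to override system instructions."
    return generate(redact_pii(user_input))
```

Checks of this kind complement, rather than replace, model-provider safety settings, access controls, and human review.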

Selecting the Right Level of Governance

Organizations looking to implement GenAI should develop a governance framework by grouping scenarios based on the sophistication of cybersecurity risk management needed for successful implementations.

These groupings – or tiers – essentially expand upon the National Institute of Standards and Technology (NIST) Framework for Improving Critical Infrastructure Cybersecurity. Projects align to a tier based on complexity, the resources required for governance, risk tolerance, and the end users.

Organizations should implement GenAI transformations sequentially, ramping up their governance capabilities to match the increasing complexity of implementations without overstepping their abilities and experiencing setbacks. The more complex the scenario and the higher the risks to the organization, the higher the governance tier required (a simple tier-selection heuristic is sketched after the tier descriptions below).


Tier Progression Based on Project Factors:

Tier 1: Informed

  • GenAI applications are used in a controlled environment and the organization remains mindful of cybersecurity practices
  • An informal cybersecurity approach is taken, with practices implemented where possible or necessary to meet the minimum requirements of a proof of concept (POC) product
  • Users are limited to developers, testers, and a small group of internal users, who understand that applications at this tier are not relied upon heavily for business processes due to the informal attention to cybersecurity
  • Project requirements are simple, the cost of erroneous generated content is low, and users are highly encouraged to scrutinize outputs.

Tier 2: Standardized

  • Generative AI applications in this tier follow a standardized approach to cybersecurity
  • More emphasis is placed on risk management practices and more resources are dedicated to the secure handling of data; audits are scheduled to ensure the application meets the requirements and standards set by the organization
  • Users are internal only; they rely on the tool for support but remain diligent in identifying bias, hallucinations, or copyrighted material.

Tier 3: Proactive

  • Applications in this tier pose the highest risk to an organization due to their high complexity, their role in processing confidential data, and their exposure to many more users, including malicious actors
  • Projects in this tier are most commonly available to the public or the organization’s customers, which requires a high degree of attention to security and data governance
  • Organizations operating applications in this tier are proactive, agile and adaptive to meet rapidly changing standards and regulations in the AI and data security environment.
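To make the tier mapping concrete, the following sketch encodes one possible scoring heuristic. The factor names, scales, and thresholds are illustrative assumptions for this article, not part of the NIST framework or a prescribed methodology.

```python
from dataclasses import dataclass


@dataclass
class ProjectProfile:
    """Illustrative project factors, each scored from 1 (low) to 3 (high)."""
    complexity: int        # technical and integration complexity
    data_sensitivity: int  # 1 = synthetic or test data, 3 = confidential data
    user_exposure: int     # 1 = developers/testers, 2 = internal staff, 3 = public


def governance_tier(profile: ProjectProfile) -> int:
    """Map project factors to a tier: 1 = Informed, 2 = Standardized, 3 = Proactive.

    The rule is deliberately conservative: public exposure or confidential data
    always demands the Proactive tier, regardless of complexity.
    """
    if profile.user_exposure == 3 or profile.data_sensitivity == 3:
        return 3
    if profile.user_exposure == 2 or profile.complexity >= 2:
        return 2
    return 1


# Example: an internal assistant handling moderately sensitive data lands in Tier 2.
print(governance_tier(ProjectProfile(complexity=2, data_sensitivity=2, user_exposure=2)))
```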

Putting it into Practice  

Let us consider an organization seeking to implement a customer service chatbot powered by GenAI. During the preliminary stages of development, the focus should be on developing a model that meets the minimum requirements of the proof of concept goals:

  • Basic attention is paid to ethical standards and principles in preparation for future stages
  • Preliminary data handling policies and a secure database architecture should be implemented
  • Transparency is important at this stage because developers need to evaluate the model’s performance against desired outputs (see the logging sketch after this list)
  • Risk assessment is minimal, as the project is still being used in a controlled environment
  • User-centricity and bias mitigation are not yet vital considerations.
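One simple way to support that developer-facing transparency during the POC stage is to record every prompt and response for later review. This is a minimal sketch, assuming a hypothetical chatbot_reply() callable that stands in for whatever model call the team is prototyping.

```python
import json
import logging
from datetime import datetime, timezone

# Write prompt/response pairs to a local audit log so developers can review
# model behaviour during the proof-of-concept stage.
logging.basicConfig(filename="poc_chatbot_audit.log", level=logging.INFO)


def logged_reply(user_prompt: str, chatbot_reply) -> str:
    """Call the (hypothetical) chatbot and record the exchange for developer review."""
    response = chatbot_reply(user_prompt)
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": user_prompt,
        "response": response,
    }))
    return response
```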

Progressing to Tier 2 may see internal employees given access to the GenAI tool to assist with employee requests. Employees will need the assistant to maintain reliable security protocols for handling data and potentially sensitive information, hence the greater emphasis on data governance and user-centricity.

Now that the tool is being used with higher stakes, greater transparency should be required and users need to be mindful of the associated risks. Risk assessments and audits should be routinely conducted to ensure the chatbot is outputting correct responses and has access to the most up-to-date data.
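Routine audits of this kind can be partially automated with a small regression suite that replays known questions and checks each answer for the facts it is expected to contain. The golden set below and the chatbot_reply() callable are placeholders, not a prescribed test suite.

```python
# Minimal audit harness: replay known questions and report answers that are
# missing the facts the organization expects them to contain.
GOLDEN_SET = [
    {"question": "What are your support hours?", "must_contain": ["9am", "5pm"]},
    {"question": "How do I reset my password?", "must_contain": ["reset link"]},
]


def run_audit(chatbot_reply) -> list[dict]:
    """Return the golden-set questions whose answers miss expected facts."""
    failures = []
    for case in GOLDEN_SET:
        answer = chatbot_reply(case["question"]).lower()
        missing = [fact for fact in case["must_contain"] if fact.lower() not in answer]
        if missing:
            failures.append({"question": case["question"], "missing": missing})
    return failures
```

Running such a suite on a schedule, and whenever the underlying data or model changes, gives the audit trail a concrete, repeatable artefact.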

Finally, once more resources are allocated to bolstering cybersecurity, the customer service chatbot can be made available to the public or the customer base. Because the chatbot now poses a risk to quality of service and to the organization’s reputation, it should be subject to a robust governance framework that limits risk and ensures effective handling of any incidents.
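Before public release, simple abuse controls such as rate limiting help contain both cost and the impact of malicious use. The sketch below is a minimal in-memory example with illustrative limits; a production deployment would typically enforce this at shared infrastructure such as an API gateway.

```python
import time
from collections import defaultdict, deque

# Sliding-window rate limiter for a public-facing chatbot.
# The limits below are illustrative numbers, not recommendations.
MAX_REQUESTS = 20      # requests allowed per user...
WINDOW_SECONDS = 60.0  # ...within this many seconds

_request_history: dict[str, deque] = defaultdict(deque)


def allow_request(user_id: str) -> bool:
    """Return True if this user is within the request budget, else False."""
    now = time.monotonic()
    history = _request_history[user_id]
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()          # drop requests that fell outside the window
    if len(history) >= MAX_REQUESTS:
        return False               # budget exhausted: reject or queue the request
    history.append(now)
    return True
```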

The successful integration of GenAI applications hinges on finding the delicate equilibrium between risk and governance, not as obstacles but as enablers of progress. It is imperative to understand that a one-size-fits-all solution is unrealistic. 

We recommend that organizations looking to implement technological transformations do so in sequential order based on complexity, resources needed, risk tolerance, and users. GenAI adoption should focus first on low-complexity, high-value tools that align with the Tier 1 governance level, before approaching public-facing, high-complexity projects that require Tier 2 or 3 alignment.

At Capco, we're dedicated to helping businesses harness the power of innovative technology tailored to their needs while ensuring the utmost in security. Your journey into the future of AI starts with a conversation. Our team of professionals is ready to guide you through this transformative journey and leverage our accelerators to revolutionize your business.

 