
Human-in-the-Loop – A Business Leaders Guide to Responsible AI
A practical guide for business leaders on implementing Human-in-the-Loop (HITL) frameworks for responsible AI governance, risk reduction, compliance, and buildi...
KPMG’s AI Risk and Controls Guide provides organizations with a structured, ethical approach to managing AI risks, supporting responsible deployment and compliance with global standards.
This first stat may be from last year, but it couldn’t be more relevant today. According to KPMG’s 2024 U.S. CEO Outlook, a striking 68% of CEOs identified AI as a top investment priority. They’re counting on it to boost efficiency, upskill their workforce, and fuel innovation across their organizations.
That’s a huge vote of confidence in AI — but it also raises an important question: with so much at stake, how do organizations ensure they are using AI responsibly and ethically?
This is where the KPMG AI Risk and Controls Guide comes in. It offers a clear, practical framework to help businesses embrace AI’s potential while managing the real risks it brings. In today’s landscape, building trustworthy AI isn’t just good practice — it’s a business imperative.
Artificial Intelligence (AI) is revolutionizing industries, unlocking new levels of efficiency, innovation, and competitiveness. Yet with this transformation comes a distinct set of risks and ethical challenges that organizations must manage carefully to maintain trust and ensure responsible use. The KPMG AI Risk and Controls Guide is designed to support organizations in navigating these complexities, providing a practical, structured, and values-driven approach to AI governance.
Aligned with KPMG’s Trusted AI Framework, this guide helps businesses develop and deploy AI solutions that are ethical, human-centric, and compliant with global regulatory standards. It is organized around 10 foundational pillars, each addressing a critical aspect of AI risk management:
By focusing on these pillars, organizations can embed ethical principles into every phase of the AI lifecycle—from strategy and development to deployment and monitoring. This guide not only enhances risk resilience but also fosters innovation that is sustainable, trustworthy, and aligned with societal expectations.
Whether you are a risk professional, executive leader, data scientist, or legal advisor, this guide provides essential tools and insights to help you responsibly harness the power of AI.
The KPMG AI Risk and Controls Guide serves as a specialized resource to help organizations manage the specific risks linked to artificial intelligence (AI). It acknowledges that while AI offers significant potential, its complexities and ethical concerns require a focused approach to risk management. The guide provides a structured framework to tackle these challenges in a responsible and effective manner.
The guide is not intended to replace current systems but is designed to complement existing risk management processes. Its main goal is to incorporate AI-specific considerations into an organization’s governance structures, ensuring smooth alignment with current operational practices. This approach allows organizations to strengthen their risk management capabilities without needing to completely redesign their frameworks.
The guide is built on KPMG’s Trusted AI framework, which promotes a values-driven and human-centered approach to AI. It integrates principles from widely respected standards, including ISO 42001, the NIST AI Risk Management Framework, and the EU AI Act. This ensures the guide is both practical and aligned with globally recognized best practices and regulatory requirements for AI governance.
The guide offers actionable insights and practical examples tailored to address AI-related risks. It encourages organizations to adapt these examples to their specific contexts, considering variables like whether the AI systems are developed in-house or by vendors, as well as the types of data and techniques used. This adaptability ensures the guide remains relevant for various industries and AI applications.
The guide focuses on enabling organizations to deploy AI technologies in a safe, ethical, and transparent manner. By addressing the technical, operational, and ethical aspects of AI risks, it helps organizations build trust among stakeholders while leveraging AI’s transformative capabilities.
The guide acts as a resource to ensure AI systems align with business objectives while mitigating potential risks. It supports innovation in a way that prioritizes accountability and responsibility.
The KPMG AI Governance Guide is designed for professionals managing AI implementation and ensuring it is deployed safely, ethically, and effectively. It applies to teams across various areas within organizations, including:
C-suite executives and senior leaders, such as CEOs, CIOs, and CTOs, will find this guide helpful for managing AI as a strategic priority. According to KPMG’s 2024 US CEO Outlook, 68% of CEOs consider AI a key investment area. This guide enables leadership to align AI strategies with organizational objectives while addressing associated risks.
Software engineers, data scientists, and others responsible for creating and deploying AI solutions can use the guide to incorporate ethical principles and robust controls directly into their systems. It focuses on adapting risk management practices to the specific architecture and data flows of AI models.
The guide is adaptable for businesses developing AI systems in-house, sourcing them from vendors, or using proprietary datasets. It is especially relevant for industries such as finance, healthcare, and technology, where advanced AI applications and sensitive data are critical to operations.
Deploying AI without a clear governance framework can lead to financial, regulatory, and reputational risks. The KPMG guide works with existing processes to provide a structured, ethical approach to managing AI. It promotes accountability, transparency, and ethical practices, helping organizations use AI responsibly while unlocking its potential.
Organizations should start by linking AI-specific risks to their current risk taxonomy. A risk taxonomy is a structured framework used to identify, organize, and address potential vulnerabilities. Since AI introduces unique challenges, traditional taxonomies need to expand to include AI-specific factors. These factors might involve data flow accuracy, the logic behind algorithms, and the reliability of data sources. By doing this, AI risks become part of the organization’s broader risk management efforts rather than being treated separately.
The guide points out the need to assess the entire lifecycle of AI systems. Important areas to examine include where data originates, how it moves through processes, and the foundational logic of the AI model. Taking this broad view helps you pinpoint where vulnerabilities may occur during the development and use of AI.
AI systems differ based on their purpose, development methods, and the type of data they use. Whether a model is created in-house or obtained from an external provider greatly affects the risks involved. Similarly, the kind of data—whether proprietary, public, or sensitive—along with the techniques used to build the AI, requires customized risk management strategies.
The guide suggests adapting control measures to match the specific needs of your AI systems. For example, if you rely on proprietary data, you may need stricter access controls. On the other hand, using an AI system from a vendor might call for in-depth third-party risk assessments. By tailoring these controls, you can address the specific challenges of your AI systems more effectively.
The guide recommends incorporating risk management practices throughout every stage of the AI lifecycle. This includes planning for risks during the design phase, setting up strong monitoring systems during deployment, and regularly updating risk evaluations as the AI system evolves. By addressing risks at each step, you can reduce vulnerabilities and ensure that your AI systems are both ethical and reliable.
Taking the initial step of aligning AI risks with your existing risk taxonomy and customizing controls based on your needs helps establish a solid foundation for trustworthy AI. These efforts enable organizations to systematically identify, evaluate, and manage risks, building a strong framework for AI governance.
The KPMG Trusted AI Framework is built on ten key pillars that address the ethical, technical, and operational challenges of artificial intelligence. These pillars guide organizations in designing, developing, and deploying AI systems responsibly, ensuring trust and accountability throughout the AI lifecycle.
Human oversight and responsibility should be part of every stage of the AI lifecycle. This means defining who is responsible for managing AI risks, ensuring compliance with laws and regulations, and maintaining the ability to intervene, override, or reverse AI decisions if needed.
AI systems should aim to reduce or eliminate bias that could negatively impact individuals, communities, or groups. This involves carefully examining data to ensure it represents diverse populations, applying fairness measures during development, and continuously monitoring outcomes to promote equitable treatment.
Transparency requires openly sharing how AI systems work and why they make specific decisions. This includes documenting system limitations, performance results, and testing methods. Users should be notified when their data is being collected, AI-generated content should be clearly labeled, and sensitive applications like biometric categorization must provide clear user notifications.
AI systems must provide understandable reasons for their decisions. To achieve this, organizations should document datasets, algorithms, and performance metrics in detail, enabling stakeholders to analyze and reproduce results effectively.
The quality and reliability of data during its entire lifecycle—collection, labeling, storage, and analysis—are essential. Controls should be in place to address risks like data corruption or bias. Regularly checking data quality and performing regression tests during system updates helps maintain the accuracy and reliability of AI systems.
AI solutions must follow privacy and data protection laws. Organizations need to handle data subject requests properly, conduct privacy impact assessments, and use advanced methods like differential privacy to balance data usability with protecting individuals’ privacy.
AI systems should perform consistently according to their intended purpose and required accuracy. This requires thorough testing, mechanisms to detect anomalies, and continuous feedback loops to validate system outputs.
Safety measures protect AI systems from causing harm to individuals, businesses, or property. These measures include designing fail-safes, monitoring for issues like data poisoning or prompt injection attacks, and ensuring systems align with ethical and operational standards.
Strong security practices are necessary to protect AI systems from threats and malicious activities. Organizations should conduct regular audits, perform vulnerability assessments, and use encryption to safeguard sensitive data.
AI systems should be designed to minimize energy use and support environmental goals. Sustainability considerations should be included from the beginning of the design process, with ongoing monitoring of energy consumption, efficiency, and emissions throughout the AI lifecycle.
By following these ten pillars, organizations can create AI systems that are ethical, trustworthy, and aligned with societal expectations. This framework provides a clear structure for managing AI challenges while promoting responsible innovation.
Data integrity is critical for ensuring AI systems remain accurate, fair, and reliable. Poor data management can lead to risks like bias, inaccuracy, and unreliable results. These issues can weaken trust in AI outputs and cause major operational and reputational problems. The KPMG Trusted AI framework highlights the need to maintain high-quality data throughout its lifecycle to ensure AI systems function effectively and meet ethical standards.
Without strong data governance, AI systems may produce flawed results. Issues such as incomplete, inaccurate, or irrelevant data can lead to biased or unreliable outputs, increasing risks across different AI applications.
Data often moves between systems for activities like training, testing, or operations. If these transfers are not handled properly, data may become corrupted, lost, or degraded. This can impact how AI systems perform.
To improve data governance, organizations can:
To minimize risks during data transfers, organizations should:
Using continuous monitoring systems helps maintain data integrity throughout the AI lifecycle. These systems can detect problems such as unexpected changes in dataset quality or inconsistencies in data handling. This allows for quick corrective actions when issues arise.
Maintaining data integrity is essential for deploying trustworthy AI systems. Organizations can reduce risks by establishing strong governance frameworks, protecting data interactions, and maintaining continuous validation processes. These actions improve the reliability of AI outputs while ensuring ethical and operational standards are met, helping build trust in AI technologies.
Managing requests related to data subject access is a major privacy challenge in AI. Organizations must make sure individuals can exercise their rights to access, correct, or delete personal information under laws like GDPR and CCPA. If these requests are not handled properly, it may lead to violations of regulations, a loss of consumer trust, and harm to the organization’s reputation.
To reduce this risk, companies should create programs to educate individuals about their data rights when interacting with AI. Systems must be set up to process these requests quickly and transparently. Organizations should also keep detailed records of how they handle these requests to prove compliance during audits.
AI systems often handle sensitive personal information, which makes them attractive targets for cyberattacks. If a breach occurs, it can cause significant regulatory fines, damage to a company’s reputation, and a loss of customer trust.
To combat this, the KPMG Trusted AI framework suggests conducting ethical reviews for AI systems that use personal data to ensure they meet privacy regulations. Regular data protection audits and privacy impact assessments (PIAs) are also necessary, especially when sensitive data is used for tasks like training AI models. Additionally, methods such as differential privacy, which adds statistical noise to data, can help anonymize information while still allowing for analysis.
AI systems that do not include privacy safeguards from the start can create serious issues. Without applying privacy-by-design principles, organizations risk exposing sensitive data or failing to comply with legal requirements.
Companies should include privacy measures during the development stages of AI systems. This involves following privacy laws and data protection regulations through strong data management practices. Clear documentation of how data is collected, used, and stored is crucial. Organizations must also get explicit user consent for data collection and processing, especially in sensitive areas like biometric data.
When AI systems do not clearly explain how user data is handled, it can result in mistrust and legal scrutiny. Users should know when their data is collected and how it is being used
The KPMG AI Risk and Controls Guide is a practical framework designed to help organizations manage the unique risks of AI, ensuring responsible, ethical, and compliant AI deployment across industries.
The guide is built on ten key pillars: Accountability, Fairness, Transparency, Explainability, Data Integrity, Reliability, Security, Safety, Privacy, and Sustainability—each addressing critical aspects of AI risk management.
The guide is intended for risk professionals, compliance teams, cybersecurity specialists, legal advisors, executives, AI developers, engineers, and organizations of all sizes seeking to manage AI responsibly.
It aligns with global standards such as ISO 42001, NIST AI Risk Management Framework, and the EU AI Act, helping organizations integrate AI-specific controls into existing governance processes and meet regulatory requirements.
It suggests measures like strong data governance, privacy-by-design, continuous monitoring, transparency in AI decisions, anomaly detection, feedback loops, and sustainability goals to reduce AI-related risks.
Discover how KPMG’s AI Risk and Controls Guide can help your organization embrace AI innovation while ensuring ethical, secure, and compliant deployment.
A practical guide for business leaders on implementing Human-in-the-Loop (HITL) frameworks for responsible AI governance, risk reduction, compliance, and buildi...
AI misuse at work isn’t an employee problem—it’s a leadership crisis. Explore why employees turn to AI tools in secret, the risks involved, and how leadership m...
Explore why AI leadership is essential for organizational success, how strong leaders drive AI transformation, and how FlowHunt's AI Leadership Workshop equips ...