“Can companies leverage AI systems and AI-powered tools responsibly to support the implementation of corporate Human Rights Due Diligence (HRDD) - without undermining the very people and communities they aim to protect?”
This was the question we at the Indo-German Chamber of Commerce (IGCC) began asking in late 2023, during the development of SCAN-R (scan-r.info), our free digital tool designed to help SMEs take their first steps towards HRDD, an obligation growing in importance through Germany’s Supply Chain Act (LkSG) and the upcoming EU Corporate Sustainability Due Diligence Directive (CSDDD).
As we got into the development process, we discussed potential technologies that could be integrated to support the tool’s objective. Could the use of Artificial Intelligence (AI) be beneficial in driving the adoption of sustainable business practices? If so, could it be done effectively and responsibly? Not knowing the answers, we decided to model SCAN-R on a predefined, algorithm-based framework instead.
However, these questions sparked over a year of investigative research into how AI can be adopted innovatively while balancing ethical considerations. It quickly became clear to us that AI, while promising immense potential for efficiency and scale, also raised pressing ethical concerns. As more companies look at integrating AI into their due diligence systems, the question becomes not just whether AI can support CHRDD processes, but whether it can do so without compromising human rights standards.
In today’s fast-changing business world, AI is quickly becoming part of everyday operations. From managing supply chains to improving customer service, AI has the power to make work faster and more efficient. But as useful as AI can be, its use in areas like Corporate Human Rights Due Diligence (CHRDD) calls for a more nuanced approach.
CHRDD is the process companies use to identify, address and prevent risks related to human rights and the environment across their operations and supply chains. As supply chains become increasingly global and complex, it is more important than ever for businesses to stay on top of ethical concerns. AI can play a big role in this. For example, it can process vast amounts of data to help spot potential human rights issues, predict risks before they happen and take care of time-consuming tasks so that human experts can focus on bigger decisions. Yet, using AI in this context also brings its own set of challenges.
Our research initiative explored how AI tools are currently being used, what kinds of risks they create, how stakeholders are affected and what value they truly bring to companies. It surfaced a central tension: while AI-powered tools can help accelerate due diligence efforts, their deployment, if not handled carefully, can create new risks, deepen existing inequalities and distance companies from the people their policies are meant to protect.
At the heart of CHRDD is the principle that companies must actively engage with the people and communities impacted by their operations. Human rights due diligence is not simply a data exercise. It requires empathy, critical judgement and engagement with stakeholders in context-rich environments. AI, by contrast, relies on patterns, data correlations and computational logic. When AI tools are used to make decisions in ethically sensitive domains such as risk assessments, remediation or stakeholder engagement, they may produce outcomes that appear objective but in fact reinforce blind spots or structural biases, especially when built on flawed or incomplete datasets.
In response to this challenge, we developed two analytical frameworks. The first is the AI-CHRDD Risk Pyramid, adapted from the EU AI Act’s four-level risk scale, which we expanded and applied specifically to AI interventions in corporate human rights and environmental due diligence. This model helped us evaluate the degree to which a given AI use case may negatively affect people, communities or the environment. The second model, the AI-CHRDD Value Pyramid, allowed us to assess how each AI application contributes positively to business processes, stakeholder relationships and internal policy development. We evaluated 29 real-world AI intervention scenarios using these frameworks, each grounded in actual or emerging corporate use cases.
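For technically minded readers, the sketch below illustrates how such a two-pyramid screening could be expressed in code. The tier names, thresholds and decision rules here are simplified assumptions made for this example only; the report’s actual assessment is richer and largely qualitative, and the ratings assigned to the two sample scenarios are illustrative.

```python
from dataclasses import dataclass
from enum import IntEnum

# Illustrative tiers only: the report's labels, adapted from the
# EU AI Act's four-level risk scale, are more nuanced than this.
class Risk(IntEnum):
    MINIMAL = 1
    MODERATE = 2
    HIGH = 3
    CRITICAL = 4  # ethically unacceptable, disqualified outright

class Value(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass
class Scenario:
    name: str
    risk: Risk    # position on the AI-CHRDD Risk Pyramid
    value: Value  # position on the AI-CHRDD Value Pyramid

def screen(scenario: Scenario) -> str:
    """Screen one AI intervention against both pyramids."""
    if scenario.risk == Risk.CRITICAL:
        # No business value justifies a critical human rights risk.
        return "reject"
    if scenario.value >= Value.MODERATE and scenario.risk <= Risk.MODERATE:
        return "adopt, with meaningful human oversight"
    return "escalate for case-by-case review"

# Two scenario types from the report, with assumed ratings.
for s in (
    Scenario("Predictive child-labour risk assessment", Risk.MODERATE, Value.HIGH),
    Scenario("AI-driven victim remedies", Risk.CRITICAL, Value.MODERATE),
):
    print(f"{s.name}: {screen(s)}")
```

The essential property the sketch captures is that the two pyramids are not traded off symmetrically: a critical risk rating disqualifies a scenario no matter how high its business value.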
Our findings were both illuminating and sobering. On one hand, certain AI tools showed real promise. For example, AI systems used for predictive risk assessments of child labour, updates to procurement policies based on supplier feedback and monitoring forced labour hotspots all demonstrated high value and low to moderate risk. These scenarios struck the right balance, enhancing companies’ ability to manage human rights risks while preserving stakeholder integrity and requiring meaningful human oversight.
On the other hand, some scenarios were identified as ethically unacceptable even when their technical performance appeared strong. In particular, five interventions were disqualified outright due to critical risk levels. These included AI-generated stakeholder impact prioritisation, AI-developed harassment prevention measures, AI-driven victim remedies and AI tools for interpreting and resolving grievances. In each of these cases, the use of AI was found to dehumanise processes, undermine stakeholder engagement and compromise accountability. No matter the potential benefits to the business, these tools failed the threshold of ethical defensibility and should not be adopted in CHRDD without fundamental changes and strong safeguards.
Throughout the scenarios, we observed recurring ethical challenges. Many AI models function as “black boxes,” making decisions that even their developers cannot easily explain. Others rely on data that contains embedded biases, whether regional, racial, gendered or economic. Some interventions showed tendencies to bypass or reduce stakeholder engagement, replacing human deliberation with algorithmic shortcuts. These issues, if left unaddressed, threaten to hollow out the very purpose of human rights due diligence itself.
We also recognised a longer-term concern: the loss of human judgement and critical thinking within companies as reliance on AI increases. CHRDD is not only about compliance; it is about fostering internal competence, ethical awareness and context-sensitive decision-making. The more businesses delegate complex assessments to AI, the more they risk deskilling their teams, eroding internal capacity and undermining their ability to respond to new legal, social or environmental challenges in the future.
To ensure responsible deployment of AI, we introduced the concept of the “Unknotted Pretzel” (UNP) checkpoint. Much like how pretzel dough must be shaped before it can be baked, AI-generated content must be evaluated, shaped and verified by experienced human rights professionals before it is used in decision-making. These UNP points serve as dedicated checkpoints where human oversight can intervene to correct errors, identify ethical red flags and uphold stakeholder dignity. This concept underscores our belief that AI cannot replace moral reasoning, cultural awareness or the right to be heard.
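To make the UNP idea concrete, here is a minimal sketch of what such a checkpoint could look like in software. The class and function names are hypothetical; the report does not prescribe any particular implementation, and this assumes a simple single-reviewer workflow.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Draft:
    content: str       # AI-generated output, e.g. a draft risk summary
    source_model: str  # which AI system produced it

@dataclass
class Review:
    approved: bool
    reviewer: str  # the named human rights professional accountable
    notes: str     # corrections, red flags or missing context

def unp_checkpoint(draft: Draft, review: Callable[[Draft], Review]) -> str:
    """Fail closed: block any AI output lacking explicit human approval."""
    verdict = review(draft)
    if not verdict.approved:
        raise ValueError(f"UNP checkpoint stopped the draft: {verdict.notes}")
    # Only reviewed and corrected content flows into decision-making.
    return f"{draft.content}\n[Verified by {verdict.reviewer}]"
```

The design choice worth noting is that the checkpoint fails closed: unless a named professional explicitly signs off, AI-generated content never reaches the decision-making step, which is precisely where errors are corrected, ethical red flags raised and stakeholder dignity upheld.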
Finally, we call on companies to be transparent about how they use AI. This includes disclosing the tools they employ, the safeguards they apply and how human oversight is built into their decision-making processes. Transparency builds trust not just with regulators, but with workers, suppliers, communities and the public.
In conclusion, we at the Indo-German Chamber of Commerce believe that AI can play a meaningful role in CHRDD when it is grounded in transparency, oversight, stakeholder engagement and ethical accountability. Companies must carefully weigh value against risk and avoid the temptation to automate moral judgements or bypass human voices. By following the insights from our investigation, businesses can pursue innovation that is not only smart and scalable but also responsible and just. Only then can we ensure that technology serves people, rather than the other way around.
To explore our full findings, frameworks and real-world case scenarios in detail, we invite you to read the complete report, available for download here: