DOJ’s Updated Evaluation of Corporate Compliance Programs: AI Compliance in Focus

Updated: Sep 30

On September 23, 2024, the U.S. Department of Justice (DOJ) updated its Evaluation of Corporate Compliance Programs (ECCP), highlighting the increasing importance of addressing risks associated with emerging technologies, particularly Artificial Intelligence (AI). This update is highly relevant for healthcare enterprises implementing AI solutions, as it emphasizes the need for robust compliance programs that address AI-related risks and align with DOJ expectations.



Washington DC


What is the ECCP?


The ECCP is a set of guidelines used by DOJ prosecutors to evaluate the effectiveness of corporate compliance programs during investigations. First established in 2017, it serves as a benchmark for assessing whether a company has established, implemented, and maintained adequate compliance protocols, considering factors such as program design, implementation, and whether the compliance function is adequately empowered and resourced. Although written for prosecutors, the ECCP offers invaluable insight into the DOJ's expectations for compliance programs. By evaluating their compliance programs against ECCP criteria, companies can improve their chances of a favorable outcome in the event of an enforcement action, including reduced penalties and less stringent compliance requirements.


Key ECCP Requirements for AI


The latest ECCP revisions underscore the DOJ’s heightened scrutiny of how companies manage AI-related risks. Prosecutors are now instructed to assess a company's approach to AI, examining several critical areas:

  1. Risk Assessment and Mitigation: Companies must evaluate AI-related risks, including the potential impact on compliance with criminal laws, and implement steps to mitigate these risks in both commercial operations and compliance programs.

  2. Incorporating Lessons Learned in Policies and Procedures: Updated policies and procedures should address emerging AI risks, including those stemming from the deployment and use of AI in compliance and commercial contexts. Policies, procedures, and employee training should reflect lessons learned about AI risk mitigation, not only from within the company but also from the broader industry.

  3. Governance and Monitoring: Companies must establish governance mechanisms to oversee AI use, ensuring ongoing monitoring of AI systems to identify any negative or unintended consequences.

  4. Training: Employees should be trained on the proper use of AI technologies, with an emphasis on understanding the risks associated with AI implementation.

  5. Incentivizing and Protecting Whistleblowers: The updated ECCP underscores the DOJ's expectation that corporations actively foster a culture that encourages internal reporting and safeguards individuals who report misconduct. In future evaluations, prosecutors will scrutinize companies' policies, training programs, and practices to determine whether they effectively promote whistleblowing, prevent retaliation, and demonstrate a genuine commitment to protecting employees who report wrongdoing.

  6. Data Access and Analytics: Compliance teams should have access to relevant data and analytics tools to enhance their ability to monitor AI systems effectively and assess their impact on compliance efforts.


Risks Associated with New and Emerging Technology


In a significant shift, the DOJ has issued a stark warning to businesses: the deliberate misuse of AI for white-collar crimes will not be tolerated. This directive, announced by Deputy Attorney General Lisa Monaco in March 2024, led to the integration of AI risk assessment into the ECCP.


The ECCP now mandates that prosecutors evaluate how companies measure and manage the risks associated with emerging technologies, including AI, both in their operations and compliance programs. This includes:

  • Risk Assessment: Companies must assess the risks AI poses to their compliance with criminal laws and take proactive steps to mitigate these risks.

  • Policy Development: The development and regular updating of policies and procedures to address emerging AI-related risks is essential.

  • Governance and Monitoring: A robust governance framework for AI, including monitoring and enforcement, is a key requirement.

  • Control Mechanisms: Companies must implement controls to monitor the reliability of AI systems and address potential negative or unintended consequences.

  • Employee Training: Training employees on the use of AI is vital to ensure compliance and mitigate risks.

To meet these DOJ expectations, companies must first understand how AI is deployed within their operations, assess the unique risks it presents, and establish comprehensive policies and procedures to mitigate those risks. A rigorous framework for managing AI risk is essential, and it must be regularly reviewed and updated to ensure effective implementation and monitoring.


How ALIGNMT AI Can Help Healthcare Enterprises


ALIGNMT AI’s platform is designed to help healthcare organizations navigate the complexities of AI compliance by providing automated platform solutions tailored to meet evolving regulatory standards like the DOJ’s updated ECCP. Here's how ALIGNMT AI can support compliance efforts:

  1. Risk Management and Assessment: ALIGNMT AI offers tools and controls that continuously assess AI-related risks in healthcare operations, helping organizations proactively manage compliance and prevent misuse of AI.

  2. Policy Automation: The ALIGNMT AI SaaS platform automates the creation and updating of compliance policies and procedures, ensuring they stay aligned with the latest regulations and guidance, including the ECCP.

  3. Governance and Monitoring Solutions: ALIGNMT AI provides robust governance features, including monitoring controls, that allow healthcare enterprises to maintain continuous oversight over AI deployments, ensuring ongoing compliance and mitigating potential risks associated with AI.

  4. Training Modules: ALIGNMT AI includes training resources to educate employees on AI risks, enhancing their understanding and compliance awareness as per DOJ’s expectations.

  5. Data Access and Integration: The ALIGNMT AI platform enables compliance teams to leverage data analytics, offering insights that help identify potential compliance issues early and measure program effectiveness.


Staying Ahead of Regulatory Requirements


As the DOJ continues to refine its expectations around AI compliance, healthcare enterprises must remain vigilant. By leveraging ALIGNMT AI, healthcare enterprises can establish a proactive, data-driven approach to AI governance, aligning their compliance programs with the DOJ’s latest ECCP updates, and demonstrating a commitment to maintaining ethical and legal standards in their use of emerging technologies.


For more details on the DOJ’s updates, you can explore the complete guidance here.

