Credo AI Policy Packs: Human Resources Startup Compliance with NYC LL-144
Credo AI's Policy Pack for NYC LL-144 encodes the law's principles into actionable requirements and adds a layer of reliability and credibility to compliance efforts.
Background & Description
In December 2021, the New York City Council passed Local Law 144 (LL-144), mandating that AI and algorithm-based technologies used for recruiting, hiring, or promotion be audited for bias before being used.
The law also requires employers to conduct independent audits annually and publicly post the results, assessing the statistical fairness of their processes across race and gender.
Credo AI created a Policy Pack for NYC LL-144 that encodes the law's principles into actionable requirements. An AI-powered HR talent-matching startup ("HR Startup") used Credo AI's Responsible AI Governance Platform and LL-144 Policy Pack to address this and other emerging AI regulations.
This approach allows organisations to govern automated decision-making tools used in hiring beyond NYC's LL-144. Organisations using the Platform can map and measure bias in their systems and apply different policy packs, including custom policy packs that allow them to align with internal policies and meet regulatory requirements in different jurisdictions.
Relevant Cross-Sectoral Regulatory Principles
Safety, Security & Robustness
Under NYC LL-144, employers and employment agencies using automated employment decision tools must obtain a bias audit. The HR talent-matching startup used Credo AI's Platform to perform a bias assessment of a tool that helps identify high-potential candidates for apprenticeship-based training and employment. This approach involved defining context-driven governance requirements for the AI system, conducting technical assessments of data and models, generating governance artefacts, and providing human-in-the-loop reviews to effectively measure performance and robustness. A simplified sketch of the central calculation in such an audit follows.
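To make the audit step concrete, here is a minimal sketch of the core LL-144 calculation: each category's selection rate divided by the selection rate of the most-selected category. The `impact_ratios` helper, the column names, and the toy data are all illustrative assumptions; they do not represent Credo AI's Platform code or the HR Startup's actual tool.

```python
# Illustrative sketch of an LL-144-style impact-ratio calculation.
# Names and data are hypothetical, not Credo AI's implementation.
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Selection rate of each category divided by the selection rate
    of the most-selected category, as LL-144 bias audits report."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates / rates.max()

# Hypothetical candidate-level audit data: one row per applicant.
candidates = pd.DataFrame({
    "race":     ["A", "A", "B", "B", "B", "C", "C", "C"],
    "selected": [1,   0,   1,   1,   0,   0,   1,   0],
})

print(impact_ratios(candidates, "race", "selected"))
```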
Appropriate Transparency & Explainability
NYC LL-144 requires organisations to publicly report their use of artificial intelligence and their compliance with the regulations, which can be a complex and time-consuming task. Credo AI's Platform Reports enabled the HR Startup to generate a standardised LL-144 report with the custom add-on bias results they wanted to include in addition to the legal requirements; a hypothetical sketch of such a summary follows.
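As a rough illustration of what a standardised summary might contain, the sketch below serialises hypothetical audit results, the legally required impact ratios plus optional add-on metrics, into JSON. The field names and schema are assumptions made for illustration, not Credo AI's report format.

```python
# Minimal sketch of assembling a public LL-144 summary from audit
# results. Structure and field names are illustrative only.
import json
from datetime import date

audit_results = {
    "tool_name": "candidate-matching-model",  # hypothetical tool name
    "audit_date": date.today().isoformat(),
    "impact_ratios": {                        # LL-144 covers race/ethnicity and sex
        "race": {"A": 0.75, "B": 1.00, "C": 0.50},
        "sex": {"female": 0.91, "male": 1.00},
    },
    "addon_metrics": {                        # optional extras beyond the legal minimum
        "intersectional_ratios": {"B_female": 0.88},
    },
}

# Serialise the summary so it can be rendered into a public report.
with open("ll144_summary.json", "w") as f:
    json.dump(audit_results, f, indent=2)
```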
Fairness
The HR Startup evaluated its models for fairness against the requirements outlined in the Policy Pack, using the recommended fairness tools provided by Credo AI Control Docs. The HR Startup uploaded the results back to the Platform, which helped them see whether their results were within the bounds of the "four-fifths rule" or not, as illustrated in the sketch below.
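The sketch below shows one way such a bounds check could look, comparing impact ratios against the EEOC's four-fifths (80%) guideline. Note that LL-144 itself requires impact ratios to be reported rather than setting a pass/fail threshold, so the threshold and the example ratios here are illustrative assumptions.

```python
# The EEOC 'four-fifths' guideline treats a selection rate below 80%
# of the highest group's rate as a signal of possible adverse impact.
FOUR_FIFTHS = 0.8

ratios = {"A": 0.75, "B": 1.00, "C": 0.50}  # e.g. output of impact_ratios() above

for group, ratio in ratios.items():
    status = "within bounds" if ratio >= FOUR_FIFTHS else "flag for review"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```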
Accountability & Governance
Reporting requirements, like those stipulated in NYC LL-144, enable accountability for AI systems' behaviours and can help establish standards and benchmarks with respect to what "good" looks like. Many organisations are wary about sharing results about the behaviour of their AI systems externally, because they do not know how their results might compare with those of competitors, or whether they will appear "good" or "bad" to external stakeholders.
Why we took this approach
The HR tech startup produced a bias audit in compliance with New York City's algorithmic hiring law using Credo AI's Platform and LL-144 Policy Pack. By performing bias assessments and engaging in third-party reviews through the Platform, the talent-matching startup met NYC LL-144's requirements and improved customer trust.
Aside from assessing organisations' systems for LL-144 compliance, Credo AI's human review of the assessment report identifies assessment gaps and opportunities, which increases its reliability and provides additional assurance to stakeholders. This third-party review provided the HR Startup with insights and recommendations for bias mitigation and improved compliance.
Beyond NYC's LL-144, this approach can be applied to other regulatory regimes that aim to prevent discrimination by algorithm-based or automated decision tools. For example, enterprises looking to map and measure bias across categories protected under the UK's Equality Act, or to produce bias audits as part of the risk management system required under the EU AI Act, can leverage Credo AI's Platform with custom policy packs or the EU AI Act high-risk AI system policy pack.
Benefits to the organisation using the technique
Utilising Credo AI's Platform and NYC LL-144 Policy Pack allowed the HR Startup to streamline the implementation of technical evaluations for their data and models, while also facilitating the creation of compliance reports with human-in-the-loop review. This process also enabled the HR Startup to showcase their commitment to responsible AI practices to both clients and regulatory bodies, achieving full compliance with LL-144 within two months.
Furthermore, by establishing an AI Governance process, the HR Startup is able to apply additional Policy Packs to comply with other emerging regulations.
Limitations of the approach
Demographic data such as gender, race, and disability is necessary for the bias assessment and mitigation of algorithms. It helps discover potential biases, identify their sources, develop strategies to address them, and evaluate the effectiveness of those strategies. However, "ground-truth" demographic data is not always available, for a variety of reasons. While many organisations do not have access to such data, leading to partial fairness evaluations, the HR Startup did have access to self-reported data. Self-reported demographic data directly reflects the individual's own perspective and self-identification, has high accuracy, is explainable, and does not require proxy data, but it also has limitations. These include incomplete or unrepresentative datasets due to privacy concerns or fear of discrimination, availability latency, and the potential for errors due to social desirability bias and misinterpretation.
It is important to remember that other demographic data collection approaches, such as human annotation and algorithmic inference, also have limitations. Human-annotated demographic data relies on a human annotator's best perception of an individual's demographic attributes and is subject to observer bias, while algorithmically inferred demographic data can further propagate biases in training data and models and has limited explainability.
Bias and fairness assessments of algorithm-based technologies used for recruiting, hiring, or promotion can only be as good as the data that is available.
Further Links (including relevant standards)
Further AI Assurance Information
- For more information about other techniques, visit the CDEI Portfolio of AI Assurance Tools: /ai-assurance-techniques
- For more information on relevant standards visit the AI Standards Hub: