Algorithmic Discrimination

Topic

A legal concept introduced in Colorado's AI law (SB24-205), defined as any AI-driven decision that results in a disparate impact on protected classes such as race or age.


First Mentioned

10/4/2025, 5:08:52 AM

Last Updated

10/4/2025, 5:13:14 AM

Research Retrieved

10/4/2025, 5:13:14 AM

Summary

Algorithmic discrimination, also known as algorithmic bias, refers to systematic and repeatable harmful tendencies in computerized systems that lead to unfair outcomes, often favoring one group over another. This bias can stem from many factors, including algorithm design; how data is collected, coded, or selected for training; and unintended or unanticipated uses. It has been observed in areas such as search engine results, social media, criminal justice, healthcare, and hiring, leading to privacy violations, reinforcement of societal biases (race, gender, sexuality, ethnicity), and even wrongful arrests tied to facial recognition inaccuracies. Sociologists are particularly concerned that algorithms, often perceived as neutral, organize society and behavior in the physical world while projecting undue authority due to automation bias. Researching this bias is difficult because algorithms are proprietary and complex, change dynamically, and often exist as networks of interrelated programs rather than single entities. Recent regulatory efforts, such as California's SB53 and Colorado's SB24-205, introduce the concept of algorithmic discrimination into law, prompting discussion of whether unified federal regulation is needed to prevent economic fragmentation and maintain US competitiveness.

Referenced in 1 Document
Research Data
Extracted Attributes
  • Definition

    Systematic and repeatable harmful tendencies in computerized sociotechnical systems that create unfair outcomes, privileging one category over another.

  • Observed in

    Search engine results, social media platforms, criminal justice, healthcare, hiring, election outcomes, spread of online hate speech.

  • Consequences

    Unfair outcomes, privacy violations, reinforcement of social biases (race, gender, sexuality, ethnicity), wrongful arrests (e.g., due to facial recognition inaccuracies), displacement of human responsibility.

  • Causes of Bias

    Algorithm design, data collection, coding, selection, training data, unintended/unanticipated uses, pre-existing cultural/social/institutional expectations, feature/label selection, technical limitations, use in unanticipated contexts.

  • Impact on Hiring

    Shifts focus of statistical discrimination theory from traditional to intelligent hiring, relies on historical data of specific populations.

  • Proactive Measures

    Testing for discrimination before deployment, designing for equity, ongoing monitoring and mitigation of unforeseen inequities.

  • Regulatory Principle

    Governed by the equality principle, forbidding discrimination in general.

  • Challenges in Research

    Proprietary nature of algorithms (trade secrets), inherent complexity, dynamic changes making reproduction difficult, algorithms existing as networks of programs.

  • Mechanism of Discrimination

    Explicit discriminatory intent (human bias masked by algorithms), feature selection and weighting in decision-making, biased data.

  • Types of Bias (2021 Survey)

    Historical bias, representation bias, measurement bias.
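
The pre-deployment testing named under "Proactive Measures" is often operationalized as a disparate-impact check such as the four-fifths rule of thumb. A minimal sketch in Python, using hypothetical group labels and selection outcomes (the function names and data here are illustrative, not from any cited framework):

```python
from collections import defaultdict

def selection_rates(records):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Ratio of the lowest group selection rate to the highest.
    Under the four-fifths rule of thumb, a ratio below 0.8 flags
    potential adverse impact (a screening heuristic, not a legal test)."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring outcomes: (group label, selected?)
records = [("A", True)] * 60 + [("A", False)] * 40 \
        + [("B", True)] * 30 + [("B", False)] * 70

print(selection_rates(records))        # {'A': 0.6, 'B': 0.3}
print(disparate_impact_ratio(records)) # 0.5 -> below 0.8, flags review
```

A ratio this far below 0.8 would trigger the kind of ongoing monitoring and disparity assessment the OSTP guidance describes, though passing the heuristic does not establish the absence of discrimination.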

Timeline
  • European Union's General Data Protection Regulation (GDPR) proposed, beginning to address algorithmic bias in legal frameworks. (Source: Wikipedia)

    2018-XX-XX

  • A survey identified multiple forms of algorithmic bias, including historical, representation, and measurement biases. (Source: Wikipedia)

    2021-XX-XX

  • European Union's Artificial Intelligence Act proposed, further addressing algorithmic bias. (Source: Wikipedia)

    2021-XX-XX

  • European Union's Artificial Intelligence Act approved. (Source: Wikipedia)

    2024-XX-XX

  • California's SB53 introduces the concept of algorithmic discrimination. (Source: related_documents)

    XXXX-XX-XX

  • Colorado's SB24-205 introduces the concept of algorithmic discrimination. (Source: related_documents)

    XXXX-XX-XX

Algorithmic bias

Algorithmic bias describes a systematic and repeatable harmful tendency in a computerized sociotechnical system to create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm. Bias can emerge from many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm. For example, algorithmic bias has been observed in search engine results and social media platforms. This bias can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity.

The study of algorithmic bias is most concerned with algorithms that reflect "systematic and unfair" discrimination. This bias has only recently been addressed in legal frameworks, such as the European Union's General Data Protection Regulation (proposed 2018) and the Artificial Intelligence Act (proposed 2021, approved 2024).

As algorithms expand their ability to organize society, politics, institutions, and behavior, sociologists have become concerned with the ways in which unanticipated output and manipulation of data can impact the physical world. Because algorithms are often considered to be neutral and unbiased, they can inaccurately project greater authority than human expertise (in part due to the psychological phenomenon of automation bias), and in some cases, reliance on algorithms can displace human responsibility for their outcomes. Bias can enter algorithmic systems as a result of pre-existing cultural, social, or institutional expectations; by how features and labels are chosen; because of technical limitations of their design; or by being used in unanticipated contexts or by audiences who are not considered in the software's initial design.
Algorithmic bias has been cited in cases ranging from election outcomes to the spread of online hate speech. It has also arisen in criminal justice, healthcare, and hiring, compounding existing racial, socioeconomic, and gender biases. The relative inability of facial recognition technology to accurately identify darker-skinned faces has been linked to multiple wrongful arrests of black men, an issue stemming from imbalanced datasets. Problems in understanding, researching, and discovering algorithmic bias persist due to the proprietary nature of algorithms, which are typically treated as trade secrets. Even when full transparency is provided, the complexity of certain algorithms poses a barrier to understanding their functioning. Furthermore, algorithms may change, or respond to input or output in ways that cannot be anticipated or easily reproduced for analysis. In many cases, even within a single website or application, there is no single "algorithm" to examine, but a network of many interrelated programs and data inputs, even between users of the same service. A 2021 survey identified multiple forms of algorithmic bias, including historical, representation, and measurement biases, each of which can contribute to unfair outcomes.
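
Per-group accuracy disparities like the facial-recognition gap described above are typically surfaced by disaggregating an error metric by group rather than reporting a single aggregate number. A minimal illustrative sketch, with made-up evaluation data (the group labels and error counts are hypothetical):

```python
from collections import defaultdict

def per_group_error_rates(samples):
    """Misclassification rate per group from (group, y_true, y_pred) triples.
    Disaggregating an aggregate metric this way is how imbalanced training
    data surfaces as representation/measurement bias in evaluation."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in samples:
        totals[group] += 1
        if y_true != y_pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation set: a decent overall accuracy hides a large gap.
samples = (
    [("light", 1, 1)] * 95 + [("light", 1, 0)] * 5    # 5% error
    + [("dark", 1, 1)] * 70 + [("dark", 1, 0)] * 30   # 30% error
)
rates = per_group_error_rates(samples)
print(rates)  # {'light': 0.05, 'dark': 0.3}
```

Here the aggregate error rate (17.5%) obscures a sixfold disparity between groups, which is exactly the pattern the 2021 survey's historical, representation, and measurement bias categories are meant to diagnose.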

Web Search Results
  • Algorithmic discrimination: examining its types and regulatory ...

    By evaluating pertinent U.S. laws and state court decisions, it can be concluded that algorithmic discrimination is governed by the equality principle, which forbids discrimination in general. It highlights the need for algorithmic designers to abide by current laws that provide equal protection against discrimination to citizens and consumers (Janssen and Kuk, 2016). [...] Explicit discriminatory intent: In the first case, an algorithm user makes a decision by considering membership of a protected group and intentionally changes some aspect of the algorithm or its components to produce a biased result. In this case, algorithmic discrimination is actually just human bias masked by algorithms (Wachter, 2022). For example, if a bank intentionally denies a loan to an applicant of a certain ethnic group, even though they meet the lending criteria, it is a clear [...] Algorithmic discrimination can also arise from the way features are selected and weighted in the decision-making process. This type of discrimination is closely related to the problem of biased data, but it specifically involves the choices made by algorithm designers in determining which attributes to include and how to prioritize them (Selbst and Barocas, 2016). For example, a college admissions algorithm that heavily weights standardized test scores may discriminate against students from

  • Ethics and discrimination in artificial intelligence-enabled ... - Nature

    ")), and Jackson (2021 Artificial intelligence & algorithmic bias: the issues with technology reflecting history & humans. J Bus Technol Law 16:299")) suggest that the reason for algorithmic discrimination is related to data selection. Data collection tends to prefer accessible, €œmainstream€ organizations unequally dispersed by race and gender. Inadequate data will screen out groups that have been historically underrepresented in the recruitment process. Predicting future [...] The digital economy has witnessed the application of various artificial intelligence technologies in the job market. Consequently, the issue of algorithmic hiring discrimination has emerged, shifting the focus of statistical discrimination theory from traditional hiring to intelligent hiring. The mechanisms that give rise to hiring discrimination problems remain similar, as both rely on historical data of specific populations to predict future hiring outcomes. [...] Thirdly, we take a comprehensive approach that considers technical and managerial aspects to tackle discrimination in algorithmic hiring. This study contends that resolving algorithmic discrimination in recruitment requires technical solutions and the implementation of internal ethical governance and external regulations.

  • Algorithmic Discrimination Protections | OSTP | The White House

    , religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive [...] Any automated system should be tested to help ensure it is free from algorithmic discrimination before it can be sold or used. Protection against algorithmic discrimination should include designing to ensure equity, broadly construed. Some algorithmic discrimination is already prohibited under existing anti-discrimination law. The expectations set out below describe proactive technical and policy steps that can be taken to not only reinforce those legal protections but extend beyond them to [...] Ongoing monitoring and mitigation. Automated systems should be regularly monitored to assess algorithmic discrimination that might arise from unforeseen interactions of the system with inequities not accounted for during the pre-deployment testing, changes to the system after deployment, or changes to the context of use or associated data. Monitoring and disparity assessment should be performed by the entity deploying or using the automated system to examine whether the system has led to

  • AI bias: exploring discriminatory algorithmic decision-making ...

    A case will be created highlighting the discrimination issue in algorithmic decision-making using two case studies which clearly show the presence of well-documented biases (based on race and gender) with the application of a suggested model to conduct a bias impact assessment. In the subsequent sections, the problems associated with data collection will be introduced, suggesting three possible tools (four-stage implementation, boxing method and a more practical application of the protected [...] 8. Algorithmic bias. Algorithmic bias is when the bias is not actually in the input data and is created by the algorithm. This article, as it is common in AI ethics literature, will concentrate on the problematic cases in which the outcome of bias may lead to discrimination by AI-based automated decision-making environments and an awareness of the different types can be helpful. Algorithmic decision-making that discriminates and the problem with data [...] AI Bias is when the output of a machine-learning model can lead to the discrimination against specific groups or individuals. These tend to be groups that have been historically discriminated against and marginalised based on gender, social class, sexual orientation or race, but not in all cases. This could be because of prejudiced assumptions in the process of developing the model, or non-representative, inaccurate or simply wrong training data. It is important to highlight that bias means a

  • What Is Algorithmic Bias? | IBM

    The Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence sets guidelines for AI development and use, including addressing algorithmic discrimination through training, technical assistance and coordination between the US Department of Justice and federal civil rights offices. [...] The White House’s Blueprint for an AI Bill of Rights has a principle dedicated to algorithmic discrimination protections. It includes expectations and guidance on how to put this principle into practice.