17.11.2023
How I Found Out that ‘Bias’ Was Not a Dirty Word. The Interpretation of ‘Bias’ in the Light of the AI Act Proposal
Like most lawyers working in the field of IT law, I have diligently tracked the evolution of the AI Act Proposal. While the lengthy Regulation presents a multitude of facets open to discussion, consensus has emerged across the legal community on certain key points. Among these are the unanimous understanding that AI systems should be human-centric and the shared belief in the necessity to proactively regulate certain uses and impose outright prohibitions in specific instances. An overarching point of agreement centres on the imperative task of addressing and mitigating bias in AI outputs. This collective concern extends to ensuring accuracy, fortifying robustness, implementing human oversight, and establishing sound data governance practices.
The Interpretation Dilemma
Coming from the realm of human rights law, I strongly share the concern that bias is an inherent negative to be diligently avoided. While the collaborative efforts of multiple stakeholders and experts from diverse fields have contributed to the creation of the AI Act Proposal, it is essential to recognise that, ultimately, its application and, more significantly, its interpretation will rest in the hands of legal professionals. This presents a looming challenge that warrants prompt attention. Why a challenge? Because, as lawyers, we operate within the framework of stringent rules when dissecting and interpreting every word within a given legal instrument. Our objective is to discern the precise meaning and legislative intent, upholding legal certainty and predictability. While these rules of interpretation share commonalities, variations exist among jurisdictions. For instance, certain countries codify their interpretative rules in laws (as seen in Bulgaria), while others derive them from case law (as exemplified by the UK) or legal doctrine (as observed in France).
By comparison, the EU has traversed a considerable journey since the 1960s, when Walter Hallstein envisioned it as a ‘community of law.’ However, despite the prolific workload of the CJEU (previously the ECJ) in handling preliminary references, the methods of interpretation remain somewhat ambiguous. Take, for example, the preference for teleological over linguistic interpretation. While this preference stems from factors like multilingualism and the policy-oriented nature of EU rules, the latter has undergone significant transformations in the past two decades. Moreover, the imperative for harmonious interpretation by national courts has introduced complexities and frictions, compelling these courts to interpret their statutory laws in ways that might diverge from the rules of interpretation in their legal systems. Returning to the illustrative case of Bulgaria, Art. 46(1) of its Law on Normative Acts states that ‘[t]he provisions of the normative acts are applied according to their exact meaning, and if they are unclear, they are interpreted in the meaning that best corresponds to other provisions, the purpose of the interpreted act and the basic principles of the law of the Republic of Bulgaria.’ This presents a notable contrast between the national and EU approaches, with the former distinctly favouring linguistic interpretation as the primary consideration.
This brief overview sheds light on the myriad interpretative challenges faced by national and EU lawyers and judges grappling with the ever-expanding body of legislation. However, the issue at hand introduces an additional layer of complexity.
While the concept of multidisciplinary work groups and a multidisciplinary legislative process presents numerous advantages, a significant issue arises when these processes are implemented without a robust methodology. Beyond the discussions and brainstorming, the drafting of laws is an intricate endeavour ideally governed by strict rules to ensure consistency and, consequently, legal certainty. Unfortunately, contemporary EU law falls short of this consistency goal. The ever-increasing length of legal instruments does not make them more precise; rather, it departs from the classical rules of law-making in the civil law tradition. In addition, parallel frameworks create complexity which undermines legal certainty in areas governed simultaneously by many frameworks. For example, it is not clear how the impact assessment under the GDPR relates to the human rights impact assessment under the future Corporate Sustainability Due Diligence Directive. Furthermore, the excessive use of terminology which is either open to interpretation, such as ‘transparency’, or multidisciplinary by nature, such as ‘interpretability’, increases the risk that courts, including the CJEU, will provide inconsistent or even incorrect interpretations.
Examining the AI Act Proposal, which was our starting point, we find precisely such a risk with respect to the interpretation of the notion of bias. The issue arises at the intersection of the differing interpretation methods at the EU and national levels and the multidisciplinary nature and language of the Regulation.
The Linguistic Approach
Whether it's a national court or the CJEU interpreting the term 'bias' in the forthcoming AI Act, the initial step should, by necessity, entail a linguistic interpretation. In the absence of a legal definition, standard practice dictates consulting authoritative dictionaries to discern the 'ordinary meaning' of the word. In this specific instance, a comprehensive examination of the term in its English-language context involves comparing the definitions from the Cambridge Dictionary and Merriam-Webster.
In the first case, ‘bias,’ as a noun, is defined as “the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment” or, alternatively, “the fact of preferring a particular subject or thing.” The Merriam-Webster Dictionary’s first definition is “an inclination of temperament or outlook, especially: a personal and sometimes unreasoned judgment.” However, it also offers other options, such as “deviation of the expected value of a statistical estimate from the quantity it estimates” or “systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others.” The last two alternatives illustrate the extent to which a term can diverge from its everyday usage when examined through the perspectives of diverse scientific fields.
While the existence of multiple meanings for a term in legislation is not a novel issue (e.g., privacy as a fundamental right, privacy as data privacy, or even privacy as data protection), the inclusion of 'bias' in the AI Act poses a distinct challenge for two main reasons. Firstly, the concept could be interpreted through the lenses of at least five fields of study. Secondly, the application of the term in the AI Act Proposal is inconsistent and, upon teleological interpretation, reveals diverse meanings that may not be immediately evident to an individual not proficient in all five of these fields.
To illustrate the scale of the issue, Table 1 presents a side-by-side comparison of the definitions of bias in law, sociology, psychology, statistics, and neural network theory (as part of machine learning) and compares them to the use of the term in the AI Act Proposal, taking the initial version of the Commission as a main reference point.
| Field | Definition of bias |
| --- | --- |
| Law | Predisposition or prejudice that affects the impartiality or objectivity of legal decision-making. |
| Psychology | A systematic pattern of deviation from norm or rationality in judgment; a tendency, inclination, or prejudice toward or against something or someone. |
| Statistics | Systematic error in the estimation of a parameter. |
| Sociology | Systematic errors or distortions in the way sociological research is conducted, interpreted, or presented. |
| Neural Network Theory | An additional parameter (a constant) in the model that provides the network with the flexibility to fit the data better and improve the model's performance by shifting the activation function towards the positive or negative side. |
Table 1
Bias & Law
In a legal context, bias is often defined as a predisposition or prejudice that affects the impartiality or objectivity of legal decision-making. It can manifest at various stages of the legal process, exerting influence on the actions and judgments of those involved. Within this framework, three distinct types of legal bias emerge: judicial, prosecutorial, and jury bias. This division is based on procedural considerations and is more common in criminal law, particularly well explored within Common Law legal systems.
Furthermore, we can consider legislative bias as a separate category, manifesting itself in the creation of legal instruments that disproportionately affect certain groups or individuals. This bias may be unintentional or reflective of systemic inequalities, potentially leading to structural bias when the legal system itself is structured in a way that systematically disadvantages certain groups or individuals.
The presence and impact of other forms of bias in the legal domain, such as confirmation bias and implicit bias, are closely linked to the realm of psychology. In understanding these different types of bias, law draws on definitions and mitigation mechanisms from psychology.
It is evident that the typical understanding of bias, inherent for lawyers reading the AI Act, primarily pertains to prejudice and predisposition as human conditions. However, the Regulation does not employ bias in this manner, except in the proposed amendment by the European Parliament (EP) to Art 59, paragraph 4a, which provides that “[e]ach national supervisory authority shall exercise their powers and carry out their duties independently, impartially and without bias.”
Bias & Psychology
In the realm of psychology, bias is defined as the systematic pattern of deviation from norm or rationality in judgment. It can manifest itself in various stages of the research process, influencing the way studies are designed, data is collected, and conclusions are drawn. Additionally, psychologists explore bias itself as an object of examination. Psychology Today offers a comprehensive definition, describing bias as "a tendency, inclination, or prejudice toward or against something or someone." Importantly, psychology recognises that biases can be observed as both positive and negative tendencies.
Within the AI Act Proposal, particularly in the amendments suggested by the EP, several provisions reference bias in alignment with the understanding and study of psychology. For example, in the provisions concerning the requirement for human oversight, the EP proposes the addition of point (ab) to Art 16, paragraph 1, creating an obligation for providers of high-risk AI systems to “ensure that natural persons to whom human oversight of high-risk AI systems is assigned are specifically made aware of the risk of automation or confirmation bias.” This interpretation sharply contrasts with other uses of the term ‘bias,’ as I discuss below.
Bias & Sociology
In sociology, bias denotes systematic errors or distortions in the way sociological research is conducted, interpreted, or presented. These biases can stem from various sources and can undermine the validity and reliability of sociological findings. The focus here is on biases occurring during the conduct of sociological research, distinguishing them from biases exhibited by participants in the research.
Understanding the meaning and various types of biases in sociological research holds particular importance in the context of the AI Act for two key reasons. First, given the pivotal role of data in training AI models (including data coming from sociological research), awareness of specific biases (such as sampling bias, confirmation bias, observer bias, cultural bias, etc.) is crucial, as these biases may manifest in the output. Second, with ongoing efforts to integrate social scientists into the design and development process of AI systems, methodologies must be harmonised, incorporating bias awareness and mitigation.
The use of the term ‘bias’ in sociology also bears significance for the interpretation of the AI Act Proposal, as it aligns with and complements the meaning of bias in statistics.
Bias & Statistics
In statistics, bias refers to the systematic error in the estimation of a parameter. It occurs when there is a consistent tendency for the estimates to deviate from the true value in a particular direction. This can compromise the accuracy and validity of statistical analyses, potentially leading to incorrect conclusions or interpretations.
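To make this definition concrete, here is a minimal Python sketch (the distribution, sample size, and number of trials are illustrative assumptions, not drawn from the AI Act context) in which an estimator's expected value systematically deviates from the true parameter, which is precisely what statisticians call bias:

```python
# Bias in the statistical sense: the expected value of an estimator
# deviates systematically from the true parameter. The classic example
# is the 'plug-in' variance estimator, which divides by n instead of n - 1.
import numpy as np

rng = np.random.default_rng(0)
true_variance = 4.0          # samples are drawn from N(0, 2^2)
n, trials = 5, 100_000

biased_estimates = []
unbiased_estimates = []
for _ in range(trials):
    sample = rng.normal(0.0, 2.0, size=n)
    biased_estimates.append(sample.var(ddof=0))    # divides by n
    unbiased_estimates.append(sample.var(ddof=1))  # divides by n - 1

print(f"true variance:              {true_variance}")
print(f"mean of biased estimator:   {np.mean(biased_estimates):.2f}")    # ~3.2, systematically low
print(f"mean of unbiased estimator: {np.mean(unbiased_estimates):.2f}")  # ~4.0
```

No matter how many trials are averaged, the first estimator stays systematically below the true value; the deviation is a consistent tendency in one direction, not random noise.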
Although bias in statistics and sociology may initially appear similar, they are distinct concepts. Statistics addresses systematic errors introduced into the sampling or measurement process, leading to results that consistently deviate from true values. In sociology, bias refers to a systematic and often unconscious favouritism or prejudice that influences the research process or the interpretation of results. In other words, bias in statistics is focused on errors and deviations from the true value, whereas in sociology it concerns the subjective perspectives that researchers or societal structures bring into the research process, which in turn lead to systematic errors. The types of bias also differ; for example, sociological bias includes cultural bias and observer bias. Mitigation mechanisms vary as well, with statistics employing rigorous sampling methods, careful measurement, and appropriate statistical techniques, while sociology relies on measures and training to increase reflexivity, awareness of potential biases, and efforts to minimise personal and cultural biases.
Focusing on the application of statistics for the purposes of the AI Act, the field is crucial for AI in several areas: data analysis and interpretation, using descriptive statistics to assess the variability of data, using inferential statistics and probability theory to train models to make predictions, evaluating the performance of a model through hypothesis testing and regression analysis, introducing fairness metrics, etc.
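To illustrate the last of these areas, a fairness metric in its simplest form is just a descriptive statistic computed per group. The sketch below (the model predictions and group labels are invented toy data, not taken from any real system) computes a demographic parity difference, one common metric of this kind:

```python
# A minimal sketch of a simple fairness metric: demographic parity
# difference, i.e. the gap in positive-outcome rates between two groups.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # binary model outputs
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = predictions[group == "A"].mean()  # positive rate for group A
rate_b = predictions[group == "B"].mean()  # positive rate for group B

print(f"positive rate, group A: {rate_a:.2f}")                       # 0.60
print(f"positive rate, group B: {rate_b:.2f}")                       # 0.40
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")  # 0.20
```

A gap of this kind is one way a bias in a dataset or in model outputs can be quantified statistically.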
Therefore, it is relatively safe to assume that in Art 10(2)(f) of the AI Act Proposal we should interpret the term ‘bias’ in the meaning it has in statistics. However, according to the EP’s proposal, the text of the provision should read:
2. Training, validation and testing data sets shall be subject to data governance appropriate for the context of use as well as the intended purpose of the AI system. Those measures shall concern in particular,
…
(f) examination in view of possible biases that are likely to affect the health and safety of persons, negatively impact fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations (‘feedback loops’) and appropriate measures to detect, prevent and mitigate possible biases;
(emphasis added to indicate the proposed changes by the EP)
It is evident that the proposed changes also alter the meaning of the term ‘bias’. In this particular case, it moves away from statistics towards sociology and even the dictionary meaning, encompassing awareness of societal impact and prejudice, while at the same time retaining the technical characteristics of the term in statistics.
This convergence of meanings poses a significant threat to legal certainty, especially as we delve into the final field of study covered in this piece, namely neural networks.
Bias & Neural Network Theory
In the context of neural networks, bias has a different meaning compared to statistics. In neural networks, a bias term is an additional parameter (along with weights) in the model that provides the network with the flexibility to fit the data better and improve the model's performance by shifting the activation function towards the positive or negative side.
During the training of a neural network, both the weights and biases are adjusted to minimise the difference between the predicted output and the actual output (the loss). An optimisation algorithm, such as stochastic gradient descent, is used to update the weights and biases iteratively.
Bias terms provide flexibility to the model by allowing it to account for variations in the data that are not directly related to the input features. They help the model generalise well to different situations. For example, imagine you want your model to predict how much time it takes for a delivery person to reach different houses in a neighbourhood, using features like distance and the number of packages as inputs. Without bias, your model would assume that the delivery time depends solely on the distance and the number of packages. However, there could be additional factors affecting delivery time, like traffic conditions or the overall speed of the delivery person, which are not directly related to the distance or package count. With bias, your model can learn a baseline delivery time that is not influenced by distance or package count. This essentially means that even if the distance were zero and there were no packages, there would still be some time taken for the delivery. In other words, the neurons would be activated, and backpropagation would occur, even if the input is zero.
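The delivery example can be written out in a few lines of code. In the minimal numpy sketch below (the data-generating rule and the five-minute baseline are invented purely for illustration), gradient descent on a single linear unit recovers the baseline delivery time as the value of its bias term:

```python
# Bias in the neural-network sense: a learnable constant added to the
# weighted inputs. A single linear unit learns delivery time from
# distance and package count; the toy data has a 5-minute baseline.
import numpy as np

rng = np.random.default_rng(1)
distance = rng.uniform(0.0, 10.0, size=(200, 1))             # km
packages = rng.integers(1, 5, size=(200, 1)).astype(float)   # count
X = np.hstack([distance, packages])
y = 2.0 * distance + 1.0 * packages + 5.0  # 2 min/km + 1 min/pkg + 5 min baseline

w = np.zeros((2, 1))  # weights for distance and packages
b = 0.0               # the bias term
lr = 0.02

for _ in range(10_000):             # plain gradient descent on the MSE loss
    pred = X @ w + b                # the bias shifts the unit's output
    err = pred - y
    w -= lr * (X.T @ err) / len(y)  # update weights
    b -= lr * err.mean()            # update bias

print(w.ravel(), b)  # should approach [2.0, 1.0] and ~5.0, the baseline time
```

The learned value of b is not a flaw to be mitigated; it is what allows the model to represent a delivery time that is non-zero even when both inputs are zero.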
In summary, in neural networks, bias is a learnable parameter that plays a crucial role in the activation of neurons and contributes to the overall flexibility of the model. It allows the model to capture complex relationships between input features and output predictions.
This shows that the meaning of bias in the context of neural networks differs significantly from the other meanings discussed above. Here, bias is a desired and necessary component of neural networks as a specific type of machine learning model, and, as a result, this usage also falls within the scope of the AI Act. This may lead to incorrect interpretation by lawyers in court, regulators, or legislators, especially where the Regulation includes provisions on ‘bias’ in AI systems without specifying datasets, outputs, etc.
The Way Forward
This short piece is intended to trigger a meaningful discussion in two main directions. Firstly, we need to raise awareness of, and attention to, the details involved when introducing into our legislation (whether national or international) multidisciplinary terms which could simultaneously relate to multiple fields. This brief analysis does not claim to be an exhaustive examination of the possible definitions of bias within each of the explored fields, nor does it discuss the nuanced meaning of bias in machine learning beyond the case of neural networks. The main reason for this is that ‘bias’ in machine learning encompasses elements of all the other definitions discussed above.
The second direction of the discussion concerns the need to implement proper interpretation and legislative drafting techniques, especially in the light of the merger between technical standards and legislation that we have seen in the EU over the past decade. To put it simply, multidisciplinarity in creating law sounds good on paper, and it might even be useful, but to be efficient, it requires a thoughtful process aligned with the rules of legislative drafting.
Applying these two discussions to the AI Act Proposal, before seeing the results of the Trialogue, we need to highlight a few issues. As discussed in the previous sections regarding the concrete interpretation of the term ‘bias’ in the future Regulation, the meaning is not as apparent as it may seem. This could, in principle, be mitigated through a legal definition, but even that could prove unsuccessful where the real meaning of the term differs depending on the provision. The quality of the legislative drafting in the AI Act is already poor. The long and confusing articles, combined with heavy language, not only make the provisions nearly impossible for the general public to read, but also challenge legal professionals. This, in my opinion, is one of the biggest weaknesses of this piece of legislation, because it could compromise legal certainty. The CJEU itself stipulates that the principle of legal certainty requires “that rules of law be clear, precise and predictable as regards their effects, in particular where they may have unfavourable consequences for individuals and undertakings.” The multidisciplinarity of the regulated field cannot and should not be a reason for such an undesirable effect. After all, as we understand law in the civil law tradition, it regulates the economic and social relationships between people and other recognised entities, contrary to the recently popular opinion that it regulates technologies themselves.