
AI and Biases – an Inquiry into ‘Truth’

November 30, 2023


Bhrigu A. Pamidighantam and Arnab Bose

[1] Author details:

Bhrigu A. Pamidighantam is an Advocate practicing at the Delhi High Court and is an alumnus of Jindal Global Law School, Class of 2021 (main contributor)

Arnab Bose, PhD, is Senior Fellow, Center for the Study of United Nations, Jindal Global Law School, O.P. Jindal Global University (edits and guidance)

“A thing is a thing, not what is said of that thing” – Riggan Thompson, Birdman

It is the year 2023, roughly six years after the James Damore v. Google Inc. incident – in which Damore brought a class-action lawsuit against Google for firing him (and others) over an “unsavoury memo reinforcing gender stereotypes in Silicon Valley” – and ‘political correctness’ activists and ‘free speech’ absolutists seem more pitted against each other than ever before. Coincidentally, around the time of the Damore incident, Princeton University published research showing that, like its human creator, Artificial Intelligence (hereinafter “AI”) is just as susceptible to biases as humans are – including, but not limited to, those attributable to sensitive variables such as gender, race and socio-historic inequities. Put another way, AI acts as a mirror. And sometimes we don’t like the face staring back at us.

At this intersection of ‘socio-technological’ issues, it is pertinent to note that the purpose of this project is not to determine whether these ‘biases’ are “positive” or “negative”, but rather to critically analyse how best to eliminate or diminish them and to devise an ideal, ‘objective’ way of determining the ‘truth’.

Bias can creep into algorithms in several ways. In most cases, AI systems learn to make decisions from training data, which can include biased human decisions or reflect historical or social inequities, even if sensitive variables such as gender, race or sexual orientation are removed. Another source of bias is flawed data sampling, in which groups are over- or under-represented in the training data. In the recent past, these biases have translated into incidents of racial profiling, gender discrimination and linguistic bias, among others. Tech companies, seeing this as “highly problematic”, endeavoured to keep these biases from creeping into AI systems by altering the training data-sets to be more ‘racially inclusive’, ‘gender-neutral’ and ‘politically correct’ in general – their vision of an AI reflective of the most “just”, “inclusive” and “fair” form of intelligence.
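To make the historical-data and sampling problem concrete, the sketch below (ours, not drawn from any study cited in this piece) trains a simple classifier on synthetic “hiring” data in which one group was historically penalised; even after the sensitive variable is dropped, the bias resurfaces through a correlated proxy feature. All feature names, numbers and thresholds are invented for illustration.

```python
# Hypothetical sketch: biased historical decisions leak into a model through a
# proxy feature, even after the sensitive variable itself has been removed.
# All data, feature names and numbers are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

group = rng.integers(0, 2, size=n)             # sensitive variable (0 or 1)
skill = rng.normal(size=n)                     # the legitimate signal
proxy = group + rng.normal(scale=0.3, size=n)  # a postcode-like correlate of group

# Historical labels: nominally based on skill, but group 1 was systematically penalised.
hired = (skill - 0.8 * group + rng.normal(scale=0.3, size=n) > 0).astype(int)

# The sensitive variable is dropped, yet the proxy remains in the training data.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# At identical skill, predicted hiring probability still differs by group,
# because the model has learned to use the proxy as a stand-in for group.
test_skill = np.zeros(2)
test_proxy = np.array([0.0, 1.0])   # typical proxy values for group 0 and group 1
probs = model.predict_proba(np.column_stack([test_skill, test_proxy]))[:, 1]
print("P(hired | skill=0, group-0-like proxy):", round(probs[0], 3))
print("P(hired | skill=0, group-1-like proxy):", round(probs[1], 3))
```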

The Google Brain team gave the following justification:
“One perspective on bias in word embeddings is that it merely reflects bias in society, and therefore one should attempt to debias society rather than word embeddings. However, by reducing the bias in today’s computer systems (or at least not amplifying the bias), which is increasingly reliant on word embeddings, in a small way debiased word embeddings can hopefully contribute to reducing gender bias in society.”

In other words, the Google Brain team is of the view that by creating AIs, chatbots, machine translators, inter alia, that genuinely treat nurses as being as likely to be men as women, these algorithms will generate “unbiased” results that can then, in a small way, change society itself.
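The intervention alluded to in the quotation can be illustrated, in stripped-down form, by the ‘neutralise’ step used in word-embedding debiasing: estimate a gender direction from a definitional pair and project it out of words that ought to be gender-neutral. The three-dimensional toy vectors below are our own invention; real embeddings have hundreds of dimensions, and the full procedure involves further steps.

```python
# Minimal sketch of the 'neutralise' step used to debias word embeddings.
# The tiny 3-dimensional vectors are made up for illustration; real embeddings
# (word2vec, GloVe, etc.) have hundreds of dimensions.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

# Toy embeddings (illustrative values only).
vec = {
    "he":    np.array([ 0.9, 0.1, 0.2]),
    "she":   np.array([-0.9, 0.1, 0.2]),
    "nurse": np.array([-0.5, 0.6, 0.3]),   # leans towards 'she' in this toy space
}

# 1. Estimate a gender direction from a definitional pair.
g = unit(vec["he"] - vec["she"])

# 2. Neutralise: remove the component of a gender-neutral word along that direction.
def neutralise(w, direction):
    return w - np.dot(w, direction) * direction

nurse_debiased = neutralise(vec["nurse"], g)

def similarity(a, b):
    return float(np.dot(unit(a), unit(b)))

print("before:", similarity(vec["nurse"], vec["he"]), similarity(vec["nurse"], vec["she"]))
print("after: ", similarity(nurse_debiased, vec["he"]), similarity(nurse_debiased, vec["she"]))
# After neutralisation, 'nurse' sits equidistant from 'he' and 'she'.
```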

This is where things get deeply problematic. In order to move closer to the utopian world envisioned by a select few, the AI is programmed in a manner that takes us further away from reality. What these developers fail to realise is that, in trying to artificially eliminate one set of biases, another set of biases is inadvertently created. The resolute focus on creating a smarter and more efficient AI has shifted somewhat towards creating an ideologically purified AI deployed to fight cultural and ideological wars. This is the equivalent of a government building a wall to hide the slums.

A brutally realistic and efficient AI is, therefore, miles ahead of a ‘politically correct’ one programmed to deal in delusions of grandeur. In both cases, biases will continue to remain. What may be done, however, is to program the AI in a manner whereby it scrutinises data as objectively as possible.

The jurisprudence on ‘objectivity’ – in both law and administration – is of great concern to legal systems globally. The Western model (applied across most Commonwealth nations) of the ‘inquiry into truth’, in particular, was predominantly guided by Western philosophy and epistemology, which were pivotal to the articulation and development of Evidentiary Law and its jurisprudence.

However, whilst Western philosophy and law developed systems of ‘logic’ as an end in themselves, the classical Indian schools of epistemology and their logic systems differed in treating ‘truth’ as the end-goal – a difference reflected in the logic systems of the ‘Shastras’, or legal texts.

The elimination of biases in order to reach the ‘truth’ is, however, not a problem unique to AI or to this project. In fact, it is one that plagued law and justice systems for centuries before they developed the principle of the bonus paterfamilias, or the test of the ‘reasonable man’. In law, the proof of a fact is the belief in the existence of such a fact by a prudent person. The standard, therefore, is that of a reasonable man – any deviation from which may be classified as a bias. The challenge for AI is, therefore, to approximate the perception of a ‘reasonable man’.

Under classical Indian epistemology, the aforementioned science of an ‘inquiry into truth’ was developed in great detail in the Nyaya Shastra and the Vaisheshika Shastra authored by Gautama and Kanada respectively.

Figure 1: The Inquiry into Truth (diagram omitted; its labels include Yathartha – true knowledge)

Source: authors’ own rendition from the Nyaya Shastra and the Vaisheshika Shastra

The power of discrimination – the capacity to distinguish between the correct and the incorrect, right and wrong, true and false – was defined as Buddhi, or ‘intelligence’, which was bifurcated into Smruti, viz. ‘memory’, which arises out of past impressions, and Anubhava, viz. ‘experience’. For the project at hand, only Anubhava is of relevance vis-à-vis AI. Anubhava is further divided into ‘yathartha’ (true and accurate knowledge) and ‘a-yathartha’ (false and incorrect knowledge). ‘Yathartha’ – the ‘true’ knowledge – is acquired through certain clear and distinct means classifiable as Pratyaksha, Anumana, Upamana and Shabda, each of which is explained below with regard to its role in the detection of ‘truth’ vis-à-vis AI.

Pratyaksha is defined as the knowledge acquired through the sense organs – the visual, auditory, olfactory and tactile. Traditionally, the experiences a human receives through these sense organs were deemed true knowledge. In AI, scientific advances have been made on each front: the visual through computer vision; the auditory through text-to-speech and speech-to-text; the olfactory through the stimulation of olfactory receptors (ORs); and the tactile through haptic sensors and touch-based input devices. Anumana, the inference drawn by logical deduction, is found in AI in ‘unsupervised learning’ – a machine-learning technique that finds unknown patterns in data. Upamana is defined as the knowledge acquired through the comparison of like and unlike things; it is found in ‘supervised learning’, whereby inputs are mapped to outputs on the basis of example input–output pairs. Among these four, the learning mechanism corresponding to Upamana remains the most susceptible to human biases creeping in. Shabda is a scientifically proven statement of an expert or trusted authority; Natural Language Processing in AI best reflects this.
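The Anumana/Upamana contrast drawn above can be sketched in code: an unsupervised model infers structure from the data alone, whereas a supervised model is fitted to human-supplied labels – the very channel through which human judgements, and hence human biases, enter. The data, labels and library choices below are illustrative assumptions, not part of the original argument.

```python
# Sketch of the Anumana / Upamana contrast described above, using scikit-learn.
# Anumana ~ unsupervised learning: patterns inferred from the data alone.
# Upamana ~ supervised learning: input-output pairs supplied by humans, which is
#           where human judgements (and hence human biases) enter the system.
# All data and labels are synthetic and purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Two synthetic clusters of points in 2-D.
X = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[3, 3], scale=0.5, size=(100, 2)),
])

# Anumana-like step: no labels are given; structure is inferred from the data itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Upamana-like step: a human supplies example input-output pairs; the model then
# imitates those judgements (bias and all) on new inputs.
human_labels = (X[:, 0] + X[:, 1] > 3).astype(int)   # a human-chosen rule
classifier = LogisticRegression().fit(X, human_labels)

print("inferred clusters (first 5):", clusters[:5])
print("imitated human labels (first 5):", classifier.predict(X[:5]))
```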

The risk posed by a deliberately altered AI is much worse than that posed by a brutally honest one – and, in some sense, the latter is also more in consonance with Asimov’s Three Laws. If science fiction has taught us anything, it is that AI can play you for a fool, and distorting word-embedding models is just one method through which AI can be subtly bent to serve the agenda of its creators. This is where the dire need for objectivity in AI plays such an important part.

For PDF, please write to arnab@jgu.in
