Seatbelts for Cars, Audits for Algorithms

 Dr. Gemma Galdón-Clavell and her team at Eticas are mitigating the impact of biased and discriminatory algorithms on consumers through innovative audits.  

Dr. Gemma Galdón-Clavell, CEO and Founder of Eticas

Barcelona, Spain


Dr. Galdón-Clavell, Ashoka Fellow and CEO of Eticas in Spain, became fascinated by the rise of technology in everyday life when she was a PhD student studying urban sociology back in 2010. She was particularly concerned with surveillance technologies targeting public spaces. As Dr. Galdón-Clavell focused her research on the use of cameras, sensors, and CCTV for public security, she realized the significant harm these technologies could cause society if they were misused, and she decided to devote her career to safeguarding society from harmful or exploitative technology. As the world rapidly digitalized, Dr. Galdón-Clavell’s field of work exploded. With expertise at the nexus of sociology and technology, she worked as a principal investigator on European Commission projects, including “smart borders,” and served as coordinator of the Barcelona office of the United Nations Institute for Training and Research.

Today, Dr. Gemma Galdón-Clavell is a well-known leader in the field of technology ethics and has even pioneered a new sector focused on a process that plays an increasingly consequential but easily overlooked role in our lives: the algorithm.

Algorithms are step-by-step procedures that produce an end result. Just like in a recipe, ingredients are combined according to the instructions to achieve the desired result: a meal.

Dr. Galdón-Clavell is a leader in the field of technology ethics and has pioneered a solution for biased and discriminatory algorithms. Photo by Eticas.

In technology, the ingredients are data, the instructions are algorithms, and the meal is a decision. But unlike a recipe, an algorithm can recognize patterns in past data and base future decisions on them. Spotify’s Discover Weekly playlists, for example, use algorithms to learn the patterns of music each user prefers and then make a decision in the form of a personalized playlist built from that data.
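To make the recipe metaphor concrete, here is a minimal sketch in Python. It is an invented toy, not Spotify’s actual system: the data is a user’s listening history, the instructions are a simple counting rule, and the decision is a genre recommendation.

```python
from collections import Counter

def recommend_genre(listening_history):
    """Toy 'Discover Weekly'-style rule: the data is a list of
    (song, genre) plays; the decision is the most-played genre."""
    plays_per_genre = Counter(genre for _, genre in listening_history)
    # The 'instructions': pick the pattern that appears most often.
    return plays_per_genre.most_common(1)[0][0]

history = [("Song A", "jazz"), ("Song B", "jazz"), ("Song C", "pop")]
print(recommend_genre(history))  # -> "jazz"
```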

But as the world continues to digitize, algorithms are being used to make ever more far-reaching decisions, often on the basis of inaccurate, biased, or incomplete data. For example, police departments use predictive software to anticipate crime more efficiently, but because of bias and incomplete data, one algorithm that was supposedly “race-neutral” targeted Black neighborhoods at twice the rate of white neighborhoods. In another example, the Department for Work and Pensions (DWP) in the United Kingdom, the body responsible for providing disability benefits, relied on corrupted and biased data, leading to thousands of disabled citizens losing coverage and £100 million spent on appeals.

Because algorithms operate by collecting data and recognizing patterns on which future decisions will be based, it is essential to understand who and what appear most often in the data, and how trustworthy that data is. For example, when women or people of color are underrepresented in a database, algorithms trained on that database can categorize those populations as “less likely” or “riskier.”

Consequently, algorithms do not merely mirror perceived realities; they can also amplify them. In a deeply unequal world, the consequences can be severe. For example, Dr. Galdón-Clavell explained that when a woman applies for a mortgage, the training data leads the system to treat her as a riskier client, because women historically have had less access to banking systems and are therefore less represented as loan recipients in the database.

As long as there are fewer women in banking databases, they will be understood as riskier clients and may be denied loans, even though women statistically have a better record than men of paying back loans. Such biased risk calculations often disproportionately affect women and minority groups. Dr. Galdón-Clavell warned, “Engineers are designing systems that make life-changing decisions without understanding the breadth of their social impact.” She continued, “The layers of discrimination we have found are amazingly massive.”
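The feedback loop Dr. Galdón-Clavell describes can be illustrated with a small hypothetical example. In the sketch below, the applicants, the naive_risk_score rule, and all numbers are invented for illustration; they are not drawn from any real lender or from Eticas’ audits. The point is that a rule equating a thin file with risk penalizes any group underrepresented in the historical data, even one with a better repayment record.

```python
# Hypothetical applicants: (group, prior loans on file, past repayment rate)
applicants = [
    ("men",   12, 0.90),   # well represented in the historical database
    ("women",  2, 0.96),   # underrepresented, despite repaying more reliably
]

def naive_risk_score(prior_loans_on_file):
    """Invented rule: a thin file (few records in the database) is
    treated as high risk, regardless of actual repayment behavior."""
    return 1.0 / (1.0 + prior_loans_on_file)

for group, n_loans, repaid in applicants:
    score = naive_risk_score(n_loans)
    print(f"{group}: risk={score:.2f}, actual repayment rate={repaid:.0%}")
# Output: women receive roughly four times the risk score of men
# (0.33 vs 0.08), even though their repayment record is better.
```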

In response to this issue, Dr. Galdón-Clavell established Eticas to engineer a solution: algorithmic audits. Audits are easy-to-understand reports that assess how a given algorithm operates and makes decisions. Outlining an algorithm’s decision-making criteria is an effective way to assess whether its decisions incorporate bias. With a working solution to the problem of biased algorithms, Dr. Galdón-Clavell is now changing the mindset in tech that regulation limits innovation and building public demand for auditing at every level.
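What might one check inside such an audit look like? One widely used fairness test, offered here as a general illustration rather than Eticas’ actual methodology, is the disparate impact ratio: compare the rate of favorable decisions across groups and flag the system when a disadvantaged group’s rate falls below roughly four-fifths of the advantaged group’s. A minimal sketch with made-up loan decisions:

```python
def disparate_impact_ratio(decisions, group_a, group_b):
    """Ratio of favorable-outcome rates between two groups.
    decisions: list of (group, approved) pairs."""
    def approval_rate(group):
        outcomes = [ok for g, ok in decisions if g == group]
        return sum(outcomes) / len(outcomes)
    return approval_rate(group_a) / approval_rate(group_b)

# Invented loan decisions: (applicant group, approved?)
decisions = [("women", True), ("women", False), ("women", False),
             ("men", True), ("men", True), ("men", False)]

ratio = disparate_impact_ratio(decisions, "women", "men")
print(f"disparate impact ratio: {ratio:.2f}")   # -> 0.50
if ratio < 0.8:  # the common 'four-fifths' rule of thumb
    print("Flag for review: possible adverse impact on women.")
```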

Establishing Audits as the Standard

Spain and most other European countries have robust privacy and anti-discrimination laws which, in theory, should protect individuals from discrimination by artificial intelligence. However, Dr. Galdón-Clavell explained that enforcement and implementation are lacking, and many tech companies avoid complying until court processes force them to do so. Courts move slowly, leaving open a window of opportunity for unchecked, unaudited, and biased algorithms to harm minority communities. Additionally, Dr. Galdón-Clavell noted that the status quo in tech rejects audits and other safety measures due to the idea that these processes stifle innovation.  

Dr. Galdón-Clavell said, “We have always monitored technology. Everything in a supermarket, in a shop, or at a pharmacy has gone through a process to ensure that whatever hits the market is safe for consumption except for [digital] technologies. For some reason, [the tech industry] has convinced us that anything we try to do, such as overseeing or asking questions, is a form of limiting innovation.” 

Partial example of one of Eticas’ auditing codes.

The Eticas team is shifting the dominant attitude from apprehension to excitement and curiosity, one audit at a time, by proving that when companies, institutions, and governments go through these auditing measures, they ensure that their technology avoids the biased or incomplete data that would harm both society and the organizations themselves.

Thus far, Dr. Galdón-Clavell and her team have audited ten algorithms for governments, banking systems, think tanks, and healthcare companies, affecting tens of thousands of people in the United States, Europe, and Latin America. For example, Eticas audited a Pennsylvania county’s system for predicting the risk of homelessness to ensure that the technology did not rely on biased data. Eticas has also audited healthcare companies’ mobile applications and hospitals’ triage and intake systems to make sure that such consequential decisions were made with complete, high-quality datasets. Eticas is currently auditing systems for the Chilean government as well.

Additionally, Eticas created the first searchable database of algorithms used by governments and companies across the world: the Observatory of Algorithms with Social Impact (OASI).

OASI highlights these algorithms’ worldwide social impact on fields such as healthcare, education, policing, and more. Eticas reports that OASI has already become the major repository of algorithms with social impact. 

Dr. Galdón-Clavell offered the following metaphor: “We are trying to be the seatbelt of the Artificial Intelligence (AI) world.” She stated, “We will soon be looking back at 2020 and thinking, ‘Oh, my God, we had all these algorithms, and we didn’t audit them. That was crazy.’” Audits are precautionary measures that assess impact; far from impeding progress, they protect consumers’ safety and rights.

A New Way of ‘Doing Tech’

Dr. Galdón-Clavell and her team at Eticas are not only promoting algorithmic audits but also wielding technology as a force for good, prioritizing transparency and accountability over profit and exploitation. Eticas partners with labor unions, schools, human rights organizations, and think tanks to better understand how technology will shape the future of work, children’s education, migration and borders, and data and politics. Eticas reports that it has already transformed labor relations in Spain in many fields, including delivery platforms, by supplying tools for algorithmic transparency and pressing for their adoption.

 Dr. Galdón-Clavell is influencing the entire system through her advisory roles, including consulting lawmakers on how to create a National Agency of Supervision of Artificial Intelligence in Spain, serving as a member of the EIT Health advisory board, and advising the European Commission as a senior expert. 

Eticas also transformed its Barcelona office into a Civil Tech Lab that aims to promote practical solutions to technology challenges and other initiatives fostering new ways of “doing tech.” 

In the meantime, Dr. Galdón-Clavell is building bridges between the tech world and ethics—two highly related but often distant spaces—to ensure that the relationship between people and technology is symbiotic, not exploitative. 

Eticas also wants to hear from the public, the people affected by the misuse of technology. When has “bad” data, whether inaccurate or biased, impacted your life? Comment on the article and send your story to Eticas for the chance to receive a cash reward while contributing to solutions that fight bad data.

By Audrey Lodes
