The lights have always been on inside the so-called AI black boxes. Shifting commercial expectations and increasing awareness of the technology are seemingly changing the narratives.
What if bias is actually fundamental to AI’s ability to work in the first place? Rules are imposed to protect people, refine results and achieve specific goals, but each rule can be seen as an introduction of bias. What needs to change is the implication that bias is inherently bad.
“At the end of the day, you’re building a discerning machine and in order to discern, you need a filter and that, fundamentally, is bias,” says Alix Rübsaam, Vice President of Research, Expertise, Knowledge at Singularity.
It’s also important to understand how to use bias in the right ways so that AI delivers relevant results, which means prioritizing, and understanding, what data has been used to train the algorithm and why.
“It’s weird that we feel so defensive when it comes to our biases because these are what help us discern. Humans have biases because it is how our brains make sense of things. And if we pretend our AIs don’t have biases, we can’t optimize them or improve their ability to discern,” she says.
One could say that any algorithm built to produce specific results must have bias. If the AI rejects the CV of a six-year-old applying for the role of CEO, that bias is relevant and useful.
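To make the idea concrete, here is a minimal sketch of a screening rule as a deliberate, documented bias. Everything in it is invented for illustration (the applicant, the role, the age threshold); the point is that the filter discerns, and the rejection reason keeps the bias auditable.

```python
# A deliberately biased screening filter, made explicit and auditable.
# All names and the age threshold below are hypothetical.

from dataclasses import dataclass

MIN_AGE_FOR_CEO = 18  # the "bias", stated as a named, reviewable rule

@dataclass
class Applicant:
    name: str
    age: int
    role: str

def screen(applicant: Applicant) -> tuple[bool, str]:
    """Return (accepted, reason); every rejection cites the rule behind it."""
    if applicant.role == "CEO" and applicant.age < MIN_AGE_FOR_CEO:
        return False, f"age {applicant.age} is below the minimum of {MIN_AGE_FOR_CEO}"
    return True, "passed all screening rules"

accepted, reason = screen(Applicant(name="Sam", age=6, role="CEO"))
print(accepted, "-", reason)  # False - age 6 is below the minimum of 18
```

The design choice is the point: because the bias lives in a named constant rather than in opaque training data, anyone reviewing the system can see it, question it and change it.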
As Rübsaam points out, AI solution builders need to move away from the knee-jerk insistence that algorithms should be bias-free and instead own up to the biases they have introduced, because those biases define the accuracy of the algorithm.
This then turns the conversation towards the concept of the AI black box – the ‘darkness’ that sits within the algorithms. IBM describes it as “an AI system whose internal workings are a mystery to its users”.
“For the last decade, some of the loudest voices in the AI space have always said [this]; that they don’t know why it does what it does,” says Rübsaam.
“AI has been given the data and trained itself and is now making decisions based on motivations that they cannot discern. This is just not true because there are always ways to either go in after the fact, or to build models that tell you why an AI does what it does.”
It has been technically possible all along – expensive and time-consuming, but possible. As Rübsaam points out, isn’t that the very purpose of AI?
“When you hear a problem described as complex, time-consuming and expensive, that’s exactly the type of problems we use AI for. It has never made sense to me that the AI black box has been such a predominant narrative.”
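One standard way to “go in after the fact” is permutation importance: shuffle each input feature in turn and measure how much a trained model’s accuracy drops. The sketch below, using synthetic data and scikit-learn, is just one well-known example of such post-hoc analysis, not the specific tooling Rübsaam describes.

```python
# Post-hoc explanation via permutation importance on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # three synthetic features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)  # feature 0 dominates by construction

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling a feature the model relies on causes a large accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # feature_0 should score highest
```

Run on this data, the analysis correctly reports that the model leans almost entirely on feature_0: exactly the kind of after-the-fact visibility the black-box narrative says is impossible.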
In 2019, Rübsaam, alongside Ty Henkaline, an AI and data science expert who worked with her at Singularity, developed a model that allowed users to train their own algorithms and surface all the decision-making that has to go into an AI before it even comes into existence.
It sheds some light on how human biases distort AI. People often don’t realize they’re making biased decisions until they sit down with their hands in the code, or look closely at how they select specific datasets.
“This can go wrong. A recent study assessed the accuracy of facial recognition algorithms across four demographic quadrants; the AI’s ability to detect whether a face was present was wildly skewed towards white male faces. The dataset given to the AI was not statistically representative.”
This use case underscores the importance of knowing what you’re training your AI on, and what the parameters of the dataset are. Now, transparency itself has become a commodity.
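A basic audit of a training set’s composition is one way to check those parameters before training begins. The sketch below is hypothetical: the metadata table, group labels, counts and the 20% representativeness floor are all invented to mirror the skew described above.

```python
# Auditing a (hypothetical) face dataset's demographic balance with pandas.
import pandas as pd

faces = pd.DataFrame({
    "group": ["white_male"] * 700 + ["white_female"] * 150
           + ["darker_male"] * 100 + ["darker_female"] * 50,
})

shares = faces["group"].value_counts(normalize=True)
print(shares)

# Flag any group that falls below an (arbitrary) representativeness floor.
FLOOR = 0.20
for group, share in shares.items():
    if share < FLOOR:
        print(f"warning: {group} is only {share:.0%} of the dataset")
```

A check this simple would flag the kind of skew Rübsaam describes long before any model is trained on the data.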
Over the past year, companies like Anthropic and Microsoft have turned the light on inside AI, reportedly finding clues to how large language models (LLMs) work for the ‘greater good’.
Meta released Llama 2, which Mark Zuckerberg described as an open-source model. DeepMind came out with an open weights model called RecurrentGemma, based on the Griffin architecture, and NVIDIA published a paper indicating that it would open not just the weights, but also the training data.
In the space of about six months, notable players in the industry started to increase transparency regarding their models.
“It’s an interesting time because I think we might finally be in a place to say maybe AI was never the black box we said it was; we just didn’t take the steps to do the analysis to actually discern it,” Rübsaam explains.
“It’s key to showing the bias [and] the weights in decision-making, which means we can improve the quality of AI and move away from universally applicable AI models.
“We can start thinking about how a particular solution matches the problem we’re trying to solve,” she concludes.