This is What AI “Looks Like”: Intuitive Visuals to Develop More Trustworthy Technology

Zetane’s presentation on new visual strategies to audit, regulate and understand the use of machine learning models in industrial operations.

Jason Behrmann, PhD
Zetane

--

Last autumn we posted about our presentation at the Montreal Connect conference. Here we share the recording of that presentation, which took place on October 12, 2021 (the conference recording is available here). Our Director of Marketing and Communications, Dr Jason Behrmann, had the opportunity to discuss novel visual strategies to audit, regulate and understand the use of machine learning models in industrial operations.

If you prefer to read, below you will find a transcript of the presentation.

Video transcript

Dave Kaufman

Good afternoon, and welcome to our afternoon session here at Printemps Numérique. I’m Dave Kaufman and I’m really excited to introduce our next guest, Jason Behrmann. He’s with us to talk about AI and “This is what AI ‘looks like’: intuitive visuals to develop more trustworthy technology.” Let me tell you a little bit about Jason.

Dr Behrmann is a marketing and communications specialist in Montreal’s artificial intelligence sector. He currently occupies a director role at Zetane Systems. He’s led marketing initiatives in AI start-ups in industries ranging from healthcare to agriculture. A longtime activist in the technology sector, he was the VP of the nonprofit QueerTech, and is a radio commentator and podcaster who informs the public about the influence of technology on sex and relationships. Jason completed his doctorate at the University of Montreal and a post-doctorate at McGill, where his research focused on assessing the social and ethical implications of new technologies in healthcare. Without any further ado, Jason Behrmann.

Jason Behrmann

Thanks for that great introduction, Dave.

So let’s go into a little preamble here of how the community of AI developers is trying to make the technology far more visual, and what kind of impact visual assessments can have on our ability to better regulate and develop trustworthy and safe AI technology. We’ve witnessed just recently the introduction of AI into a whole slew of consumer goods with much fanfare, ranging from optimized and tailored search algorithms all the way to recommendation systems in retail that know exactly what type of T-shirt you want to buy or the type of video that you want to watch. Now we’re really upping our game, where we want to incorporate AI into ever more complex and critical systems, from our power infrastructure to our judicial system. That’s great, because this opens up new opportunities for automation and ways for us to reap better efficiencies.

But with that being said, a lot of people are asking, “Is that a good idea?” Especially given how novel this technology is, can we really trust it when we embed it into these critical systems? And such trepidations aren’t just a lot of hot air, because there are multiple cases in the past where we introduced AI technology into critical or very complex systems, tried to scale it from a very controlled environment in the laboratory to the real world, and it was kind of a flop. And sometimes this actually did result in harms to specific people.

So going forward, we’re asking ourselves some questions: Well, what’s going on? And what’s the root of this? One of the main problems we have today is that the technology is really, really complex. In this new era of artificial intelligence known as deep learning, the neural networks or algorithms that we use to develop the models that produce the predictions are really complex. And they require highly specialized knowledge and training in order to understand what the heck is actually going on within these new technologies.

And this has given rise to something that we’d like to describe as the “black-box problem” in AI, where the algorithms, their internal workings, remain quite mysterious, even to world leaders in the field. So sure, we can put in inputs in the form of data, it gets processed through the algorithm and the model, and it gives us a highly accurate prediction. But what — how it actually establishes that output, or that prediction, remains rather nebulous.

So that’s not so cool if you want to incorporate AI into heavily regulated fields, such as healthcare or aviation. So right now, we’re really focused on developing better ways to regulate complex AI technology. And we’re trying to establish certain standards as to what kind of tests we need to do in order to ensure a reasonable level of trustworthiness and safety within AI technologies. So that’s great; what are some of the strategies that we’re trying to advance currently? There are quite a few.

And right now, I just want to emphasize that this is kind of a big deal in government regulations in general. For the first time, in 2020, we started to see the Government of Canada mention specific regulations for AI technology. For example, in the Digital Charter Implementation Act, they made an amendment so that we as consumers and everyday people can demand to know when a business is using an AI technology on us; that should be transparent. And we should also have the right to say to a business or a government entity that if an AI system produces a prediction or a recommendation or a decision on our behalf, we can ask them to explain how that algorithm or AI technology came to that conclusion on our behalf.

Well, with that being said, in this new era of regulation, we’re entering something called the “Explainable AI era” or xAI, where all kinds of stakeholders, ranging from the developers of the technology to the consumers that use it to the governments that are supposed to oversee it, are demanding that we produce more transparent, interpretable and explainable AI technologies. So what we need to develop right now is a way for diverse stakeholders — whether they be general members of the public, experts in the field, or government regulators — to better understand the internal workings of these algorithms, and open up opportunities for us to inspect them and therefore gain new ways to regulate them or assess their safety and trustworthiness. One way we’re doing this is with visual assessments: can we make these highly complex systems something far more intuitive that we can inspect with a human eye? And just as a disclaimer, this is what we do at Zetane Systems.

This kind of transition to something more visual is something that we see again and again in technology, especially when you want to scale it and introduce it into more consumer-based goods. Let’s take the example of the home computer. When the home computer first came out, the operating system was DOS, which was very abstract and not very intuitive. And home computers really, really took off with the public once we were able to move from that abstract, non-intuitive system to the visual, intuitive, clickable interface that we know today. So this is kind of what we want to do with artificial intelligence.

So can we take something that looks this abstract and this convoluted and turn it into something that’s more intuitive? Let’s look at examples of what we’re trying to do in the field today.

So here, what we’re looking at is a visual representation of a common object detection model. So what you’re looking at right now is visual inputs in the form of common consumer goods, namely, retail goods, such as shirts, or pants, or shoes, getting processed through an object detection model. And what you see with the internal elements in the model is how it is actually picking apart the image to figure out what the heck it is actually looking at.
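(For readers who want a sense of how this kind of internal inspection is typically wired up, here is a minimal sketch in PyTorch that captures intermediate feature maps with forward hooks so they can be rendered as images. The model, layer names and random input are illustrative stand-ins, not the actual retail-goods detector shown in the presentation.)

```python
# Minimal sketch: capture intermediate feature maps with forward hooks so they
# can be visualized. Model, layers and input are illustrative stand-ins.
import torch
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

feature_maps = {}

def capture(name):
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()
    return hook

# Register a hook on each convolutional stage we want to inspect.
for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(capture(name))

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed product photo
with torch.no_grad():
    model(x)

for name, fmap in feature_maps.items():
    # Shape is (batch, channels, height, width); each channel can be shown
    # as a grayscale image to see which patterns that layer responds to.
    print(name, tuple(fmap.shape))
```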

So I just want to show a still image here. To go over that once more: you have an image input in the form of, say, a pair of pants, and the model starts to pick it apart and break it down into different kinds of features, which you can see with the different coloured elements, to figure out that it is actually looking at a pair of pants. Well, okay, that’s great. That’s a lot of fancy images that we see here. So how do we translate this into conducting safety tests or better regulating the technology? Well, I’m going to give you an example here, which is one in medical imaging.

So we’ve seen examples where we’ve introduced AI technology into medicine, where we can automate diagnosis of certain types of scans in radiology, such as mammograms. Here’s an example of a project that was done by our postdoc at Zetane, Dr. Amiri. What we can look at here is a model called UNet that is ingesting images of lung CT scans. And we want to train the model to look at different elements within lung tissue to help us with making a diagnosis of lung disease. So let’s look at that in greater detail.
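(As a rough idea of what such a setup looks like in code, here is a minimal sketch that instantiates a UNet with the third-party segmentation_models_pytorch package and runs a single CT slice through it. The channel count, number of classes and input size are assumptions for illustration, not the configuration used in Dr. Amiri’s project.)

```python
# Minimal sketch: a UNet for CT-slice segmentation via segmentation_models_pytorch.
# All settings here are illustrative assumptions, not the project's real config.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",  # backbone for the contracting path
    encoder_weights=None,     # train from scratch on the CT data
    in_channels=1,            # CT slices are single-channel grayscale
    classes=2,                # e.g. healthy vs. diseased tissue
).eval()

ct_slice = torch.randn(1, 1, 256, 256)  # stand-in for a preprocessed CT slice
with torch.no_grad():
    mask_logits = model(ct_slice)       # per-pixel class scores, (1, 2, 256, 256)
print(mask_logits.shape)
```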

Now, when we started looking at some of these visual elements, we noticed that there was a problem with the original model that we trained. And what was that? Well, we noticed that up here, in many medical images, there can sometimes be elements added to the image in order to protect patient information, such as the patient’s name and their diagnosis. So here’s an example where a white label was put at the top corner of a lung scan. And when we inspected the internal layers of the AI technology, we noticed that it was putting a lot of emphasis on this top left corner when making its diagnosis. Now, that should make you scratch your head a little bit, because what the heck does the label in the top left corner have to do with lung disease?

So this is just one example. But problems like this in medical data sets are actually quite common, and we’ve seen them in the literature multiple times. So what we’re asking ourselves is: can we identify these problematic elements early on? And then, after doing an intervention, namely removing them, can we visually verify that we improved the AI system, in a way where we can say, “Oh yeah, I see how we’ve made a significant improvement; now I have more trust in this technology”?

So yes, here we go. Here’s an example where we see that the irrelevant information in the image is causing a distraction in the algorithm. And then once we remove it, you can see that we redirect the attention of the algorithm away from the irrelevant components of the image towards more relevant elements, namely the internal organs in the chest cavity.
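(One simple form that intervention can take in code: blank out the fixed corner region where the burned-in label sits before the scan ever reaches the model. The coordinates and file name below are assumptions for illustration, not the actual preprocessing used in this project.)

```python
# Minimal sketch: remove a burned-in patient label from the corner of a scan
# before training or inference. Region size and file name are assumptions.
import numpy as np
from PIL import Image

def mask_corner_label(scan: np.ndarray, height: int = 60, width: int = 180) -> np.ndarray:
    """Replace a fixed top-left region with the image mean so the model
    cannot latch onto the label instead of the lung tissue."""
    cleaned = scan.copy()
    cleaned[:height, :width] = scan.mean()
    return cleaned

scan = np.array(Image.open("lung_ct_slice.png").convert("L"), dtype=np.float32)
cleaned = mask_corner_label(scan)  # feed `cleaned` to the model instead of `scan`
```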

So that’s cool. Let’s look at another example here. This is another object detection computer vision project. We were developing this for autonomous trains, and what we’re using here is a common explainable AI or “xAI” tool called Grad-CAM. What you see here are heat maps. What they do is show you where the algorithm is placing a lot of its attention in order to determine what it is actually looking at.

So here is a close-up still image. And yes, we can see that our algorithm was successful in identifying that an object on the train tracks, namely the boulder, is a problematic element for the autonomous train. But on closer inspection, we see that it is also flagging the tunnel opening, which the algorithm just sees as black pixels and interprets as a solid object; it doesn’t understand that the tunnel is actually something for the train to go through. So with a visual assessment like this, we understand that we really need to improve our technology by informing the algorithm that this is actually not a solid object like the boulder; it’s a tunnel.
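(For the curious, Grad-CAM itself is short enough to sketch by hand. Here is a minimal PyTorch version that produces the kind of heat map shown above; the model, target layer and image file are illustrative stand-ins rather than the autonomous-train project’s actual code.)

```python
# Minimal Grad-CAM sketch in PyTorch. Model, target layer and image are stand-ins.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights="IMAGENET1K_V2").eval()
target_layer = model.layer4[-1]  # last convolutional block

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o.detach()))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0].detach()))

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("track_scene.jpg").convert("RGB")).unsqueeze(0)  # hypothetical image

scores = model(img)
scores[0, scores.argmax()].backward()  # gradient of the top-scoring class

# Weight each feature map by its average gradient, combine, and keep positive evidence.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=img.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
# cam[0, 0] can now be overlaid on the input image as the heat map.
```

The bright regions in the resulting map mark the pixels the model leaned on most for its top prediction, which is how both the boulder and the tunnel opening light up in the example described above.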

Okay, so this, what I gave you here is a very, very brief snippet of, like, what’s happening in the field. This is a booming area of development, where we’re really, really pushing forward to make AI more intuitive and more visual. And you could see many, many other examples in the academic research today, where there’s now like reviews-of-reviews, or surveys-of-surveys of the large body of new findings that have come about within, like, the past five years.

And another area that is quite interesting to note is that some of these explainable AI features that I showed you, like the last one, Grad-CAM, with the boulder on the train tracks, are freely available online. And they’re open source, which means that a lot of people within the AI field are collaborating today to develop these tools, put them out there, and make them even more intuitive. And therefore the pace of innovation with these tools is quite fast. So that gives us a lot of hope going forward.

Well, that’s a lot of hope. But the field right now is really, really limited. The examples I gave you today focus on computer vision, which makes sense, because our computer vision data sets are typically in the form of images and videos, which are very visual. So by better assessing how the algorithms are interpreting these visual elements, well, we can also do that with our own eyes to get a pretty good idea. But that’s only one sector of AI.

What about all the other AI technologies that focus on other kinds of classes of data, such as audio-based data, or natural language processing data that’s quite often in the form of, like, text? So there, we’re still developing, like, the tools and also the strategies.

And also what I showed you today with, like, inspecting, like, the internal layers of a model and those feature maps — what I showed you was, like, the easiest, most overt issues that you could identify just with the human eye by doing an inspection. I assure you that there are a lot more complex issues that can happen within a complex model that are way less easy for you to identify just with your naked eye.

So that also is just, like, part of the whole discussion of what I explained to you here today, which was focusing on, like, the neural network, or the algorithm and the final trained model and its outputs.

There is a new discussion now where people are saying, “we put so much emphasis on studying the model and its outputs,” that maybe we need to put more emphasis towards what goes into the whole development of the AI technology, namely the data.

And so we’re entering what is known as a data-centric approach to the development of AI that has been promoted by many leaders in the field, such as Andrew Ng.

So with that being said, what does that mean? Well, today, a lot of our efforts in terms of visualization and explanation have focused on this latter half, namely this black-box algorithm problem. But many people are saying, “Well, maybe if we put more emphasis on ensuring that the data is of higher quality, that it is properly labelled, that you have a data set with enough entries in it for training, and also that you have defined ways to ensure the data is actually more representative of the real world; well, if you have all that high-quality data going into the development of the AI technology, a more trustworthy, safer AI technology should be the natural outcome of such efforts.” And therefore we can maybe put less emphasis on inspecting the internal workings of the technology.
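(In practice, a data-centric workflow often starts with very plain checks on the data set itself. Here is a minimal sketch of such an audit; the CSV layout with “image_path” and “label” columns is an assumption for illustration.)

```python
# Minimal sketch of a data-set audit: label completeness, duplicates and class
# balance, checked before any model is trained. The CSV layout is an assumption.
import pandas as pd

df = pd.read_csv("training_labels.csv")

missing = df["label"].isna().sum()                # entries with no label at all
duplicates = df["image_path"].duplicated().sum()  # the same image listed twice
class_counts = df["label"].value_counts()         # how balanced the classes are

print(f"{missing} missing labels, {duplicates} duplicate images")
print(class_counts)
# A heavily skewed class_counts table is an early warning that the data set may
# not be representative of what the model will face in the real world.
```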

So all in all, we’re in an exciting era right now. Like yes, AI is opening up a whole bunch of new opportunities. And yes, we really, really want to introduce it into ever more complex systems. But we’re really just at the beginning of understanding its fundamental workings. And also, like, how best to implement it so that it’s, like, the safest technology for society in general.

So even though we’ve made a lot of advancements in terms of developing visual representations of AI, they’re still really, really technical. And if you are not an AI expert, it’s not exactly the most approachable form for you to do fundamental assessments on safety, and quality, and trustworthiness. We’re getting there. We’re still learning how to adapt these kinds of visual interpretations in a way that is best suited for a more general audience, industry specialists, and also government regulators.

And we’re really just at the beginning right now of developing standard practices so that you can make like a side-by-side comparison, for example of, like, a visual representation of an AI technology developed by “Company A”, versus an AI technology made by “Company B”. So that’s some of the developments we need to look for going forward.

So, that’s it. So, thank you so much. If you have any other questions, please feel free to contact me at Jason at Zetane dot com or reach out to me on LinkedIn or on Twitter.

Dave Kaufman

Jason, thank you so much. I put it in the chat: if anybody has a question, we have a couple of minutes to get to it. We’ll just wait a few seconds and see if anybody pops a question into the chat. But that was really interesting, Jason. I thought that the lung disease example, showing the AI and its potential pitfalls, shows us that, like you said, we’re not quite there yet. And that says to me that I would still feel a lot more comfortable with a human reading my results than trusting it to AI, but from what you said, it’s a matter of months or years, right? I mean, this is so close to being achievable.

Jason Behrmann

Um, one of the best applications of AI to date has been in the healthcare system, in the field of radiology. And it’s been proven that AI assessments for the diagnosis of breast cancer, for example, now with the most cutting-edge versions of this technology, are better than a radiologist. So with that being said, we can show concrete research that AI technology can be better than some clinicians at diagnosing you with some pretty scary diseases out there. And we can introduce a whole bunch of new efficiencies and automation into healthcare. But what I really thought was interesting is that even though this is well known, you still said, “You know, I, as a person, don’t feel like it’s very trustworthy,” right? And we still need to get over that hurdle where, even though a computer could be better than a human being, we still don’t trust it. And that’s why I keep emphasizing this word, not necessarily ‘safety,’ but the trustworthiness element to it. That’s why we need these more approachable ways to represent the technology. So even you as a patient: a clinician can go, “Hey, the AI diagnosed you with this; I looked at what kind of diagnosis it gave; I concur. Sit down with me for a second and I’m going to show you how it actually processed your lung scan to give you this kind of diagnosis. And I can actually show you something visual where you, even as a non-expert, can kind of get an idea as to what the heck’s going on.” And we’d really like to promote that strategy, because we feel that the visual element is the most intuitive for human beings, and it gives people a lot more confidence.

Dave Kaufman

Yeah, I think I would feel a lot more confident in that case. And also, if the doctor were to say to me, “When I diagnose, I have an accuracy rate of 92%, but this algorithm has an accuracy rate of 98%, so we’re going to put both of our heads together, and we’re going to show you what we see.”

Jason, thank you. That was really, really interesting. I don’t believe there are any questions from the audience. So we will end it here. Thank you so much, and thank you, everybody.
