Women in Big Data - Podcast: Career, Big Data & Analytics Insights

1. Responsible Artificial Intelligence - A Talk With Virginia Dignum (Umeå University)

Help To Grow Talk Episode 1


Listen to how you can develop and use responsible AI, including the role of diversity & inclusion, with experts Virginia Dignum, Professor of Responsible Artificial Intelligence at Umeå University, and Valerie Zapico, Managing Partner at Valkuren. Your podcast host is Desiree Timmermans, Director at Data Innovation Talk.


1. Topics

Responsible AI 

  • What is responsible AI, why is it important, and what is the role of diversity & inclusion?


Responsible AI: Human and Environmental Well-Being

  • What can we do to ensure the positive use of AI, meaning contributing to human and environmental well-being?


The Future of Responsible AI

  • What do you think the future of responsible AI will look like?



Subscribe to the Women in Big Data Brussels Podcast by copying and pasting the following URL into the podcast app of your choice: https://feeds.buzzsprout.com/1985853.rss





Mentoring Program - Women in Big Data
Mentoring is essential to success at every stage of a woman's career, both as a mentee and as a mentor. The many WiBD mentoring programs are open to WiBD members and cover opportunities for junior, mid-career, and senior women in technology. Not yet a member? No worries. By joining a mentoring program, you automatically become a WiBD member. Both membership and mentoring are free of charge.


Website: Women in Big Data Podcast
LinkedIn: Follow - Women in Big Data
LinkedIn: Follow - Women in Big Data Brussels
Contact us: datawomen@protonmail.com

Note: Podcast transcription edited to improve readability.

Desiree Timmermans  00:03

Hello, welcome to the Women in Big Data Brussels Podcast where we talk about big data topics with diversity and inclusiveness in mind. We do this to inspire you and to connect, engage, grow and champion the success of women in big data. The aim of this podcast is to reveal to you what you can do with big data, how organizations and societies use it, and the potential of big data to create a better future for everyone.  

Desiree Timmermans  00:30

In this first episode, we kick off with Virginia Dignum, Professor of Responsible Artificial Intelligence, and Valerie Zapico, Managing Partner at Valkuren. The topic is responsible AI. What is responsible AI, and why is it important? What is the role of diversity and inclusion in achieving responsible AI? What can you do to ensure the positive use of AI, meaning contributing to human and environmental well-being? And we talk about the future of responsible AI: what will it look like? My name is Desiree Timmermans, Director at the Virtual Data Innovation Summit and your podcast host. Let's start.

Desiree Timmermans  01:13

Welcome, Virginia and Valerie, to this podcast about responsible AI. Maybe we can start with you briefly introducing yourselves. So, Virginia, if you can start.

Virginia Dignum  01:24

Hello, I am Virginia Dignum. I am a Professor of Responsible Artificial Intelligence at Umeå University in Sweden. I have also been involved in many international initiatives on developing, designing, and setting up guidelines and definitions for responsible AI, for instance with the European Commission, UNICEF, UNESCO, the World Economic Forum, and many other organizations.

Desiree Timmermans  01:51

Thank you very much. And Valerie, can you briefly introduce yourself?

Valerie Zapico  01:55

So, I am Valerie Zapico. I manage Valkuren, a company specializing in big data solution development with data analytics and AI. I am also the Brussels Leader of Women in Big Data and part of the AI4Belgium task force for AI learning / AI training.

Desiree Timmermans  02:18

Okay, thank you very much. It is good to have you. So let's go to the first question. What is responsible AI, and why is it important, Virginia?

Virginia Dignum  02:28

Wow, that is a big question. But a very important one, indeed. I think the main idea behind responsible AI is the realization that AI technology can do a lot for us: people and society in general. But it is a technology that doesn't come without risks. It is a technology that is new, of course, and the way we use it is heavily based on identifying patterns in data. And that means: the data that we are using to develop this, as the basis for the reasoning and the decisions that an AI system makes, has a huge influence on what the AI system can or cannot do. So if the data is incomplete, incorrect, outdated, or refers to a different region, and we are using that data to make decisions, of course, things do not go correctly. If we only show an AI system information about dogs, we cannot expect the AI system to say something useful about cats. And so the idea is that we can do a lot with AI, and increasingly there is an opportunity to do it.

Virginia Dignum  03:45

Responsible AI starts with a question: should we use AI in this specific situation? And this question is fundamental. It also involves power issues: who is involved in answering the question? Those who can answer 'yes, we are going to use AI to develop the system' are the ones who will ultimately direct where the system is going. What we see nowadays is that this question is not really being answered in an inclusive and generalized way. It is a question that is most times answered by private companies, by those that have the money and the power to answer it. So, responsible AI is also about ensuring that those who are potentially affected by the systems can participate in the discussion and in the decisions: are we going to use AI to provide better education? Are we going to use AI to determine who gets credit or not? Are we going to use AI to determine what movie you watch tonight on TV? All these types of questions affect not only those who design and deploy the systems but also, fundamentally, the users of those systems. So for responsible AI we need a much more inclusive perspective.

Desiree Timmermans  05:09

Indeed, I agree with you. And Valerie, is there something you would like to add from your experience?

Valerie Zapico  05:15

Maybe I would add just one thing: I think responsible AI is about informing people that they are using AI. Because sometimes people use a tool, an app, or something else, or receive input from a program, and they don't know that AI is running behind it.

Virginia Dignum  05:37

Definitely.

Desiree Timmermans  05:38

So, it is really important for responsible AI to start well: think it through, and don't just start building, because you can't go back.

Virginia Dignum  05:53

And responsible AI is not a kind of checklist you go through after you have developed everything, crossing off: yes, I did this, I did that. Then you are definitely too late. It is an attitude. It is question zero, so before you do anything, you have to start thinking about: what exactly is the impact, how are we going to build this, what options do we have to build this system, and who is going to benefit from or be negatively impacted by the system? And, indeed, make sure that people are aware that they are using AI systems and that those systems make decisions about our lives and our society.

Desiree Timmermans  06:33

So the mindset is very important, which means that it should also be included in education programs.

Virginia Dignum  06:40

Yes. 

Desiree Timmermans  06:40

And do you already see that happening, Virginia?

Virginia Dignum  06:43

Definitely. There is an increasing effort by computer science and engineering programs worldwide to make developers, computer scientists, and data analysts aware of the potential impact of AI systems. I do believe that AI is no longer just an engineering discipline. It is a multidisciplinary field. And in the teams that develop AI, and I am sure that Valerie will agree, we cannot just have a team formed only by engineers and data analysts. We need to have other disciplines: people who know about the legal impact of the systems and people who know about societal ethics. So it needs to be much more than just an engineering discipline. But the engineers need to be much more aware of what their impact is. So there are quite a lot of initiatives at the international and national level to really include responsibility, in an integrative way, in the curricula for engineers.

Desiree Timmermans  07:53

And what if I am in a startup and don't have that many resources but still want to develop a responsible AI system? What is your advice? How can I achieve that as a startup? Maybe Valerie?

Valerie Zapico  08:08

I would say it is important to clearly identify the objective of what you would like to do. And then ask some questions about the data, the environment, the different parameters, and the interactions. So, I agree with Virginia: it is not only about engineers. It is really a complete team that should address the different points.

Virginia Dignum  08:39

And for the startup that maybe doesn't have access to a wide team of disciplines, I would say: start by talking to your potential clients, the ones you want to develop the systems for. Talk with them, see what their concerns are, what their needs are, and what their possibilities are. And also, there is some support freely available to everybody. For instance, the High-Level Expert Group of the European Commission developed an assessment list for trustworthy AI, which is available to anyone. With UNICEF, we developed a design canvas that helps people who are designing systems get an idea of the questions they should be asking themselves. So there is information available that can benefit startups and small companies, and it doesn't really cost anything and doesn't really require extra people to be involved. But mostly, I would say, talk to your stakeholders: talk to those for whom you are developing systems.

Desiree Timmermans  09:37

Virginia, I understood there is now a follow-up to the High-Level Expert Group, a new initiative from the European Commission. Is that correct?

Virginia Dignum  09:44

Yes, there are many initiatives from the European Commission, not necessarily a new expert group. But at this moment, I think the most impactful initiative at the European Commission is the regulation around the so-called AI Act. It is at this moment a proposal from the European Commission which is being discussed, as we speak, at the European Parliament. The idea is that the European Parliament will reach a final decision sometime this year. And then the member states will need to work on the implementation. You can see it a bit as the same process as for GDPR some years ago: it was first discussed by the European Parliament, and then the member states needed to commit and implement it in their legislation and regulation. It is a process of several years. But it is a process that is important for those interested in the responsible use and development of AI, and it is important to follow it. It is also a participatory process; several times during the process, the European Commission has asked for input from anyone interested in providing it. And there have been changes and amendments to the original proposal, actually based on this input from researchers, companies, policymakers, and the general public.

Desiree Timmermans  11:08

And do you also know of an example people can look at, a good practice? An AI case that went well, from the public sector or from business?

Virginia Dignum  11:19

There are several examples. Of course, there are many cases in which AI has been used very successfully and responsibly. I am not really aware of any repositories. I would say that a place to look would be the observatory run by the OECD. The OECD keeps track of many different activities. I am not aware if they have a repository of best practices, but I imagine that would be the place to find it.

Desiree Timmermans  11:48

And Valerie, do you already have a best practice or advice for our audience?

Valerie Zapico  11:54

I think the best practice would be to follow the assessment list. That is a really good place to start. I think that is the main step. And follow it during the whole project.

Desiree Timmermans  12:06

Okay, I understand.

Valerie Zapico  12:08

It is not just at the beginning. During the entire project, and then also during the maintenance of the AI. You cannot just push the AI into production; you have to maintain it and make sure that it is always responsible.

Virginia Dignum  12:23

And maybe for the audience and those with smaller companies: there is also an abbreviated version of the ALTAI assessment list from the European Union (the full list is quite extensive, with 200 questions). But this abbreviated one gives you a feeling in a very short time; I think there are 15 questions. It gives you a feeling for the issues you can think about in the governance of data, algorithms, and the technology we are using (e.g., chatbots, robots, those kinds of things), but also about the potential societal impact. It already gives you a feeling for the issues you need to pay special attention to during the rest of the development. That abbreviated list is also available online from European sites.

Desiree Timmermans  13:17

So if I understand well, responsible AI is a matter and a responsibility for all of us, for everyone who is involved. It is not only the developers but the whole team around them. And also the users, as Valerie said. And I think: it is not that once you have the system available and up and running you can stop, because you have to monitor it and see where you can improve it.

Virginia Dignum  13:41

Yes, definitely. Very well summarized.

Desiree Timmermans  13:50

Now that we know what responsible AI is, why it is important, and what the role is of diversity and inclusion, I want to go to another topic: what can we do to ensure the positive use of AI, meaning contributing to human and environmental well-being? Virginia, do you have some examples of this?

Virginia Dignum  14:09

Yes, I can give you some examples. For instance, there is quite some work on applying AI to support the tracking of deforestation, migration, and poaching in national parks in Africa. Those are very direct contributions to the sustainable development goals. So it really helps us understand how the climate and biological diversity are changing. There are, of course, also a lot of applications of AI in medicine and healthcare: supporting doctors with better diagnostics, and supporting patients and citizens in general, especially if they have difficulties accessing a doctor because they are too far away or something like that, for instance to help them maintain a healthy diet. There are examples of using AI for education, and for inclusion in education, especially for children with special needs. There is quite some work using AI to help them participate in education in as inclusive a way as possible. So there is a lot of good going on around the world.

Virginia Dignum  15:20

I also see a lot of interest from all kinds of organizations, small and big, to really do their best to use AI in a responsible way. I don't think that most organizations willfully want to make irresponsible or unethical use of AI. Of course, things go wrong. And we all know when things go wrong. But no one starts with a business model of: let's go and develop irresponsible AI.

Virginia Dignum  15:51

And maybe one more thing to talk about is the potential impact that AI also has on the environment and on sustainability goals. AI, as we all know, can be quite heavy in computational power. So you need huge computational servers and data centers to store all the data that is needed and to provide the computing power for the calculations. And I think there is also a responsibility for data analysts and computer science researchers to really start thinking about the cost of AI. Often the development of AI is driven by making systems more and more accurate, more and more exact. But at some point, we really have to think: if I am going to improve accuracy by 1% but use 50% more computational power, is this still sustainable, is this responsible? So we also need to invest in new technology that gives as much accuracy but really takes other metrics into account. We don't have to measure everything just by accuracy, which is kind of ingrained in engineers. But that is also perhaps part of the education and the awareness that there are other values we have to measure.

Valerie Zapico  17:11

Totally agree with this point. That is really a good point. We should pay more attention to this.

Desiree Timmermans  17:18

So bottom line, it is really important that when we develop a system, our starting point is AI4Good and AI4All.

Virginia Dignum  17:25

Yes, yes.

Desiree Timmermans  17:33

So Virginia, what do you think about the future of responsible AI: where are we going? 

Virginia Dignum  17:39

We are going towards more and more responsible AI. There is no future for irresponsible AI. So the only future we have is a future in which AI is developed and used in a responsible way. In a way that takes inclusion, participation, and diversity seriously. That considers the benefits for society, the environment, and humanity in the broadest perspective. And in which economic factors go hand in hand with the need to align the use of AI with human rights, democratic principles, and environmental and societal demands. So I think there is no other way forward than the way we are going, and, as I said, we already see a lot of work done in this area.

Virginia Dignum  18:30

I think it is important to mention that there is still very much this idea that it is great to think about responsibility and ethics, but that all those kinds of things stand in the way of innovation. Again and again, I talk with organizations and they say: yeah, that is all great, but at the end of the day, we need to be innovative. And I think that, in a sense, this is the wrong way to address it. Innovation is about really moving technology forward, moving the use and development of technology forward. If we take responsibility as the beacon - the direction in which to go - then we can use ethics and responsibility as a stepping stone for innovation. It is not a block in the way of innovation; it is the stepping stone. As we just said, we want to develop systems that are both accurate and energy efficient. So that possibility gives us the direction for innovation. And I think this message needs to be much better understood by companies but also by governments, because the idea that responsibility stands in the way of innovation is really wrong. Responsibility, ethics, and ethical principles and values are really a stepping stone for innovation.

Desiree Timmermans  19:47

So being effective and efficient also means being responsible. Valerie, is there something you would like to add about the future of responsible AI?

Valerie Zapico  19:56

About the future of responsible AI: the approach followed by Europe is totally different from that of China or the US.

Virginia Dignum  20:07

Yes. There is, of course, something called the Brussels effect: the efforts taken in Europe, in the end, after some time, are also followed by other countries. We see that, for instance, with data protection and all the efforts around data protection. There is now a global understanding that data protection is needed. So not every country is following GDPR, but most countries have some regulations and some efforts around data protection. And I believe that on responsible and trustworthy AI, we will see the same movement, as I say, this Brussels effect. But at the same time, you already see that most countries in the world are uniting: they signed the UNESCO recommendations around the ethical use of AI. So even China, the United States, and all the other countries are aware of the responsibility they have as huge developers and users of AI systems. And they are aware that things need to be done in a responsible way. Maybe the way they understand responsibility is different from here in Europe, but the dialogues are there. At the level of UNESCO, there was a very good discussion in the development of the recommendations. There is work done at the OECD level and, nowadays, at the government level, united in the GPAI: the Global Partnership on AI. So, there is awareness in other countries; maybe they do it differently, but I think the dialogue is starting.

Desiree Timmermans  21:36

And as you said, the goal is not to go towards irresponsible AI but towards responsible AI. And it is a kind of learning process.

Virginia Dignum  21:45

Yes. And education, awareness, and participation really make a huge difference. 

Desiree Timmermans  21:52

I think that is a positive message to conclude this podcast. So I can only say: thank you very much for sharing your knowledge and your expertise, Valerie, Virginia. And I hope maybe within 6 months, or 12 months, we can do this again and see where we are going.

Virginia Dignum  22:07

Thank you very much for inviting us. It was very interesting. 

Valerie Zapico  22:10

Great.

Desiree Timmermans  22:11

Thank you very much. 

Valerie Zapico  22:12

Thank you.

Desiree Timmermans  22:16

Thanks for listening to the Women in Big Data Brussels Podcast. Next time, our guest is Tina Rosario. She is the Chief Data Officer at SAP, and Executive Board Member and European Regional Director for Women in Big Data. If you want to get in touch, contact us via email at datawomen@protonmail.com. Join us next time!
