Corina Enache, co-host of The Human Show, our media partner, interviewed our keynote, Dr Julien Cornebise. You can listen to the podcast here. What follows is an edited version of their conversation.

Tell us about yourself and Element AI

There are 13 of us at Element AI in London working with NGOs, academics, and international organisations to help solve humanitarian and human rights problems, using the tools of machine learning and AI.

In 2012 I joined a start-up called DeepMind, where I spent four years; I was the sixth researcher there, and I went through the acquisition during which we became Google DeepMind. My first two years were on fundamental research, then two years on creating the healthcare team, working with clinicians and applying machine learning to healthcare problems. Two years after the Google acquisition, I left and started volunteering with Amnesty International, using artificial intelligence to detect destroyed villages in Darfur on satellite imagery. At the United Nations AI for Good Summit, I met the co-founders of Element AI, especially Yoshua Bengio (who recently received the Turing Award for his work on deep learning) and Philippe Beaudoin, and realised they were creating this really exciting company, building enterprise software with AI and, from day one, wanting to have a full team dedicated to AI for Good. So I started building that team.

2018 ACM A.M. Turing Award Laureate, Yoshua Bengio, Co-Founder of Element AI

Element AI is mostly based in Montreal and Toronto, and we also have offices in London, Singapore, and Seoul. In London there are 13 of us, working with NGOs, academics, and international organisations to help solve humanitarian and human rights problems, using the tools of machine learning and AI. It’s been quite an exciting 18 months.

How would you define AI?

There’s a lot of hype around the term. I would define it as a goal that keeps moving. In the ’90s you would say, building a computer that can beat Kasparov, the best chess player, well, that would be real intelligence. We achieved that in ’97, and now it’s just software. The same with the things your phone does all the time, recognising your voice, being able to auto-zoom on a picture. Twenty years ago that was AI, and now it’s, you know, just my phone.

So the goalposts keep shifting, but the current toolkit that powers the latest developments in the field is called machine learning, which is essentially algorithms where, instead of telling the computer how to do something, we tell the computer how to learn from examples. In its most basic form that’s what machine learning is, and this is what has driven the renewal in artificial intelligence this time around. This is where my technical speciality lies, in machine learning.
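To make that concrete, here is a minimal sketch of “learning from examples” (our illustration, not from the interview; the temperature-conversion task and all numbers are invented for the example): instead of programming the Celsius-to-Fahrenheit rule into the computer, we hand it example pairs and let a tiny model fit the rule itself.

```python
# Learning from examples: rather than hard-coding the conversion rule
# f = 1.8 * c + 32, we give the program (input, output) pairs and let a
# two-parameter linear model discover the rule by gradient descent.

examples = [(0.0, 32.0), (5.0, 41.0), (10.0, 50.0)]  # (celsius, fahrenheit)

w, b = 0.0, 0.0        # model parameters: the model starts knowing nothing
learning_rate = 0.01

for epoch in range(50_000):
    for c, f_true in examples:
        f_pred = w * c + b          # the model's current guess
        error = f_pred - f_true
        # Gradient descent on squared error: nudge each parameter in the
        # direction that shrinks the mistake on this example.
        w -= learning_rate * error * c
        b -= learning_rate * error

print(f"learned rule: f = {w:.2f} * c + {b:.2f}")  # ~ f = 1.80 * c + 32.00
```

The same idea, scaled up to far more parameters and examples, underlies the speech and image recognition he mentions.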

Kentaro Toyama…is very clear that any technology…is not a solution, it is merely an augmentation of human intention and human behaviour.

What is an example of good AI?

It’s about looking at how the tools of machine learning are put in the hands of people who are doing good, and for me that’s the key part.

There are many definitions, and it depends whether you use what we currently define as AI or what we defined as AI years ago. If you look at a pacemaker, regulating the electric signal in the heart with the most basic algorithm, well, 50 years ago that was considered artificial intelligence. Nowadays, if you look at AI that has a good impact, there’s the work we have been doing with Amnesty International, for example. It’s about looking at how the tools of machine learning are put in the hands of people who are doing good, and for me that’s the key part. Kentaro Toyama used to be the ‘Mr Information and Communication Technologies for Development’ at Microsoft, and he has written this brilliant book called Geek Heresy. He is very clear that any technology, in our case we’re talking about AI today, but any technology, is not a solution; it is merely an augmentation of human intention and human behaviour.

On working with multi-disciplinary teams

Our design research team…is extremely multi-disciplinary, and is what allows Element AI to think of AI as centred around a human.

My colleague Jason is a sociologist doing design research at Element AI. He’s an ethnographer and qualitative field researcher on the one hand, and a data scientist on the other, and we also have an anthropologist on our team, as well as people from psychology, information science, and design backgrounds. Together they make up the design research team, which is extremely multi-disciplinary and is what allows Element AI to think of AI as centred around a human, and to work on finding and analysing the needs and goals of what we are trying to solve. That is from day one, before you put AI to use in a concrete product.

And I am being careful here to separate the use of AI in a product to solve a problem from the research side of AI, which is about pushing the technical capacity, pushing the envelope. The real excitement is in putting the two together to get really innovative products.

What can anthropology and sociology contribute to AI?

The ability to interview, to do qualitative field research, to understand what people’s goals are, what motivates them, how they go about pursuing those goals today, where they encounter difficulties and where they derive meaning. All of this is what I have seen Jason doing, what I have seen some of our design team doing. Understanding what people’s pain points are, what their aspirations are. From that, we work together with the technical folks on our team to see what is feasible, what the roadmap would look like, where the tech would be helpful and where it would not. And this is all driven, you know, by qualitative field researchers.

How did you end up with social scientists on your team?

Element AI was founded with the desire to be as inclusive as possible, to really attract people with very diverse backgrounds. Jason explained that when he first started working for a technology company, he wasn’t hired to do field research. In the process of doing other work, he used his research skills to find lots of insights from the field that became very valuable for the people building the technology back at the office.

It’s people from very different backgrounds…it’s diversity that brings all the really interesting ideas up.

And so when he started doing that, in addition to his regular job, very quickly the team at that tech company said, hey, this is super valuable, drop the other stuff we’ve asked you to do and focus on these research needs and opportunities, and then put them into a prioritised roadmap of the things we need to build. So Jason’s recommendation to anthropologists who want to go into tech is to find a way into the organisation and then, from within, show that their skillset actually creates value. We were lucky to hire Jason at Element AI.

We also have a third person at Element AI, a fellow in residence, an anthropologist, who is working on ethics. Personally, that’s what I look for in a team, you know: people from very different backgrounds. It’s not a monoculture; it’s diversity that brings all the really interesting ideas up.

How do you see the connection between ethics & AI?

The application of artificial intelligence is fraught with ethical challenges, like any new technology.

The AI for Good team’s job is to take all the tech we develop as a company but also as a machine learning or AI community, and see how we can use that, by collaborating with domain experts, to try to solve the problems that we as a society, as a species, are facing.

As a scientist, as a techie, I might have opinions about environmental problems or human rights. Instead, I work with domain experts; they might not have an idea of how tech and AI can help, and by working closely together we work out how we can help them.

The application of artificial intelligence is fraught with ethical challenges, like any new technology, even more so in that AI is so transformative and so flexible, and can therefore affect many different fields of application. Within AI for Good there are essentially two threads: one is about applying AI to good causes, and the other is ensuring that what we do as an AI software company remains ethical, trying to minimise the number of things we can get wrong.

Our CEO wanted to start an internal discussion around ethics. The first thing to ask is: where are we right now, what is the thinking around ethics within the different teams, what are our blind spots? Jason and I started working on that. Jason, together with his colleague, brought their incredible experience of field research to speak with people throughout the company and ask them: what are your concerns about ethics, do you have any concerns about what you are doing, what do you foresee could go wrong, what do you think we’re doing right now that is already helpful in that sense, what should we amplify? Because essentially a company is a bunch of humans, all these brains together, and collecting this global knowledge with the sociological and anthropological perspective, skills, and field research brought all these ideas together.

How do you define what is ‘good’?

I have a Western bias, I am a white male in London…so I try and surround myself with as many people who have different backgrounds as possible.

The definition of good is extremely culturally dependent; what one culture considers good might be considered really weird by another culture. So trying to say, let’s define what is good and find a universal set of values, well, this is something I am really not equipped for. I know I have a Western bias, I am a white male in London, so I’m aware of that bias, but I’m not aware of how deep it runs, so I try and surround myself with as many people who have different backgrounds as possible. I work hard to broaden my view, but I also acknowledge that I am going to have bias, whether conscious or unconscious.

It’s not a tech guy who is going to define what is good. The tech guy will follow what is good, and maybe point to some technical ways forward or some limitations: oh, this tech is changing here, what do we mean by good, we need to be more precise, how do we do that? That’s how I define it, that is, I don’t. I follow what has been defined, as much as I can understand it, by society and the people who know best.

What will you speak about at the conference?

My talk will be around AI for Good, with a question mark, because there are so many questions around how to do AI for Good. I will share some of the experience that I’ve had in applying AI and putting it to the service of those who know what they are doing better than I would. I’ll give a few examples of work we’ve done with Amnesty International and with human rights experts. But I don’t want to give the impression that this is the only thing we do.

On talented techies who should be working on meaningful projects

How can we ensure that the relatively scarce talent in AI can contribute to, and work towards solving, the main challenges we face as a society?

The discussion that I’d like to generate at the conference is how to ensure that the relatively scarce talent in AI can contribute to, and work towards solving, the main challenges we face as a society. There are around 30,000 AI experts in the world, which is relatively few given the appetite for them. And a lot of these AI experts, machine learning experts, are in companies that can afford to pay them.

One of the early employees at Facebook, interviewed a few years back, said that some of the best technical minds of their generation are spending their days working out how to get people to click on ads, and that this is really sad. And that’s kind of the question that I want to discuss: can we show people who have technical skills, can we create a way for them, to use their skills in a sustained way towards these challenges that we need to solve as humans? We’re all looking for purpose, more and more. Fifteen years ago you would go to a company because, oh, there’s free massage and there’s free food, great!

But the world has moved on, and twenty years forward we’re asking, OK, how do I find a purpose? So if we can show different companies that their employees demand to work on meaningful applications, and that it is in these companies’ interest to do so, well then, suddenly, we could tap into much more talent than has been applied to these problems. And that’s kind of what we’re trying to do here, showing there is such a way, and we’re starting to see some of that.

There was a fantastic article last year in The New York Times looking at tech employees demanding to know what the things they are working on will be used for. I think this is a unique moment, where there is scarce talent and a strong desire for values. If we can show a path to act on this desire for values, I think we have a chance to advance and change things.

On bias in tech and mitigating it

I’m so much looking forward to this conference in Bristol!

Making sense of society matters as machine learning increasingly tries to learn from historical data, because there is a risk of it capturing the biases already present in our society. Take predictive policing, for example, which is completely biased because the data that is fed to it is completely biased. It makes me extremely proud to see that there are data scientists and machine learners who look at that and say, OK, that one we really got wrong, let’s fix it and find better ways to use machine learning for society. This is what I see as really exciting: there are these tools and, by using them not in isolation but as part of a complex weave of skills from all walks of life and all disciplines, we have a chance to get it right and to keep trying, correcting it when we get it wrong, stopping it until we get it right. I think that is extremely exciting. I’m so much looking forward to this conference in Bristol!
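To illustrate the mechanism he describes, here is a small sketch of ours (the two-neighbourhood setup and all numbers are invented for illustration): a model trained on arrest records that reflect past patrol decisions will simply recommend repeating those decisions, even when the underlying offence rates are identical.

```python
# A toy model of the bias feedback loop: recorded "crime" data reflects
# where patrols were sent, not where offences occur. By construction,
# both neighbourhoods here offend at exactly the same rate.

historical_patrols = {"A": 80, "B": 20}    # neighbourhood A was over-patrolled
true_offence_rate = {"A": 0.1, "B": 0.1}   # identical underlying rates

# Arrests recorded in the historical data scale with patrol presence,
# not with actual offending.
recorded_arrests = {
    hood: patrols * true_offence_rate[hood]
    for hood, patrols in historical_patrols.items()
}

# A naive "predictive" model trained on that history: allocate the next
# 100 patrols in proportion to past recorded arrests.
total = sum(recorded_arrests.values())
recommended_patrols = {
    hood: round(100 * arrests / total)
    for hood, arrests in recorded_arrests.items()
}

print(recommended_patrols)  # {'A': 80, 'B': 20}: the historical skew is
                            # reproduced, then fed back as the next round's
                            # "evidence".
```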