As part of our podcast series, we spoke to Phil Harvey, Senior Cloud Solution Architect for Data & AI at Microsoft’s One Commercial Partner organisation. In this interview he talks about smart cities, responsible AI, and why soft skills like listening are so important. Phil is talking in the Smart Cities stream at the 2020 conference.
Your speaker page details a very interesting journey through different industries. Can you tell us how you got to where you are now?
Yeah, sure. I grew up in the West Country, on the south coast, and my first actual job was as an IT support person in an artist media lab. So that’s where I started off, strangely, and then I went to the University of Sussex to study a Bachelor of Arts in artificial intelligence, which I don’t think they do anymore as an Arts degree. It’s now all Science degrees. But we did psychology, philosophy, and linguistics, as well as programming.
Then I spent some time in the architecture industry, because of an interest stemming from some family history, working mostly as a Computer-Aided Design (CAD) technician. After that I moved into architectural visualisation and render farm automation, where I got deeply technical, helping artists be more productive with visualisation tools.
Around 2008, when the building industry had a bit of a wobble, I moved industry along with everybody else, into the online media and advertising world. And that’s where I really started to work in data in a very focused way. That was for a business where I started off by buying the servers and building everything up from scratch myself.
Then around 2011, I had the opportunity to become technical founder and Chief Technology Officer of a technology startup called Data Shaker. They’re still going strong. We were working on a data unification technology to help with what’s currently called DataOps. I personally made the choice to move to Microsoft around 2017, and that’s where I am now, as a Senior Cloud Solution Architect for data and artificial intelligence.
Can you tell us about your role at Microsoft?
Definitely. So I work in what’s known as a partner organisation within Microsoft, called One Commercial Partner. It’s a really interesting place to be, because in my role I get to work with a lot of other companies who are using Microsoft’s cloud, data, and artificial intelligence technologies.
This can be anything from global consultancies and media organisations, all the way through to startups. And I’ve been particularly interested in AI for Good. The actual work involves consulting, advising, design sessions, hackathons, and things like that, to make sure people are using data and artificial intelligence technologies in the best possible way.
You’re speaking in the Smart City stream at the 2020 conference. What kind of work have you done in this area, and how does your experience tie in with this?
I think there’s an easy answer and a hard answer to that one. The easy answer is that, because I work in a partner space, I work with a lot of different companies whose work stretches across smart cities, through to processing the data that comes off the back of that, and sensors inside buildings, all the way up to the use of artificial intelligence in that space.
Most recently, I’ve been working on an air quality project, which is directly related. Air quality, from a data perspective, is all about volumes of things in places, so it stretches across quite nicely into all the different areas of built environments.
The hard answer? AI ethics, especially in smart cities, requires talking about data that directly relates to people’s lives and the way they live their lives in physical spaces. And that is data about people. So making sure that anybody processing that data is handling it ethically is a really important piece of our puzzle, I think.
At the techUK Digital Ethics Summit last year you touched upon the importance of interdisciplinary research. Can you tell us more about this?
This is right at the heart of what I think is important in the world of AI now. It very much relates to what I was talking about before in the introduction of AI ethics at the summit, which isn’t one thing or one technology. It’s not like a shovel or a robot or those things. It’s a range of technologies that all interplay in a bigger system, and in that you need to cross those disciplinary boundaries.
You need to get perspectives from anthropologists and sociologists — what’s known as the digital humanities now — to make sure those building the technology are doing so in a responsible fashion. And this is the reason I talk about this. Because, as a programmer for 15 years with various technical jobs under my belt, I saw this need: that we need to lift our heads up and have that wider conversation.
What does the concept of responsible AI mean to you?
I think there’s a straightforward answer to this question. We have a right to advance the art of technology in many different ways, and use what we can to best advantage. But that comes with this responsibility. We have a responsibility to the people who will be affected by this technology, whether we are thinking about smart cities, people living their lives, all the way through to people in online spaces.
We need to think especially about how that AI is built, with diverse teams who are able to have that wide perspective. But also, we must think about how we build ethical AI, AI that doesn’t fall into traps of bias. All the way through to, at some point in the future, how we treat these artificial intelligences in a way that is respectful of the part of our lives that they’re becoming.
So being responsible means having those principles in place, and having thought through these things in a diverse and interdisciplinary environment, to make sure we’re making use of this technology in the best possible way.
How do you think more businesses could be encouraged to implement AI more responsibly? For many it appears to be an afterthought rather than an intention from the outset.
You’re certainly right. It shouldn’t be an afterthought; it should be in the design process up-front. We’ve got to create an environment where irresponsible uses of AI are called out, not just at the legislation level but at the personal level, because irresponsible use of technology, AI and otherwise, represents business risk.
So, if the business or the organisation implementing AI is not doing so responsibly, they’re putting their business at risk. They’re also putting people at risk in our society. We need to make sure that we educate people to have that conversation, to identify those risks, so a business doesn’t fall into that trap of doing something irresponsible because they don’t know exactly what they’re doing.
So, that encouragement is making these interdisciplinary skills a front-of-mind conversation. We need to talk about interdisciplinary work. We need to work on the interface between different disciplines to make sure they can be understood and have a common language. At Microsoft we have something called the Partner Pledge.
This is something our partner companies can sign up to which demonstrates their commitment across a broad range of these kinds of principles, from diversity and sustainability all the way through to responsible AI. We encourage people to sign this to show that they’re taking a stand on these issues.
Would you say that’s very much the case in the areas of work you’re doing? Is it about designing it into the products and services from the very beginning?
Very much so. It’s a core part of the early conversations I have with any business that I’m working with. Whether that’s top-down from the business level, to help businesses understand risks and how to put their principles in place; or whether it’s about working bottom-up to make sure that engineers can understand and work with those principles.
They’re quite often developed in a business-friendly way — soft skills, if you will — and a lot of that language doesn’t fit with the technical nature of specialisms: people doing deep learning, machine learning, data science. So, making sure the business is able to work in an interdisciplinary way, focusing on the boundaries, and getting those translators in place is extremely important.
What opportunities can you see in the world of AI over the next five years or so, and what kind of things are you excited about?
First up, and this is a key topic for me, is using these new tools to uncover new knowledge in data that we already have. Many people think that to start on an AI journey you need to have a plan to generate new datasets, to gather new data. But many organisations, businesses, and otherwise, are sitting on a huge and rich resource of data. This could be pictures, text, numerical data, or any of these things. And as the tools evolve, so does the possibility of getting new knowledge out of those datasets.
We’ve gone from this early-adopter space of people doing the majority of advanced work in-house, through to there being a democratically available set of tools that many more organisations can use, whether it’s in search, or analytics, or looking at their business in a new way. That’s a huge opportunity right now. It doesn’t have to be big, and long term, and expensive. There are things you can do right now.
Looking at the wider world, if we come back to the topic of smart cities, you’ve got to think about the data assets that are available in the institutions in that place. So one of the pieces I’m very excited about working with is cultural heritage. Unlocking that data to add new richness to our lives as we move around our spaces: that’s only really possible with the scale that AI can bring. Otherwise, it’s a lot of people working very hard for a long time just to find very little.
And this can also extend into the world. If you think about changing the environment for the positive using cultural heritage assets, you can start to look at how we can use this data to understand people’s mental health, and topics of diversity and inclusion. It’s not always about profit. It’s about thinking about people and where they live, and how they can be helped. There’s huge opportunity in there.
What do you think are the key challenges in terms of making that happen?
This is where I do a bit of a switcheroo. So, I talked about all these wonderful data assets that are already there. The big challenge is getting access to that data in the right way: what’s known as data engineering, data governance, and data quality. That is a core skill in the world of AI, especially when it comes to working with the datasets you already have.
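To make that concrete, unlocking an existing dataset usually starts with very basic quality checks, long before any AI work. Here is a minimal sketch in Python, assuming a pandas DataFrame of air quality readings; the column names, values, and sanity threshold are invented for illustration:

```python
import pandas as pd

# Hypothetical air quality readings; columns and values are invented.
readings = pd.DataFrame({
    "sensor_id": [1, 1, 2, 2],
    "pm25": [12.0, None, 300.5, 8.1],  # particulate matter, µg/m³
    "measured_at": pd.to_datetime([
        "2020-01-01 00:00", "2020-01-01 01:00",
        "2020-01-01 00:00", "2020-01-01 01:00",
    ]),
})

# The kind of checks data engineering and data quality work starts with:
# completeness (how much is missing?) and plausibility (does it make sense?).
missing_share = readings["pm25"].isna().mean()
implausible_share = (readings["pm25"] > 250).mean()  # arbitrary sanity threshold

print(f"missing: {missing_share:.0%}, implausible: {implausible_share:.0%}")
```

Checks like these are unglamorous, but they are the core skill being described: you can’t get new knowledge out of a dataset you can’t trust.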
Skills in that area are a key challenge at the moment. People who understand both the interdisciplinary work, and the AI work, but also how to unlock these datasets — I think that’s where we’re going to very quickly see the flip.
People aren’t going to know who to hire. And with what skills? There’ll be many people who can do different things, but how do you hire a digital sociologist? How do you get your boards to agree that budget? How do you work with anthropologists to implement that new AI technique or technology against data you already have, when they don’t know how to code? Should they? How do you translate to a coder?
So, we’re starting to see this whole new set of roles emerging, a whole new way of thinking about the implementation of technology, which will be a challenge for business. They will need to keep up on their skills journey, and understand how to hire and train people to take advantage of this new space that’s opening up.
Tell us about your book, Data: A Guide to Humans, that is being published in January 2021. What prompted you to write the book, and what are the key topics that you discuss in it?
I’m going to start here with key topics. So, I mentioned the word empathy earlier in this conversation. What we focus on in the book is called ‘cognitive empathy’, and we go through how that fits into the overall topic. It’s a rational and conscious way of understanding the needs and feelings of other people. It speaks right to the heart of the conversation we’ve been having here, because data represents people; it represents things, and lives, and the way things are happening out in the world.
Being able to think about the needs and feelings of people, and understand them in relation to the data work that you do is of key importance. And it’s not always taught. You may have an astrophysicist with a PhD, but they’re not necessarily schooled in the humanities or the digital humanities, or any of these softer skills. The book was trying to cross that boundary.
The motivation was the work I’ve been doing for the last four or five years with people transitioning out of the hard sciences into the world of data science and business. It goes right down to granular examples of data Application Programming Interfaces (APIs), and how you can spot a lack of empathy in an API.
I developed techniques to get that fixed without having the kind of technical argument — “This is bad”, “I don’t like it”, “You should have done it like this” — that people find on the internet in forums, and all over the place. And in large part my co-author Dr. Noelia Jiménez Martínez was a massive positive influence on getting the book produced. Working with her gave me a whole new perspective on how we can integrate the conversation about the climate emergency, and all of these things, into this idea of cognitive empathy and understanding the perspectives of other people.
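As a flavour of what a lack of empathy in a data API might look like, here is a small, hypothetical contrast in Python. These payloads are invented for illustration, not taken from the book:

```python
# A low-empathy payload: terse field names, no units, no context.
# The consumer has to guess what everything means.
low_empathy = {
    "s": 2,
    "v": 41.7,
    "t": 1609459200,
}

# A higher-empathy payload: the same data, but it anticipates the
# consumer's questions: what was measured, in what units, and when.
higher_empathy = {
    "sensor_id": 2,
    "pm25": {"value": 41.7, "unit": "µg/m³"},
    "measured_at": "2021-01-01T00:00:00Z",  # ISO 8601, UTC
}
```

The second payload is cognitive empathy applied to an interface: a rational, conscious attempt to understand what the person consuming the data will need to know.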
What kind of skills do you feel are critical today particularly in relation to technologists and data experts?
The easy answer to that is empathy, and if you read the book there’s loads of stuff in there, but it’s a huge topic. What I’ve found is that when I open myself up to people using these skills, and think not as a technologist but as a rounded person who wants to cross these boundaries, you meet user researchers, digital sociologists, and anthropologists, and you find new perspectives.
There’s a couple of examples of this. Archaeologists are an amazing set of scientists. When you really start to listen to how they explore ancient spaces, you learn a new way of thinking about approaching a technical system you’ve not known before. You learn how to uncover new details, and to think about the way the system might decay or break down, which helps you think about how you end-of-life a technical system. And this is a skill that you don’t get taught when you’re learning how to build a system.
Most recently, the listening skills of linguists are something I’m really interested in: the way that somebody has to go about learning a language from scratch with no context. When you go into a new company or a new business and you’re expected to be able to help them with technology, there’s a lot of comparison there. You have to work out a lot of similar things.
And so that brings me down to the one key takeaway for me, which is listening. You’ve got two ears and one mouth, and you should use them in that proportion. I think that phrase goes all the way back to Epictetus in Ancient Greece. But for me, that’s what it comes down to. Can you open yourself up and just listen?
The conference brings together social scientists and technologists. Is that something that is your personal interest, or something that you do within your job, to help make your work better or to take it in a new direction?
That’s an absolutely wonderful question, and one that’s really difficult to answer. I truly believe at a personal level that having this kind of interest makes me better at the jobs that I do. My technical skill is something that I was very proud of at one point, but the more I’ve opened myself up to people, and the more I’ve listened to and learned from different disciplines, the more rounded and capable I am, and the quicker I can get into those situations.
Social scientists have a huge amount to say about AI ethics. Even down to philosophy, and history, and all those different pieces. But you’ve got to be able to accept that new language and new way of looking at it, and look at the whole system that you’re talking about. So yes, it’s personal, and it’s something that I hold very dear now.
I find it a key part of my work. Now, when I go into a business, I’m open and ready, and if somebody starts talking about their background in astrophysics, or linguistics, or they’ve got an MBA, and so on, you learn to pick up that language and to understand them quicker.
You can find out more about Phil’s work, and get in touch, here.
You can listen to the full interview here.