Role Models in AI: Nigel Duffy

As told to Tess Posner of AI4ALL by Nigel Duffy


Meet Nigel Duffy, the Global Innovation Artificial Intelligence Leader at EY. Nigel’s experience as a machine learning engineer, a founder, a CTO, and an author has given him a broad perspective on the development and use of AI today. We interviewed Nigel as a special edition of AI4ALL’s Role Models in AI series, where we feature the perspectives of people working in AI. Check back here on Wednesdays this winter for new interviews.



TP: As the Global Innovation AI Leader at EY, what are some of your priorities coming into 2018?

ND: My goal is to figure out how we can transform the EY business using AI. As a result, I spend a lot of time connecting with people throughout the organization to understand what that transformation should look like. Those are pretty broad-ranging conversations — everything from how do we reduce the burden of mundane work to how do we reimagine the business and reimagine the value we deliver given access to these kinds of technologies.

The other big responsibility I have is building a team that can deliver technology solutions to EY.

Technology is so much more about communication and collaboration than I ever imagined as a kid. In fact, communication skills are as important as the pure technical skills. You have to be able to articulate yourself and you have to be able to listen.

Check out this recent interview about the future of work featuring Nigel Duffy, Tess Posner and Fei-Fei Li of AI4ALL, and Jeff Wong of EY.

What types of skills are in the greatest demand in AI today, from your perspective?

There are two perspectives on this: a technical perspective and a product perspective. From a technical perspective, the reality is — and this might be a controversial opinion — the most important technology in AI is machine learning (ML). ML is a reasonably technical discipline, and you have to have a decent math background to really get it. Those math skills are pretty transferable, though. For example, you could have a background in physics or statistics or operations research and transfer those skills to machine learning.

I think there’s a bigger skills gap on the product side than on the technical side. Where are the product managers who understand enough about the technology to know what’s possible, to connect that to a business problem, and to come up with a solution? We need more people developing the skill to say, “Here’s my business problem. How can I look at it in a way that allows me to solve it with AI as it exists today? What does that solution look like?”

Where do you see the opportunities for increasing diversity and inclusion in AI?

There are a few different things to think about here. One is values.

What values are getting encoded in the systems we build, or in the decisions we make about how to regulate or deploy these systems?

For example, machine learning can amplify bias. If you have bias in your data, machine learning can produce a model that not only has the same bias but amplifies it. Some interesting recent work from the University of Washington looked at a diverse set of photographs with the goal of predicting whether the person in a given photo was cooking or not. In the dataset, there were 33% more women cooking than men. The model predicted that it was 68% more likely for a woman to be cooking than a man when it was shown new photos of someone cooking.
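The amplification Nigel describes can be sketched with a toy simulation. The numbers and the shortcut "classifier" below are hypothetical illustrations, not the Washington study itself: in synthetic data where women cook in 60% of their photos and men in 40%, a model that leans on gender as a shortcut ends up far more skewed than its training data.

```python
import random

random.seed(0)

# Hypothetical synthetic dataset (illustrative numbers, not the
# University of Washington data): women cook in 60% of their photos,
# men in 40% of theirs.
def make_data(n=10_000):
    data = []
    for _ in range(n):
        gender = random.choice(["woman", "man"])
        p_cook = 0.6 if gender == "woman" else 0.4
        data.append((gender, random.random() < p_cook))
    return data

def bias(pairs):
    """Fraction of 'cooking' examples whose subject is a woman."""
    cooking = [gender for gender, cooks in pairs if cooks]
    return sum(g == "woman" for g in cooking) / len(cooking)

data = make_data()

# A "classifier" that uses gender as a shortcut: it predicts the
# majority cooking label for each gender group in the training data.
predictions = [(gender, gender == "woman") for gender, _ in data]

print(f"bias in data:        {bias(data):.2f}")
print(f"bias in predictions: {bias(predictions):.2f}")
```

The data is only modestly skewed (about 60% of cooking examples are women), but because the shortcut model predicts "cooking" for every woman and for no man, 100% of its predicted cooks are women: the bias has been amplified, which is the pattern the Washington researchers observed in real models.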

So how do you deal with bias amplification like that? First, it’s important to define the goal. Is the goal to remove all bias? That’s hard to do well, because machine learning systems use information to choose between outcomes — and that’s essentially the definition of bias. So, we need to prioritize which biases we’re going to reduce or minimize. This means we need to ask ourselves what values we want to bring into that discussion. It’s really important to have a diverse set of people in values conversations, because my values are no better or worse than anybody else’s, but they’re mine. If the community is only made up of people like me, then we’re going to miss a lot of really important perspectives on what the right values are.

It’s also important to consider access and opportunity. Many of the opportunities for a good life today are based on your ability to deal with and work with technology. If we want to have a more equitable society, then it’s really important that we have more equitable access to those opportunities. How do we get that if we don’t have a diverse population of kids learning and contributing to these technologies?

In addition to diversity and inclusion in the field, what are some of the other challenges you see in the way AI is being created today? Do you have any recommendations for people who want to start addressing those concerns?

When we talk about AI, we often conflate two perspectives: the aspirational and the pragmatic. The aspirational perspective is all about embodying intelligence in some kind of machine. Pragmatically speaking, we’re a pretty long way from achieving that goal.

With that in mind, instead of focusing on “end of the world” scenarios or far-off goals, we should focus on the real risks posed by the AI technologies that exist today. Those risks and concerns include mitigating algorithmic bias and having values discussions about what goals we as a field are trying to achieve. Another challenge is regulation. If we come up with naive regulations, we’re going to stall the adoption of technologies that have the potential to make a big positive impact on the world. Take healthcare: I think it’s reasonably widely accepted now that a lot of radiology is going to be done with deep learning within the next few years. We need to think deeply about what the impact on the cost of healthcare will be and how we’re going to handle diagnoses made by an AI from a regulatory perspective. These technologies can be the solution to many problems in society, but not if we let fear get in the way.

If you’re a business and a competitor adopts a technology that you don’t, that might cause you significant problems. If you’re a country and your competitors adopt a technology that you don’t, I think that will have potentially long-term impacts on your country’s economic health.

Why does AI excite you? Where do you see the greatest potential?

A lot of conversations around AI are looking at the negative side of things. We sometimes forget why we’re in the field in the first place. We’re in the field because the potential for positive impact is enormous.

AI is going to change the world, and we have the ability to make it change the world in good ways. I’m excited by that.

If these technologies live up to a fraction of their potential, then we can have a huge impact on issues like climate change, healthcare, poverty, and inequality. The way to achieve that is by including more voices in defining “good AI” and by refocusing the broader conversation on the positive impacts of the technology.


About Nigel

Nigel Duffy is the Global Innovation Artificial Intelligence Leader at EY. He leads the application of AI throughout EY, helping it to be leveraged effectively across the organization. He is also responsible for expanding and further strengthening EY’s relationships with start-ups and with academic and business communities worldwide. Before EY, Nigel led the research, development, and commercialization of Sentient’s artificial intelligence technologies. A recognized expert in machine learning, he was previously the co-founder and CTO of Numerate Inc., where he led technology development and managed the application of Numerate’s platform. Nigel invented Numerate’s core technologies, which were used to design novel drug candidates.

Additionally, Nigel was VP of Engineering at Pharmix and worked as a research scientist at AiLive (developer of the Wii MotionPlus), where he applied machine learning to computer games. Nigel also spent time at Amazon A9, working on tools for large-scale analytics in product search.


Follow along with AI4ALL’s Role Models in AI series on Twitter and Facebook at #rolemodelsinAI. We’ll be publishing a new interview with an AI expert on Wednesdays this winter. The experts we feature are working in AI in a variety of roles and have taken a variety of paths to get there. They bring to life the importance of including a diversity of voices in the development and use of AI.

Help ensure a human-centered and inclusive future for AI by making a tax-deductible donation today. Gifts will be matched for the month of December!