Technology

AI: The End of the World, or the Start of a Better World?

Jones Day partner Bob Kantner, IBM Global Solution Center engineer Romelia Flores and Cyxtera’s Brainspace Division CEO David Copps discuss the benefits and risks of advancements in artificial intelligence and what to expect in the coming years. This piece, adapted from the podcast Jones Day Talks Technology, has been edited for length and style.

Jones Day Talks: Let’s start with some basics. Romelia, what is artificial intelligence?

Romelia Flores: We use a couple of different terms: artificial intelligence and augmented intelligence. Artificial intelligence refers to a computer that is capable of thinking and behaving like a human, including making independent decisions. Augmented intelligence also involves a computer thinking like a human being, but in support of humans, helping them make intelligent decisions based on the available data.

What has IBM's focus been in the artificial intelligence space?

Flores: Our major AI capability is known as IBM Watson. IBM Watson has existed for quite a while, even prior to February 14, 2011, when it debuted on television. Our research teams were looking at deep learning and natural language processing to create a system to intelligently play the game of Jeopardy. We weren't playing Jeopardy to have fun. We were doing it to advance technology.

What does Watson do? How does it use AI?

Flores: Watson not only responds to questions but also offers insight into how accurate its responses are. Watson identifies its data sources, assesses their accuracy and explains what it was able to pinpoint in those sources.

Are particular industries or sectors embracing augmented intelligence?

Flores: We’re doing a great deal of risk and compliance work in the banking and insurance industries.

Dave, anything you'd like to add to Romelia's description of where AI is now?

David Copps: AI is an apex technology. AI will drive revolutions in nanotechnology, biotechnology and information processing. AI will transform industries such as healthcare, tech and finance. As consumers, we already interact with AI every day in our buying, in our navigation and in other ways. Through conversational AI technologies – such as Siri, Cortana and Alexa – we interact with AI in the cloud.

In business, we use AI to better understand our customers. We’re also using AI to protect our organizations from external and internal threats.

Mark Cuban sees tremendous potential in AI, but with some caution. Elon Musk thinks AI could be the end of the world as we know it. Bob Kantner, these smart and successful people can't both be right, can they?

Bob Kantner: I am more in the Cuban camp with one possible exception. I agree with Romelia and Dave that AI will improve virtually every industry. For example, the augmented intelligence that AI can provide to healthcare professionals is a definite benefit. The one area where I have concerns is the potential military use of AI. Otherwise, I see AI as a big benefit.

Dave, what do you think of Cuban's point of view?

Copps: Neither side is completely right or completely wrong. The possibilities with AI are almost unfathomable. We'll be able to do things that we never thought were possible. But there is a dark side: autonomous weapons making kill decisions is a very scary thought. We're also treading new ground with gene editing technologies like CRISPR. Combine that with AI and we have the ability to transform humanity.

Dave, tell us about some of the other practical applications for AI in the marketplace right now and how the Brainspace Division of Cyxtera is applying AI.

Copps: My focus has been on investigations and cybersecurity. Organizations are saddled with the challenge of securely storing the data they are aggregating and then using AI to surface threats from bad actors who are working every day to harm us.

Dave, what's coming in 2018 for AI and for your company?

Copps: Sensors will produce more data than we have ever generated before, and we'll be learning more from that data than ever. More data will accelerate automation. What's coming next for us in cybersecurity with Cyxtera is the shift from transactional analysis, that is, analyzing data in batches to understand who said what and when, to a platform that is more predictive and preventive. We're moving from finding hot documents to identifying meaningful patterns in data as they emerge, so we can spot fraud and security threats before they occur.

Romelia, what do you see coming for AI?

Flores: We're still in the infancy of AI. I see us delving far deeper into machine learning and evidence-based production of information. That's critically important. I also see augmented intelligence being applied across all industries. We'll see it incorporated alongside other technologies: blockchain for supply chain management, the internet of things for devices and quantum computing for cybersecurity.

Bob, from a legal perspective, what do you see coming for AI?

Kantner: I'll give you two legal challenges that AI will face. The first is bias. HR departments, take note. In 2015, Google Photos tagged several African-Americans as gorillas. In 2016, COMPAS, an algorithm used in sentencing criminal defendants, was reported to rate black defendants as posing a higher risk of recidivism, and white defendants a lower risk, than they actually do. In many cases the problem is insufficiently diverse training data, but any time algorithms draw conclusions based on probabilities, there can be issues. Algorithmic auditing would help, and tech companies are trying to address this. For example, Facebook has announced a tool called Fairness Flow that will warn if an algorithm is making an unfair judgment about someone based on their race, gender or age.
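
To make the idea of algorithmic auditing concrete, the sketch below shows one simple check an audit might run: comparing a model's rate of positive outcomes across demographic groups. The data, group labels and tolerance are hypothetical, and this is not a description of how Fairness Flow or any other vendor tool actually works.

    # A minimal, hypothetical sketch of one check an algorithmic audit might run:
    # compare the rate of positive outcomes across demographic groups.
    from collections import defaultdict

    def positive_rates(predictions, groups):
        """Share of positive predictions for each group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += int(pred)
        return {g: positives[g] / totals[g] for g in totals}

    def parity_gap(predictions, groups):
        """Largest difference in positive-outcome rates between any two groups."""
        rates = positive_rates(predictions, groups)
        return max(rates.values()) - min(rates.values())

    # Hypothetical hiring-screen decisions (1 = advance the candidate) and group labels.
    preds = [1, 1, 1, 1, 0, 0, 1, 0, 0, 1]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    gap = parity_gap(preds, groups)
    print(f"Parity gap: {gap:.2f}")
    if gap > 0.2:  # illustrative tolerance, not a legal or regulatory standard
        print("Warning: outcome rates differ substantially across groups")

Real audits examine many such metrics, including error rates and calibration, but even a single check like this turns "is the algorithm unfair?" into something that can be measured and monitored.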

And the second legal challenge AI will face?

Kantner: The possible imposition of a right of explanation. Last year, Will Knight, who writes about AI for the MIT Technology Review, wrote: “You can’t just look inside a deep neural network to see how it works.” Google’s AlphaGo defeated Lee Sedol, a world expert at Go, a board game with millions of potential moves, with an unprecedented move the developers of AlphaGo could not explain. Board games are one thing; making decisions on employment and loan applications is another.

In Europe, Recital 71 of the new General Data Protection Regulation (GDPR) states that automated processing systems that affect legal rights should be subject to suitable safeguards, including the right of data subjects to obtain an explanation of the decision reached. Article 22(1) states that a data subject shall have the right not to be subject to a decision based solely on automated processing.

A broad right of explanation would pose a challenge to developers and users of AI, but many organizations, including the U.S. Defense Department, Harvard University’s Berkman Klein Center and companies such as Google, are working to establish greater transparency in AI. For example, the Berkman Klein Center has focused on identifying the reasons for a decision by AI and whether changing a factor would change the decision.
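
The "changing a factor" approach Kantner describes is often called a counterfactual explanation, and the idea can be sketched in a few lines. The decision rule below is invented purely for illustration; it is not any real lender's model, and production explainability work targets far more complex systems.

    # A hypothetical sketch of a counterfactual explanation: vary one input at a
    # time and report which changes would flip the decision. The scoring rule is
    # invented for illustration only.

    def approve(applicant):
        """Toy decision rule: approve if a simple weighted score clears a threshold."""
        score = (0.5 * applicant["credit_score"] / 850
                 + 0.3 * min(applicant["income"] / 100_000, 1.0)
                 + 0.2 * (1.0 - applicant["debt_ratio"]))
        return score >= 0.6

    def counterfactuals(applicant, alternatives):
        """For each feature, test alternative values and note any that flip the outcome."""
        baseline = approve(applicant)
        flips = []
        for feature, values in alternatives.items():
            for value in values:
                changed = {**applicant, feature: value}
                if approve(changed) != baseline:
                    flips.append((feature, applicant[feature], value))
        return baseline, flips

    applicant = {"credit_score": 580, "income": 45_000, "debt_ratio": 0.5}
    decision, flips = counterfactuals(applicant, {
        "credit_score": [650, 720],
        "income": [50_000, 90_000],
        "debt_ratio": [0.2],
    })
    print("approved" if decision else "denied")
    for feature, old, new in flips:
        print(f"Changing {feature} from {old} to {new} would flip the decision")

An explanation of this form, such as "your application would have been approved with a credit score of 650," is the kind of human-readable answer a right of explanation would demand, and it does not require opening up the neural network itself.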

Will there be regulation of AI in the U.S.?

Kantner: If AI cannot be made more transparent when it controls things that can injure and kill humans, such as autonomous vehicles, that is a distinct possibility.

The views expressed are the personal views of the participants and do not necessarily reflect those of the organizations with which they are associated.


Bob Kantner is a partner at Jones Day. He organized a working group at the firm focusing on AI and robotics, and he serves on another working group focused on autonomous vehicles. Reach him at [email protected].

Romelia Flores is an engineer in IBM’s Global Solution Center (GSC) and a certified IT architect. She works with clients in designing Smarter Planet solutions.

David Copps is the founder and CEO of Brainspace and the co-founder of Engenium. He serves as a director of Memory Reel.
