Questions Board Directors Should Be Asking About AI

Donna Wells
8 min read · Sep 17, 2018


Artificial intelligence at its core is intended to make people’s lives easier and more efficient. But if a company’s technology harms users and lands the business in court, the efficiency becomes a lost cause.

This is where boards need to be paying attention and asking the right questions, experts say.

Advocates of AI can point to a long list of ways the technology can be helpful. For example, at Facebook Inc., algorithms are behind the dissemination of news stories.

The feature can be helpful for users who prefer to see only news stories that might be relevant or interesting to them.

On its face, the technology seems convenient for news consumers. The problem, of course, is that the technology is created and driven by humans, with all their flaws, among the worst of which is bias.

This year, executives from the social media giant, including CEO and chair Mark Zuckerberg, have been hauled before multiple congressional committees, both publicly and privately, to discuss several issues, including how the company’s AI-driven news dissemination, fueled by users’ preferences and biases, may have inadvertently amplified foreign propaganda and fake news stories during the 2016 presidential election.

But bias in AI is not inevitable, experts say.

Donna Wells, who sits on the advisory board at Mitek Systems Inc. and on the boards of Betterment and Happy Money, thinks “the smart boards” are making technology education a priority.

“Talking about data and AI in the boardroom is so important,” says Wells, a former Intuit executive. “The more we entrust algorithms to make societal decisions, the greater risk we take.”

She explains that there can be flaws in the data sets of algorithms “that perpetuate biases that exist in our society,” so board directors — even those who don’t consider themselves inherently tech-savvy — “need to know the right questions,” such as, “What is the data set we’re training on?”

She points to the widespread lack of diversity among tech companies as one example of how things can go wrong in AI.

At Mitek, an identity verification technology company, she explains, some of the products rely on facial recognition.

“If a company is running recognition algorithms just using a database of its engineers, guess what? It’s going to be disproportionately training on younger, male, Caucasian and Asian American faces. And it’s going to perform badly in recognizing older, African American women.” This could present business and reputational risks to the corporation, Wells states.
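Wells’s scenario is straightforward to test. The sketch below is illustrative only, not Mitek’s tooling; the file and column names are assumptions. It shows one way a board could ask for evidence: break a model’s evaluation results out by demographic group and flag the groups it serves poorly.

```python
# Hypothetical example: check whether a face-matching model performs evenly
# across demographic groups. Assumes an evaluation file with columns
# "group", "y_true", "y_pred" (all names are illustrative).
import pandas as pd

df = pd.read_csv("face_match_eval.csv")

# Accuracy and sample size per demographic group.
per_group = (
    df.assign(correct=df["y_true"] == df["y_pred"])
      .groupby("group")["correct"]
      .agg(accuracy="mean", n="size")
      .sort_values("accuracy")
)
print(per_group)

# Flag groups whose accuracy trails the best-served group by more than 5 points,
# the kind of gap Wells warns about when training data skews young and male.
gap = per_group["accuracy"].max() - per_group["accuracy"]
print(per_group[gap > 0.05])
```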

Afua Bruce, a former executive director with the White House National Science & Technology Council who is now the director of engineering, public interest technology at nonpartisan think tank New America, agrees that it’s important to use a “wide sample set.”

She says that boards need to consider, “as companies are rolling out these solutions, what assumptions are being built into them?” and “What data is being used to train these artificial intelligence systems?”

Bruce emphasizes the weight of some of the decisions being put into the hands of technology.

“That’s one of the biggest concerns, having biases built into the system,” she says, noting that this problem isn’t limited to a single industry or sector. “It’s across the board. Financial services are definitely vulnerable; AI is being used in different HR departments in hiring and firing decisions, even criminal justice programs.”

Other Considerations

Widening the sample set in the underlying data should help, but the push for diversity among the workers behind the technology will also reduce bias in AI, Bruce says.

During her time in the administration of President Barack Obama, Bruce was an author of the report “Preparing for the Future of Artificial Intelligence,” published in October 2016.

The administration actively pushed for more diversity among the workers behind the technology.

At the time, the report found, only 18% of computer science graduates were female, down drastically from a “peak” in 1984 when 37% of graduates in the field were women.

Breaking down who is working specifically in AI among that group has proven more difficult. As the report noted, “there is a lack of consistently-reported demographic data on the AI workforce.”

The 2016 report cited numbers from one of the largest conferences on AI research, the 2015 Neural Information Processing Systems conference, which showed that only 13.7% of conference participants were female.

“The diversity challenge is not limited to gender,” the report continued. “African Americans, Hispanics, and members of other racial and ethnic minority groups are severely underrepresented, compared to their shares of the U.S. population, in the STEM workforce, in computer science, and in the technology industry workforce, including in the field of AI.”

Ravin Jesuthasan, managing director and global practice leader at Willis Towers Watson, thinks AI monitoring will be especially relevant for audit committees. “One of the key things if I were a board member is how much of the decision-making in my organization is happening by leaders and managers, and how much is being made by an algorithm?”

The next step is auditing that algorithm “and having the board members regularly monitoring what the outcomes are, where have there been issues, where do we need to engage in remediation and how are we preparing the algorithms to make sure the decision set is free of bias?”
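The audit Jesuthasan describes lends itself to a simple, recurring report. The sketch below is illustrative only, not a Willis Towers Watson method; the decision log, column names and threshold are assumptions. It compares an algorithm’s approval rates across groups, using the “four-fifths” ratio common in U.S. employment analysis as one possible yardstick.

```python
# Illustrative outcome audit: compare an algorithm's decision rates across groups.
# Assumes a decision log with columns "group" and "approved" (0/1); the file name
# and the 0.8 threshold are assumptions for the sketch.
import pandas as pd

decisions = pd.read_csv("algorithm_decisions.csv")

rates = decisions.groupby("group")["approved"].mean()
impact_ratio = rates / rates.max()

report = pd.DataFrame({"approval_rate": rates, "impact_ratio": impact_ratio})
print(report.sort_values("impact_ratio"))

# Groups below 80% of the best-served group's rate are candidates for the
# remediation review Jesuthasan describes.
print(report[report["impact_ratio"] < 0.8])
```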

As Jesuthasan notes, it’s not just about the ethical pressures of removing biases; there can be legal and regulatory ramifications.

“I think the risks from a regulatory perspective could be really severe. In terms of discrimination, there are risks in financial services. Increasingly, automated trading decisions are being made. You can’t be alienating customers.”

Moreover, under the EU’s new General Data Protection Regulation, he says, “companies have to be able to explain how technology is making decisions.”

If no one can explain how or why an algorithm made its decision, the technology could become a huge problem for boards.
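GDPR does not prescribe a particular technique, but there are well-established ways to produce the kind of explanation Jesuthasan is pointing to. The sketch below, built on synthetic data with made-up feature names, uses scikit-learn’s permutation importance to show which inputs actually drive a simple credit model’s decisions; it is one common approach among many, not the regulation’s requirement.

```python
# Hedged sketch: explain a model's decisions with permutation importance,
# i.e. how much performance drops when each input is scrambled.
# The data and feature names ("income", "debt_ratio", "tenure") are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Average drop in accuracy when each feature is shuffled, over 10 repeats.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt_ratio", "tenure"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```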

AI has caught the attention of leaders in the U.S. as well. Congressman Emanuel Cleaver (D–Mo.) sent a letter to the Consumer Financial Protection Bureau last year asking the agency to “investigate ‘fintech’ lending companies.”

“I am deeply concerned that some fintech companies may be using algorithms that shut out hardworking individuals from communities of color from accessing affordable small business credit,” Cleaver wrote in his March 2017 letter. “It is important to determine if minority-owned small businesses are being charged higher rates, or if they have been subject to predatory rates by these fintech firms.”

Then there’s the New York City Council, which passed a bill in late 2017 to create a task force “that provides recommendations on how information on agency automated decision systems may be shared with the public and how agencies may address instances where people are harmed by agency automated decision systems.”

The legislation to take on “algorithmic discrimination” was applauded by the New York Civil Liberties Union.

“Algorithms are often presumed to be objective, infallible, and unbiased,” the NYCLU wrote on its website at the time. “In fact, they are highly vulnerable to human bias. And when algorithms are flawed, they can have serious consequences.”

Meanwhile, some say concerns over bias in AI may be overblown.

Human Bias

Alex Miller, a Ph.D. student at The Wharton School who focuses on machine learning and decision-making, believes a lot of the attention to the dangers of AI is misplaced.

While Miller acknowledges that AI can be biased, he questions whether these technologies are more biased than humans. He urges corporate directors to be open-minded and review all the facts when looking at decisions made by machines.

In his research, for instance, he has found that many studies that find artificial intelligence to be dangerous don’t accurately account for what the alternative would be. He points to an article by investigative journalism nonprofit ProPublica about machine bias and the “software used across the country to predict future criminals” that is “biased against blacks.”

“Obviously that’s not a desirable outcome,” he says. But in Miller’s view, it’s important to ask: “What about the judges they would have had if they hadn’t used the algorithms?”

In other words, he knows that AI can be biased, but he is also cognizant that there’s no proof that humans in the same situation would have been less biased.

“There are clearly some bad things algorithms can do,” Miller says. “But what would happen if the algorithm wasn’t there?”

Miller explains that if directors are considering how well a machine could handle a certain responsibility, they should make sure they are comparing it against a task that a human can do, or has done.

For instance, Alphabet has taken a lot of heat for its photo search engine “fiasco” where it labeled an African American as a gorilla, Miller says. “This is clearly a horrible thing to have happened, but think about the scale at which Google is deploying its technology. It’s massive and can’t be comprehended by a human.”

Similarly, it isn’t always clear whether human or AI bias is responsible for discrimination; sometimes it will be a mix of both.

For example, in addition to Facebook’s news feed, the social media network deploys targeted ads to its users, built on an artificial intelligence platform. Facebook, which did not respond to a request for comment for this article, has made a fortune in advertising dollars, but it has had to deal with the consequences of allowing ad customers to limit their target audience in a manner that has been called discriminatory, for example by allowing customers to show housing ads only to white users. Last month, Facebook signed an agreement with Washington Attorney General Bob Ferguson agreeing to change its advertising platform nationwide after ProPublica reported on the platform’s ability to discriminate against protected groups.

“Facebook’s advertising platform allowed unlawful discrimination on the basis of race, sexual orientation, disability and religion,” Ferguson said in a July 24 press release. “That’s wrong, illegal, and unfair.”

Experts say the focus on bias in AI is not likely to fade. Therefore, Willis Towers Watson’s Jesuthasan notes, “it’s really important in this era to continuously keep learning.”

Jesuthasan says that while AI has its risks, its purpose is really to bring speed and efficiency to the workflow. He suggests that corporate boards strive to understand the “return” on the AI.

“At the end of the day it’s a tool. It’s important we know how to use this tool in its optimal fashion,” he says.

Published by Financial Times Agenda Week and written by Stephanie Forshee.


Donna Wells

Board Director, Tech CEO, F500 and Mint.com CMO. Working with companies solving interesting problems. Teaching the next generation of entrepreneurs at Stanford.