Ethical AI standards for DoD unveiled at Georgetown

Published November 3, 2019

A group of tech leaders and academics appointed by the United States Department of Defense (DoD) unveiled newly proposed ethical principles for its use of artificial intelligence (AI) at Georgetown University on Oct. 31. Known as the Defense Innovation Board (DIB), the group also offered recommendations to modernize the Department’s data technology, network security, and workforce.

This announcement comes on the heels of new attempts to implement AI within the DoD, such as through the Joint Artificial Intelligence Center. “We’re very, very proud about Joint AI Center, which we had a little bit of a role in suggesting and promoting,” said Eric Schmidt, former Chief Executive Officer of Google and chair of the board.

However, detractors have raised concerns about the potential of AI to cause damage in a military context. Last year, thousands of Google employees protested the company’s work for the DoD’s Project Maven, which used AI to analyze the immense quantities of surveillance footage collected by the DoD. 

“Building this technology to assist the U.S. Government in military surveillance — and potentially lethal outcomes — is not acceptable,” a letter signed by the employees reads.

The DIB, in crafting these principles, spent 15 months engaging with various stakeholders and members of the public. “We saw people from very far on one side to very far on the other side about what is an ethical use of AI, up to and including people who believe there is no ethical use of AI,” said board member Michael McQuade.

The finished principles state that the DoD should use AI in a way that is “responsible,” “equitable,” “traceable,” “reliable,” and “governable.” These principles will now be transmitted to the Department, which can accept, modify, or reject the recommendations.

“There are certain things about AI that we do believe represent fundamental differences to just another piece of technology,” McQuade said, highlighting AI’s potential to make important decisions without human judgment and to make these decisions in a biased manner.

AI systems have already been observed exhibiting biases against various groups. “DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems,” the principles read. 

In addition, the board discussed the difficulty of ensuring that AI systems behave as intended. “They’re capable of exhibiting and evolving kinds of behavior that are very difficult for the designer to predict,” board member Danny Hillis said. “I think this is actually one of the most dangerous potential aspects about them. I’ve had experience of trying to govern these systems and them discovering very tricky ways around the rules I try to put on them.” 

The proposed principles state that humans or automatic processes should be able to turn off AI systems displaying unwanted behavior.

To better implement and regulate AI, the board called for further research and development of these systems. “The technology is very much still under development; it’s changing every day,” board member Daniela Rus said. “Investment in pushing this change forward so we can get better foundations and more trustworthy behavior is important, and this has to be considered.”

Going forward, Georgetown will likely continue engaging with research and policymaking around AI. A new research center, the Center for Security and Emerging Technology, was launched this year within the School of Foreign Service (SFS) to study and advise policymakers on the intersection of AI and national security, and in September, the university hosted a conference on that same topic.

Ryan Remmel was an assistant news editor for the Voice.
