The White House recently released guidelines on the development and deployment of artificial intelligence technology by federal agencies.
While privacy and safety concerns surround the growing use and development of AI tools, the White House seems to be taking a softer approach toward regulation of the technology.
In its memorandum published Wednesday, the White House said “federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.”
Simply put, artificial intelligence is a branch of computer science that trains machines or software to operate and solve problems as humans would.
“Artificial intelligence is building systems that do things that if a human being were to do them, you’d consider them intelligent,” said Kristian Hammond, computer science professor at Northwestern University. “So things that can read, produce text, that can learn, that can drive, wander around the world, answer questions – these are all things that are sort of central to human reasoning.”
AI allows smart speakers to recognize human commands and self-driving vehicles to operate hands-free, for instance. Many of these systems are trained on large amounts of example data in a process called deep learning, which helps the software interpret and respond to new stimuli.
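To make the training idea concrete, here is a much-simplified sketch: a single artificial neuron (a perceptron, far simpler than the deep networks used in real products) is shown labeled examples of the logical AND function and nudges its internal weights after each mistake until it answers correctly. All names and parameters are illustrative.

```python
# Toy illustration of learning from examples: a single perceptron
# adjusts its weights whenever it misclassifies a training example.
import numpy as np

# Training examples: inputs and the desired output (logical AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)  # weights, one per input
b = 0.0          # bias term

# Repeatedly show every example; correct the weights on each error.
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = int(xi @ w + b > 0)   # the neuron's current answer
        w += (yi - pred) * xi        # nudge weights toward the truth
        b += (yi - pred)

preds = [int(xi @ w + b > 0) for xi in X]
print(preds)  # [0, 0, 0, 1] -- the neuron has learned AND
```

Deep learning stacks many layers of such units and trains them on millions of examples, but the core loop is the same: compare the system's answer to the desired one, then adjust.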
The White House’s memo was published nearly a year after President Donald Trump signed an executive order forming the American AI Initiative. Hammond said it essentially formalizes a hands-off approach to the technology.
“They’re essentially saying, ‘Look, don’t do anything that gets in the way,’” Hammond said. “The notion behind these guidelines is that these are technologies that hold an advantage for us both in terms of commerce and in terms of security.”
But unshackled AI technology could have troubling outcomes. In a November 2019 article, The New York Times reported on how the Chinese government uses surveillance cameras and facial-recognition software to track members of its Uighur Muslim minority.
The memorandum hints at potential federal overrides of AI legislation enacted by state or local governments.
“In some circumstances, agencies may use their authority to address inconsistent, burdensome, and duplicative State laws that prevent the emergence of a national market,” the memorandum reads.
San Francisco banned the use of facial-recognition technology by police and other city agencies in 2019, but Hammond said the technology could be valuable in certain public safety scenarios.
“People might think there are privacy and misuse issues, but those are very abstract,” Hammond said. “But then there’s that moment when you have a child disappear and the moment that child disappears, I want control of every single camera in my city and I want to have my child’s face [in the software] and I want to be able to find that child.”