The promise and peril of facial-recognition technology
In 2011, John H. Gass received a disconcerting message from the Massachusetts Registry of Motor Vehicles: His license had been revoked, and he was required to stop driving immediately.
Gass hadn’t received a traffic ticket in years, The Boston Globe reported at the time. But he had been involved in a different kind of traffic incident. The electronic version of his license photo had collided with a facial-recognition system used for anti-terrorism surveillance.
A year earlier, facial-recognition systems triggered more than 1,000 police investigations, according to the Globe report. In a lawsuit Gass filed against the state, citing wage losses during the week and a half it took him to get his license reinstated, he estimated that the system could have similarly affected hundreds of people.
Facial-recognition technology, once a science-fiction dream, is now increasingly hard to avoid in everyday places, systems, and products. It is already incorporated in video game consoles, computing devices, businesses, churches, social-media services, messaging apps, airports, retail stores, and law enforcement systems.
“People don’t really understand the extent to which it’s being used now,” says John Simpson, director of Consumer Watchdog’s Privacy Project. “All sorts of privacy issues surround this technology,” he adds, and “safeguards aren’t being implemented.”
While proponents say the technology can greatly benefit personal productivity and public safety, privacy watchdogs worry that it will be misused.
Rules, laws, guidelines, and safeguards are needed, they say, to prevent questionable practices such as fusing databases of driver’s license photos with images captured by stealthy public-surveillance cameras, to address shortcomings such as people getting incorrectly tagged as criminals, and to prevent sensitive databases from falling into the wrong hands.
A Wild West of facial recognition
U.S. police departments are “eagerly” adopting facial-recognition software, according to a recent New York Times article, as the FBI prepares to make a biometrics database housing tens of millions of personally identifiable photos accessible to more than 18,000 local, state, federal, and even international law enforcement agencies.
The Electronic Frontier Foundation says the FBI database could include millions of photos of people with no criminal record.
“Once the technology is deployed and being used,” says Jennifer Lynch, an EFF senior staff attorney, “it becomes more difficult to restrict the use of the data that’s being collected.”
In a July report about facial recognition, the U.S. Government Accountability Office concluded that “the future trajectory of the technology raises questions about consumer privacy.” Lawmakers “need to adapt federal privacy law,” it said, to better protect consumers.
Right now, “it’s pretty much a Wild West, when it comes to the use of this technology,” says Jay Stanley, a senior policy analyst at the American Civil Liberties Union. “It’s one thing to have dumb video cameras whose footage is stored and viewed only when there’s a crime,” he says.
It’s another thing, Stanley says, if people watching the cameras know “it’s you walking down that street and [are] mixing that with other data.”
Privacy vs. convenience?
The Department of Commerce’s National Telecommunications & Information Administration (NTIA) last year began hosting meetings with business and consumer privacy groups to develop a code of conduct for the use of facial recognition. But nine privacy and consumer groups abandoned the NTIA process this summer, citing a fundamental disagreement.
“People should be able to walk down a public street without fear that companies they’ve never heard of are tracking their every movement—and identifying them by name—using facial-recognition technology,” several of the groups, including the EFF and Consumer Watchdog, said in a statement explaining their walkout. “Unfortunately, we have been unable to obtain agreement even with that basic, specific premise.”
Industry groups such as NetChoice, which represents Facebook, Google, and other companies using the technology, are still participating in the NTIA meetings. In Illinois, one of only two states that have laws governing facial recognition, Facebook faces litigation over its use of the technology.
“Regarding facial recognition, there’s a knee-jerk reaction to becoming alarmed,” says Carl Szabo, NetChoice policy counsel. But “there’s a litany of possible convenient uses for the technology.”
For example, Szabo says, when creating a family album for the holidays, he used the facial-recognition feature of a photo storage application to sift through thousands of pictures in seconds.
“Some people would argue that for me to use facial recognition on my private albums, I’d have had to obtain express consent from my wife and everyone else in those photographs,” he says. “That doesn’t make sense.”
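To make the appeal concrete, here is a minimal, hypothetical sketch of how such an album-building feature could search a photo library for one person's face. It uses the open-source face_recognition Python library; the library, the file paths, and the matching tolerance are illustrative assumptions, not details of the application Szabo describes.

# Illustrative sketch only: one way a "find every photo of this person"
# feature might work, using the open-source face_recognition library
# (an assumption; not necessarily what any commercial photo app uses).
from pathlib import Path

import face_recognition

# Build a reference encoding from a single known photo of the person
# (assumes exactly one face appears in the reference image).
reference_image = face_recognition.load_image_file("reference/jane.jpg")
reference_encoding = face_recognition.face_encodings(reference_image)[0]

matches = []
for photo in Path("albums").glob("*.jpg"):
    image = face_recognition.load_image_file(str(photo))
    # A photo may contain zero, one, or many faces; encode each one found.
    for encoding in face_recognition.face_encodings(image):
        # compare_faces returns one boolean per known encoding.
        if face_recognition.compare_faces([reference_encoding], encoding, tolerance=0.6)[0]:
            matches.append(photo.name)
            break

print(f"Photos containing the person: {matches}")

In a sketch like this, every detected face is reduced to a 128-number encoding, and matching is just a distance comparison against the reference encoding, which is why thousands of pictures can be scanned in seconds.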
NetChoice agrees with the NTIA that businesses using facial recognition should give consumers control over the sharing of their data and should protect that data. But Szabo warns against stunting the development and adoption of facial recognition through excessive laws and regulations.
“The last thing we want is to stifle innovation unnecessarily,” Szabo says.