A biometrics pioneer’s plea for safeguards, privacy education (Q&A)

One of the pioneers of the biometrics business has a cautionary message for the industry that made him wealthy: When it comes to personal privacy, there are certain red lines you shouldn’t cross.

In the early 1990s, Joseph Atick, a physicist by training, helped advance the ability of computers to recognize facial features. He led the biometrics company he founded, Visionics, through two mergers and then an acquisition in 2011 for $1.1 billion. He had previously led a computational-neuroscience laboratory at Rockefeller University and a neural-cybernetics group at the Institute for Advanced Study.

Like J. Robert Oppenheimer, the father of the atomic bomb who famously warned about the destructive power of the technology he helped create, Atick wants safeguards to prevent the unfettered proliferation of the technology he helped build. Now a consultant in the identity management industry, he is increasingly worried that a lack of societal sensitivity to privacy might be paving the way for the abuse of biometric technologies.

The Parallax asked Atick why he’s taken it upon himself to urge colleagues in the field to restrain themselves. Here’s an edited transcript of our conversation.

Joseph Atick

The FBI’s Next Generation Identification database apparently accepts a 20 percent error rate on facial-recognition matches. Does that concern you?

We have to be cautious about interpreting these results. It’s similar to forensic fingerprints found at a crime scene: They can help investigators by providing a lead, but they don’t conclusively establish the identity of the criminal, and a match alone doesn’t mean somebody will be convicted on that evidence.

That said, oversight—especially when you’re using a powerful technology such as biometrics and face recognition—should be built into any program. This is a tool we cannot rely on without human judgment and human supervision. It can tell you that someone is a potential suspect, but it doesn’t mean that the person is guilty. It’s up to the prosecution to prove the case.

How much regulation is too much?

It’s always preferable to start with industry self-regulation. Nobody knows the potential for abuse we must avoid better than the people who developed the technology. Unfortunately, people can be shortsighted and say they want no regulation or oversight, which leads us to a point with no checks or balances. We need a framework that says there are certain things you just can’t do with this technology. There are certain fundamental freedoms that we cherish and should protect.

Should there be a comprehensive federal law?

Not for face recognition specifically, but given our new technological reality, we do need a federal law, like Europe’s, that defines privacy. What reasonable expectations of privacy can we have, and what should we expect in different environments? We basically need a privacy bill of rights, one that isn’t prescriptive and wouldn’t inhibit innovation.

Do you expect companies to use biometrics to identify shoppers?

It’s a very tempting proposition for retailers. Look at the power of behavioral advertising online. We all live it. The holy grail of marketing is to develop profiles of individuals, then present them with what they need—even before they actually need it. There’s a ton of money to be made.

We trade privacy for convenience on the Internet. Will that also happen with biometrics?

Yes. Younger people don’t view sharing more and more of their private information online as a problem, so they ask, “What’s the big deal about sharing my biometric identity?”

How did we get to this point?

There’s a lack of understanding of what it means to lose your privacy. Shame on us if we fail to build protections around the use of new technologies.

Some say the answer is to prevent the deployment of biometric technologies altogether.

An outright ban on biometric technology is not the answer; there are legitimate uses for it. But that is exactly why privacy and technology advocates need to be talking now, before a crisis. Now is the time to prevent one.

I’m a technologist and an optimist, but I’m also a realist. As someone who directly contributed to building these technologies, I recognize my responsibility to ensure they are not abused. We must not let ourselves slide down a slippery slope because we’ve become desensitized.

Do you see the potential for abuse?

Yes, I do. But does that mean we should ban the technology? Absolutely not. We need to educate ourselves about the potential dangers. Maybe that will lead to more head-scratching and thinking. Or maybe it will take something bad happening for people to realize that it could happen to them too.
