
Informed Citizens are Better Citizens

by Michael Barbella

We all come into contact with facial recognition software every day, sometimes without even knowing it. The technology is used in everything from unlocking iPhones to opening doors to paying for purchases.

Facial recognition technology has grown in popularity in recent years, becoming a preferred surveillance tool for police departments, airports, schools, retail outlets, sports venues, churches and government offices. Some airlines are using the technology to replace boarding passes, and three arenas, including Madison Square Garden in Manhattan, are testing face-scanning technology.

What is facial recognition?

Facial identification technology dates back to the 1960s and is based on research conducted by Woodrow Wilson “Woody” Bledsoe, a mathematician, computer scientist, and artificial intelligence pioneer. Intent on creating a “computer person,” Bledsoe developed facial identification technology through pattern recognition and facial feature coordinates. He essentially taught machines to divide a face into features, compare distances between those attributes, and then process the measurements to recognize a specific face.

Today’s technology uses the same theory, mapping facial geometry, such as the distance between the eyes and forehead-to-chin measurements, to create a “facial signature.” The resulting mathematical formula is then compared against a database of known faces for a match. In a January 2020 Congressional hearing, Daniel Castro, vice president of the Information Technology and Innovation Foundation, explained how the technology works.
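
To make this concrete, below is a minimal sketch in Python of a distance-based “facial signature,” assuming facial landmarks (eyes, nose tip, chin) have already been located in an image. The landmark names, coordinates and inter-eye normalization are illustrative assumptions for this example, not any vendor’s actual algorithm.

```python
import math

# Hypothetical landmark positions (pixel coordinates) for one face.
landmarks = {
    "left_eye": (120.0, 140.0),
    "right_eye": (180.0, 141.0),
    "nose_tip": (150.0, 180.0),
    "chin": (151.0, 250.0),
}

def facial_signature(points):
    """Build a signature from every pairwise distance between landmarks,
    normalized by the inter-eye distance so image scale drops out."""
    names = sorted(points)
    eye_dist = math.dist(points["left_eye"], points["right_eye"])
    return [
        math.dist(points[a], points[b]) / eye_dist
        for i, a in enumerate(names)
        for b in names[i + 1:]
    ]

print(facial_signature(landmarks))  # a short list of ratios: the "signature"
```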

“The technology compares faces automatically, either by searching for similar faces in a database (one-to-many matching) or by verifying the degree to which two faces match (one-to-one matching),” said Castro. “In the former case, facial recognition tries to answer the question, ‘who is this person?’ and in the latter, it tries to answer the question ‘is this person who they say they are?’”
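
Castro’s two matching modes can be sketched as two small functions over signature vectors like the one above. The similarity formula and the 0.9 threshold here are arbitrary placeholders for illustration, not values from the hearing.

```python
import math

def similarity(sig_a, sig_b):
    """Turn the distance between two signature vectors into a 0-to-1 score."""
    return 1.0 / (1.0 + math.dist(sig_a, sig_b))

def verify(sig_a, sig_b, threshold=0.9):
    """One-to-one matching: 'is this person who they say they are?'"""
    return similarity(sig_a, sig_b) >= threshold

def identify(probe, database, threshold=0.9):
    """One-to-many matching: 'who is this person?' Returns the best
    database entry that clears the threshold, or None for no match."""
    best_name, best_score = None, threshold
    for name, sig in database.items():
        score = similarity(probe, sig)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```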

While the technology has proven useful in catching criminals and finding missing people, it has also raised privacy concerns and prompted reports of bias baked into the software.

Bias baked in

So, can facial recognition software show bias? A 2019 report from the National Institute of Standards and Technology (NIST) revealed that it can. NIST tested approximately 200 facial recognition systems on a total of eight million photos. The report revealed that African Americans and Asian Americans are between 10 and 100 times more likely to be misidentified by facial recognition technology than white people. In addition, women are more likely to be misidentified than men. This phenomenon is called “algorithmic bias.” Essentially, because algorithms are created by humans, they can carry the flaws and biases of the people who built them.
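
One way to picture what an audit like NIST’s measures: run the same matcher over a labeled set of test photos and compare error rates across demographic groups. The sketch below shows only the bookkeeping, with made-up data; it is not NIST’s methodology.

```python
from collections import defaultdict

def error_rates_by_group(results):
    """results: (demographic_group, was_misidentified) pairs from a test run."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, missed in results:
        totals[group] += 1
        errors[group] += int(missed)
    return {group: errors[group] / totals[group] for group in totals}

# Invented outcomes, just to show the shape of the comparison.
test_results = [("group_a", False), ("group_a", False),
                ("group_b", True), ("group_b", False)]
print(error_rates_by_group(test_results))  # {'group_a': 0.0, 'group_b': 0.5}
```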

“The bias is generally due to a lack of diversity in the training data,” says Ellen P. Goodman, a professor at Rutgers Law School—Camden who specializes in information policy law. “This is a rampant problem in algorithmic processes and needs to be addressed through self-regulation, ethics, audits, reporting and possibly regulation.”

In 2018, the American Civil Liberties Union (ACLU) used Rekognition, facial recognition software developed by Amazon, to compare photos of members of Congress against a database of 25,000 mug shots. The software made 28 misidentifications, producing false positives that labeled those members of Congress as criminals. Among the misidentified were six members of the Congressional Black Caucus. Amazon contends that the ACLU set the software’s confidence threshold too low; the company recommends that law enforcement set the threshold to 99 percent. Critics point out that Amazon could lock the threshold at that level and not allow it to be changed, but it does not.
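
The threshold dispute comes down to a simple filter: every candidate match carries a confidence score, and whoever runs the search picks the cutoff. The sketch below uses invented names and made-up scores in plain Python; it is not Rekognition’s actual interface.

```python
def matches_above(candidates, threshold):
    """Keep only candidate matches whose confidence clears the threshold."""
    return [(name, conf) for name, conf in candidates if conf >= threshold]

# Hypothetical candidate matches with made-up confidence scores.
candidates = [("mugshot_1041", 0.82), ("mugshot_0007", 0.995)]
print(matches_above(candidates, 0.80))  # looser cutoff: both faces "match"
print(matches_above(candidates, 0.99))  # 99 percent cutoff: only one survives
```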

It’s not just Amazon. Studies have revealed flaws in facial recognition algorithms developed by IBM and Microsoft as well. Joy Buolamwini, a computer scientist and founder of the Algorithmic Justice League, wrote about her evaluations of artificial intelligence (AI) systems in Time magazine.

“The companies I evaluated had error rates of no more than one percent for lighter-skinned men. For darker-skinned women, the errors soared to 35 percent,” Buolamwini wrote. “AI systems from leading companies have failed to correctly classify the faces of Oprah Winfrey, Michelle Obama and Serena Williams. When technology denigrates even these iconic women, it is time to re-examine how these systems are built and who they truly serve.”

Dr. Donnetrice Allison, a professor of Africana Studies at Stockton University, compares AI misidentifications to false eyewitness testimony. “The bias will likely cause false recognitions, just as witness testimony has been found to be flawed when it comes to people of color.” Dr. Allison says she isn’t making a judgment on whether the technology should or shouldn’t be used, but she contends that “the criminal justice system is flawed and people of color endure the greatest miscarriages of justice as a result. I suspect this will only add to that fact rather than fix it.”

Outlawed in some states

The wide variation in results, particularly for women and darker-skinned people, is fueling support for facial recognition regulations. Some states, including California, New Hampshire and Oregon, ban the use of face scanning and other biometric tracking technology in police body cameras. The cities of Oakland and San Francisco have banned the technology outright within their respective city limits.

Michigan outlawed facial recognition technology in December 2019, and a statewide ban is pending before the Massachusetts State Senate Committee on Public Safety and Homeland Security. Currently, four Massachusetts municipalities bar government use of facial recognition technology.

Utah lawmakers have concerns about the technology as well, but they are not proposing to ban it. Instead, state officials want to limit its use to the state’s Department of Public Safety, which was criticized last year for running facial recognition searches, without warrants, on behalf of the Federal Bureau of Investigation and U.S. Immigration and Customs Enforcement. The bill would regulate the Department’s use of face-scanning software by requiring police to submit a written request that includes a case number, a statement of the crime and a narrative supporting the claim that the subject in question is connected to the crime. In addition, police would not be allowed to use the technology for civil immigration violations.

Here in the Garden State, as in many jurisdictions nationwide, police officers were using the Clearview AI app, which accesses a database of three billion photos collected from websites like Facebook, YouTube, Twitter and Venmo. In January 2020, New Jersey Attorney General Gurbir S. Grewal advised law enforcement to stop using the app.

“I’m not categorically opposed to using any of these types of tools or technologies that make it easier for us to solve crimes, and to catch child predators or other dangerous criminals,” Grewal told The New York Times. “But we need to have a full understanding of what is happening here and ensure there are appropriate safeguards.”

Within days of New Jersey’s order, New York State Senator Brad Hoylman introduced legislation to prohibit law enforcement officers from using facial recognition and other biometric surveillance technology in the course of their duties. The bill would also create a Task Force to study the issue and recommend standards for possible future use of the tool.

“The jurisdictions that ban the technology are acting in accordance with the ‘precautionary principle’ to slow things down until we know more about its uses and abuses,” says Professor Goodman. “However, I don’t think the bans will hold. The technology will be used and we need to put guardrails around it.”

On the federal level, the Algorithmic Accountability Act of 2019, drafted by U.S. Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) and U.S. Representative Yvette D. Clarke (D-NY), stalled in the House Energy and Commerce Committee. The Act would have required companies to study and fix flawed computer algorithms that produce inaccurate, unfair, biased or discriminatory decisions.

“Computers are increasingly involved in the most important decisions affecting Americans’ lives—whether or not someone can buy a home, get a job or even go to jail. But instead of eliminating bias, too often these algorithms depend on biased assumptions or data that can actually reinforce discrimination against women and people of color,” Senator Wyden said when the legislation was introduced. “Our bill requires companies to study the algorithms they use, identify bias in these systems and fix any discrimination or bias they find.”

There is currently no word on whether the legislation will be re-introduced at some point. The House’s Oversight and Reform Committee is hoping to introduce facial recognition legislation in the “near future.”

Discussion Questions

  1. How do you feel about facial recognition software in general? Do you view it as an invasion of privacy? Why or why not?
  2. What do you think of prohibiting the use of AI technology when investigating crimes? What are the benefits? What are the drawbacks? Should its use be unlimited?
  3. When facial recognition software contains bias, what potential problems do you see since the use of it is so widespread?

Glossary Words
algorithm: a process or set of rules to be followed in calculations or other problem-solving operations.

This article originally appeared in the spring 2020 edition of Respect.