Grappling with concerns over the accuracy, use and accelerating rollout of facial recognition technology, Democratic and Republican leaders on a House committee plan to soon introduce legislation related to the technology.

The forthcoming bill that Democrats are working on will be “comprehensive,” Rep. Carolyn Maloney (D-N.Y.), chairwoman of the Oversight and Reform Committee, said at the outset of a hearing exploring private sector uses of facial recognition technology. The bill and its markup are coming in the “near future” and the hope is it will be bipartisan, she said.

“We have a responsibility to not only encourage innovation but to protect the privacy and safety of American consumers,” Maloney said in her opening remarks. “That means educating our fellow members and the American people on the different uses of the technology and distinguishing between local subjective identification and surveillance uses. That also means exploring what protections are currently in place to protect civil rights, consumer privacy and data security, and prevent misidentifications, as well as providing recommendations for future legislation and regulation.”

Maloney didn’t provide specifics on the upcoming bill, but Rep. Jim Jordan (R-Ohio), the ranking member on the panel, said Republicans are also working on a bipartisan bill aimed at the way the federal government uses the technology.

Of the bill Republicans are working on, Jordan said, “That will provide transparency and accountability with respect to the federal government’s purchase and use of this technology and this software.”

A GOP official with knowledge of the forthcoming legislation told Defense Daily later that, “Right now, it’s unclear whether current privacy and data laws sufficiently check the federal government’s use of the facial recognition technology. Any legislation put forth must create transparency, so that we evaluate and establish appropriate safeguards for future federal use.”

While much of the hearing did focus on the private sector’s use of face recognition systems, there was plenty of discussion about government uses of the biometric technology and concerns expressed by members of both parties that it could open the door to a police state.

“What we should seek is a means by which to make sure that Big Brother is not coming,” Rep. Clay Higgins (R-La.) said during the hearing, referring to the fictional surveillance state imagined in George Orwell’s 1949 novel, Nineteen Eighty-Four. Higgins said face recognition “is coming and it’s here,” and the accuracy will continue to improve.

Higgins said he and Jordan oppose running live-streamed video of “free Americans,” as they travel and enter businesses, through a database “and all of a sudden the police show up to interview that guy.” But, he said, digital images have been used for years to solve crimes and remain “an important tool” here.

Rep. Alexandria Ocasio-Cortez (D-N.Y.) highlighted that face recognition technology is a “potential tool of authoritarian regimes” that could be used by a state, citing China as an example, or by large companies.

The hearing was the third held by the committee dating back to last May examining face recognition technology, with the first two meetings focused on reviewing the technology and its impact on civil rights and liberties, and how it is being used by federal law enforcement authorities. The Department of Homeland Security’s Customs and Border Protection agency is currently rolling out face recognition technology to verify the departure of travelers on international flights leaving the U.S. and to record the entry of individuals arriving in the country.

The hearing also followed the release in December of a government report finding that demographics such as race, age, gender and country of origin make a difference in the accuracy rates of face recognition algorithms (Defense Daily, Dec. 19, 2019). Charles Romine, director of the Information Technology Laboratory at the National Institute of Standards and Technology (NIST), which issued the report, pointed out several times during the hearing that the best algorithms don’t have the accuracy issues that other algorithms have and that the demographic effects with the better algorithms “were minimal.”

Romine said it’s important for the users of face recognition systems to understand the accuracy of the algorithms that they are acquiring and the context, noting that some use cases are relatively less risky, such as identifying a family member in a family photo, compared to others, like identifying a suspect where accurate identification is critical.

Asked by Rep. Jimmy Gomez (D-Calif.) about the harm in using algorithms that are biased against different demographics, Brenda Leong, senior counsel and director of artificial intelligence and ethics at the Future of Privacy Forum, said that the large number of algorithms tested by NIST doesn’t reflect how they are used in the market.

The “vast majority” of the technology being used by federal, state and local governments, and by the private sector in venues such as stadiums and amusement parks, relies on the best algorithms, which are producing “very low error rates,” Leong answered. The problem is not knowing where the poor-performing algorithms are being used and are “causing the most harm,” she said.

The Future of Privacy Forum bills itself as a think tank and advocacy group for leadership and scholarship around privacy to advance “principled data practices in support of emerging technologies.”