Face recognition algorithms become less accurate when applied to photos of people that have been altered with a digitally created mask, even for the best commercial algorithms performing one-to-one matches, the National Institute of Standards and Technology (NIST) says in a new study.
The algorithms tested by NIST were mostly developed before COVID-19 hit and not with masked faces in mind. A later evaluation will assess software developed for masked faces.
NIST says that the best of the 89 algorithms it tested had error rates between 5 and 50 percent in matching digitally applied face masks with photos of the same person without a mask.
“With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces,” Mei Ngan, a computer scientist with NIST and author of the new report, said in a statement.
The report, Ongoing Face Recognition Vendor Test (FRVT) Part 6A: Face recognition accuracy with masks using pre-COVID-19 algorithms, anticipates that going forward there will be greater demand to match faces without people removing their masks.
“This presents a problem for face recognition, because regions of the face occluded by masks—the mouth and nose—include information useful for both recognition and, potentially, the detection state that precedes it,” the 58-page study says. The more a mask covers a person’s nose, the less well algorithms perform, it says.
The main findings in the report include that the most accurate algorithms for one-to-one matching typically fail to authenticate about 0.3 percent of persons without masks, a rate that rises to about 5 percent with the highest-coverage masks.
“This is noteworthy given that around 70 percent of the face area is occluded by the mask,” the study says.
Other algorithms that are also highly accurate with unmasked faces failed to match between 20 and 50 percent of the masked images, the report says.
On the other hand, the testing shows that the algorithms don’t typically provide a match between the masked images of different people and, in fact, false positives declined a bit.
“The modest decline in false positive rates shows that occlusion with masks does not undermine this aspect of security,” NIST says.
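The two error types NIST reports—false negatives (the system rejects the right person) and false positives (it accepts the wrong person)—are computed by comparing similarity scores against a decision threshold. A minimal sketch of that arithmetic in Python; the scores and threshold below are invented example values for illustration, not data from the study:

```python
# Illustrative sketch: one-to-one matching error rates at a fixed
# decision threshold. All score values here are made up, not NIST data.

def error_rates(genuine_scores, impostor_scores, threshold):
    """Return (false_negative_rate, false_positive_rate).

    genuine_scores:  similarity scores for same-person comparisons
    impostor_scores: similarity scores for different-person comparisons
    A genuine comparison scoring below the threshold is a false
    negative; an impostor comparison scoring at or above it is a
    false positive.
    """
    fn = sum(1 for s in genuine_scores if s < threshold)
    fp = sum(1 for s in impostor_scores if s >= threshold)
    return fn / len(genuine_scores), fp / len(impostor_scores)

# Masked genuine comparisons tend to score lower, raising the false
# negative rate, while impostor scores stay low, so the false positive
# rate is unaffected -- mirroring the pattern NIST describes.
genuine = [0.91, 0.88, 0.52, 0.79, 0.95, 0.48, 0.86, 0.90, 0.83, 0.77]
impostor = [0.12, 0.30, 0.25, 0.08, 0.41, 0.19, 0.22, 0.15, 0.28, 0.11]

fnr, fpr = error_rates(genuine, impostor, threshold=0.6)
print(f"FNR = {fnr:.0%}, FPR = {fpr:.0%}")  # prints "FNR = 20%, FPR = 0%"
```

Raising the threshold would trade fewer false positives for more false negatives, which is why mask-induced score drops hit authentication (false negatives) rather than this security property.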
Another finding is that algorithms more often failed to process masked faces at all, which prevented them from performing a comparison.
Features of a mask also impact accuracy, the report says. For example, wider masks cover more of the face than rounder N95 masks and “generally give false negative rates about a factor of two higher than do rounder type masks,” it says.
The color of a mask also matters. The testing used light-blue and black masks and found that most algorithms do worse with the black masks.
“The reason for observed accuracy differences between mask color is unknown but is a point for consideration by impacted developers,” the report says.
The study used 89 algorithms and a dataset of 6.2 million photographs.
In addition to evaluating algorithms developed to account for masked faces, NIST says future studies will also evaluate how well algorithms perform in matching one person’s face against a database of multiple faces.
Last December, NIST published an FRVT report showing that demographics produce different rates of accuracy for algorithms based on factors such as sex, age, race or country of birth. However, the best performing algorithms had very low error rates, it said.