Insurance

The “Coded Gaze” of Facial Recognition Technology, Discrimination Lawsuits, and Your Insurance Program

J. Wylie Donald of McCarter & English discusses insurance coverage of potential discrimination claims in relation to facial recognition technology.

A coded gaze is a coded gaze is a coded gaze. With apologies to Gertrude Stein – see, e.g., rose is a rose, etc. – she never bought insurance for artificial intelligence applications. Specifically, she never had to concern herself with claims of racial discrimination arising from the adoption of facial recognition technology.

What are we talking about here? Is it really possible that surveillance cameras, scanning technologies or security protocols could lead to a race discrimination claim? It is just software, right? As one metropolitan police department’s website claimed: facial recognition simply “does not see race.”[1] Ones and zeros, right? The camera takes a picture, the software finds a match. No directions, no commentary, no words at all.

Blind faith in the code’s lack of bias is exactly that: blind. Facial recognition technology carries a bias that is built into its core. Looking after your business requires that you understand the technology before you adopt it, and understand the risks you may be assuming. You need to confirm that your insurance programs pick up claims you have never seen before. The best way to do that, of course, is to consider the potential claims beforehand. So how does facial recognition technology work? How does racial bias enter the mix, and how may insurance address a discrimination claim if one arises?

Facial Recognition Technology

You may be familiar with facial recognition technology (also called facial analysis) such as Facebook photo tagging,[2] your Enhanced Digital Driver’s License,[3] or your iPhone’s Face ID.[4] You may have experienced other technology in the workplace. The Government Accountability Office reviewed facial recognition technology in 2015 (GAO Report) and identified various ways the technology was being used (or was anticipated to be used):

photographs of individuals who walk into a store are compared against a database of images of known shoplifters, members of organized retail crime syndicates, or other persons of interest; matches are forwarded to security personnel/management for action;

security systems at financial institutions use facial recognition systems to identify robbery suspects or accomplices;

security systems unlock facilities, personal computers, and other personal electronics after a user’s identity is confirmed through facial recognition; and

workplace time and attendance is confirmed using facial recognition.[5]

What is probably less familiar is how it works. First we need a camera – technology that is almost two centuries old. Next we need some code – which is of more recent vintage. We have to be able to identify a face in a photograph and make sure it is normalized (scaled, rotated and aligned) so that all faces in the program are the same size and in the same position. Then we have to measure that face in some way. Some researchers use a collection of geometric shapes imposed on the face, while others use a standardized face approach and measure deviation from that standard. Another group focuses on details of skin texture (e.g., pores and hair follicles). Using one or all of these techniques, a faceprint is created. A single faceprint is not particularly helpful, so we need to find a database full of faces. We could use FBI mugshots or state driver's licenses. Facebook would be a treasure trove. Then we need one more bit of code, an algorithm to compare the subject faceprint to the faceprints of all those faces in the database.
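To make the mechanics concrete, below is a minimal sketch of that pipeline in Python using the open-source face_recognition library (a wrapper around dlib). The image file names and the toy "database" are placeholders, and real deployments differ in how they detect, normalize and encode faces; treat this as an illustration under those assumptions, not a production system.

```python
# Minimal face-matching pipeline sketch (illustrative only).
# Assumes the open-source `face_recognition` library (a dlib wrapper);
# the file names and the toy "database" below are placeholders.
import face_recognition

# 1. Camera output: load the probe photograph.
probe_image = face_recognition.load_image_file("probe.jpg")

# 2. Detect and encode the face: the library locates each face,
#    normalizes it internally, and reduces it to a 128-number "faceprint".
probe_encodings = face_recognition.face_encodings(probe_image)
if not probe_encodings:
    raise ValueError("No face detected in the probe image.")
probe_faceprint = probe_encodings[0]

# 3. A toy "database" of known faceprints (think mugshots or license photos).
database = {}
for name, path in [("person_a", "person_a.jpg"), ("person_b", "person_b.jpg")]:
    encodings = face_recognition.face_encodings(face_recognition.load_image_file(path))
    if encodings:
        database[name] = encodings[0]

# 4. Compare the probe faceprint against every faceprint in the database.
#    face_distance returns a measure of similarity, not a yes/no answer.
distances = face_recognition.face_distance(list(database.values()), probe_faceprint)
for name, distance in zip(database, distances):
    print(f"{name}: distance = {distance:.3f} (lower means more similar)")
```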

Obviously, the algorithm will not be perfect. A face might be frowning or smiling, might be wearing glasses or a hat, might be in bad lighting, or might be your sister. So the comparison’s output will be a numerical score showing how similar two faces are (based on the methodology of the algorithm). As stated by researchers at Georgetown University Law Center, “Face recognition is inherently probabilistic: It does not produce binary ‘yes’ or ‘no’ answers, but rather identifies more likely or less likely matches.”[6]
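A compact way to see what "probabilistic" means in practice: the system hands back similarity scores, and someone must choose a cutoff that trades false matches against false non-matches. The scores and threshold below are invented solely for illustration.

```python
# Illustration of probabilistic matching: the output is a ranked set of
# similarity scores, never a definitive identification. The scores and the
# threshold below are invented for illustration only.
candidate_scores = {          # similarity of the probe face to each database entry
    "record_042": 0.81,
    "record_107": 0.63,
    "record_233": 0.59,
}
MATCH_THRESHOLD = 0.75        # a deployment policy choice, not a property of the math

for candidate, score in sorted(candidate_scores.items(), key=lambda kv: -kv[1]):
    label = "possible match" if score >= MATCH_THRESHOLD else "below threshold"
    print(f"{candidate}: similarity {score:.2f} -> {label}")

# Lowering the threshold surfaces more candidates (and more false matches);
# raising it rejects more true matches. The trade-off is unavoidable.
```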

Bias in Facial Recognition

Identifying what is actually a face might seem simple but, as MIT researcher and coding activist Joy Buolamwini has visually demonstrated,[7] it is not as easy as one might think. Some face recognition code does not recognize all faces. Ms. Buolamwini collected three specific examples from her own experience where her face was not recognized as a face.[8]

Ms. Buolamwini is black, and she posits that the reason her face was not recognized is that the “training set” on which the facial recognition program is based is not representative.

Commonly used face detection code works in part by using training sets — a group of example images that are described as human faces. … The faces that are chosen for the training set impact what the code recognizes as a face. A lack of diversity in the training set leads to an inability to easily characterize faces that do not fit the normal face derived from the training set.[9]
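As a concrete, hypothetical illustration of that point, the snippet below tallies the demographic make-up of a training set from a metadata file; the file name, column name and label values are all assumptions, but a skewed tally of this kind is the imbalance the quoted passage describes.

```python
# Toy audit of training-set composition (illustrative only; the CSV file,
# its "skin_type" column, and the label values are hypothetical).
import csv
from collections import Counter

composition = Counter()
with open("training_set_metadata.csv", newline="") as f:
    for row in csv.DictReader(f):
        composition[row["skin_type"]] += 1

total = sum(composition.values())
for group, count in composition.most_common():
    print(f"{group}: {count} images ({count / total:.1%} of the training set)")

# A training set dominated by one group yields a "normal face" derived from
# that group, which is the imbalance described above.
```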

Ms. Buolamwini coined the term “Coded Gaze” to refer to the bias in coding algorithms. She describes the Coded Gaze as the “embedded views that are propagated by those who have the power to code systems.”[10]

Other research confirms Ms. Buolamwini’s perspective. A 2012 FBI study looked at facial recognition algorithms and determined that those investigated were 5% to 10% less accurate for black Americans than white Americans. “To be more precise, African Americans were less likely to be successfully identified—i.e., more likely to be falsely rejected—than other demographic groups.”[11] As the Georgetown researchers noted: “training is destiny; the faces that an algorithm practices on are the faces it will be best at recognizing.”[12]

Ms. Buolamwini followed up on her experiences and put her research skills to work. In February 2018 she presented a paper at the Conference on Fairness, Accountability and Transparency. There she reported that the facial recognition systems of three commercial products had error rates between 20.8% and 34.7% on darker-skinned females.[13] Her thinking was most recently presented at the World Economic Forum in Davos, Switzerland this past January.[14]
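The kind of disaggregated evaluation behind those figures can be sketched simply: run the system over a labeled test set and report error rates per demographic subgroup rather than a single overall number. The records below are invented placeholders, not results from any real system.

```python
# Sketch of a disaggregated error-rate audit (records are invented placeholders).
# Overall accuracy can look respectable while one subgroup's error rate is far higher.
from collections import defaultdict

# Each record: (subgroup label, whether the system's output was correct)
test_results = [
    ("lighter_male", True), ("lighter_male", True), ("lighter_male", True),
    ("lighter_female", True), ("lighter_female", True), ("lighter_female", False),
    ("darker_male", True), ("darker_male", True), ("darker_male", False),
    ("darker_female", False), ("darker_female", False), ("darker_female", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in test_results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

print(f"overall error rate: {sum(errors.values()) / len(test_results):.1%}")
for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.1%}")
```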

An additional reason for the lack of accuracy is, to quote a technologist at a facial recognition software company, “when you have people with very dark skin, you have a lower dynamic range, which means that it’s much harder to capture high-quality images . . . This is one reason why the performance on black subjects has typically been worse.”[15]

Potential Discrimination Claims

So, we know there can be racial bias in facial recognition technology products. Could such products lead to discrimination lawsuits? Two causes of action immediately come to mind: misidentification and failure to hire.

Misidentification

Drawing on the GAO Report, one can imagine that a darker-skinned individual misidentified by security using facial recognition technology as being a shoplifter or a robbery suspect might assert a claim. For example, in a pre-facial recognition technology case, Lewis v. Farmer Jack Div., Inc.,[16] five months after a robbery at the defendant’s store, a store employee identified an individual as the robber when she saw the alleged robber again at another store operated by the defendant. Charges were brought but later dropped based on a determination that the alleged robber had been misidentified. The misidentified alleged robber brought suit and a jury awarded the exonerated robber/plaintiff $40,000 (in 1982 dollars). The trial court refused a directed verdict for the defendant; the court of appeals reversed. On further appeal, the Michigan Supreme Court held, in a detailed opinion, that:

the jury could have reasonably determined that, in summoning the police to the store and telling them that an armed robber was present, describing that person to the police and pointing him out with the unequivocal and accusing language, " 'Yeah, that's the one' ", the defendants, through their employees, were in effect ordering the police officers to do their duty and arrest plaintiff.[17]

Accordingly, the Court held that “a person who instigates a police officer to make an arrest of an innocent person is liable in the same manner as if he had made the arrest himself.”[18] The Court then remanded the case for further consideration.

This is not to say that erroneously reporting a suspected criminal will always lead to liability. Most jurisdictions disfavor negligent misidentification claims. As stated by the Ohio Supreme Court, where it refused to recognize negligent misidentification as a valid claim: “Public policy favors the exposure of crime.”[19] Nevertheless, while negligent misidentification of an alleged lawbreaker arising from bias in a facial recognition program may not suffice to establish a claim, such a claim may be joined with, among others, discrimination claims, claims for false arrest and false imprisonment, and claims for malicious prosecution and defamation. Even if such claims are not sustained, the costs of a defense may be significant.

Failure to Hire

An employer may determine that he wishes not to employ white nationalists, antifas, pro- or anti-gun activists, right-to-life marchers, right-to-choose marchers, etc. He wants a workplace where there is no confrontation, and determines to scan local rallies with facial recognition technology to screen out activist applicants. A darker-skinned individual applies and, because the biased software wrongly matches him or her to a rally photograph, is unable even to get a job interview at the company. That individual will have no difficulty finding a law firm to take his or her case; a simple internet search turns up dozens of law firms willing to pursue wrongful failure-to-hire claims.[20]

The elements of a wrongful failure to hire claim based on race are relatively simple. As stated by a Michigan court of appeal considering the Michigan Civil Rights Act: “the applicable elements of a prima facie case of racial discrimination in the context of a refusal to hire [are that t]he Plaintiff has the burden of proving the following: (1) That he belonged to a racial minority; (2) That he applied and was qualified for a job for which the employer was seeking applicants; (3) That, despite his qualifications, he was rejected.”[21] Federal requirements are similar: “(1) plaintiff is a member of a protected class; (2) plaintiff met applicable job qualifications; (3) despite qualifications, adverse employment action was taken against him; and (4) the action occurred in circumstances giving rise to an inference of discriminatory motivation.”[22]

Insurance Coverage

Would the above claims have coverage? Does the fact that they are based on facial recognition technology make any difference?

To make an accurate determination, we would need to know much more. We can say, however, that there are fruitful places to look. First, the misidentification claim will lead many directly to the company’s Directors and Officers (D&O) policy, which will cover Wrongful Acts, a broadly defined term, for example: “any actual or alleged act, error, omission, neglect, breach of duty, breach of trust, misstatement, or misleading statement by an Insured Person in his or her capacity as such, or any matter claimed against an Insured Person by reason of his or her status as such.” The broad reach of that definition may also protect the Insured Entity, but here is the rub: policyholders often do not buy entity coverage, usually because of the cost.

There may also be exclusions. For example, this language was taken from a D&O policy:

there shall be no coverage for claims “alleging, arising out of, based upon, or attributable to any actual or alleged discrimination; harassment; retaliation (other than a whistleblower claim or a claim for retaliation …); wrongful discharge; termination; or any other employment-related or employment practices claim, including but not limited to any wage/hour claim or any third-party discrimination or harassment claim.”

Your coverage counsel may have strong arguments that the excluded discrimination here includes only employment-related claims, but the carrier may argue that the exclusion applies to “any” discrimination. You should be aware of, and correct, this potential issue before a claim arises.

Even if, however, your D&O policy turns out to be barren, succor may be found in your company’s general liability policy. Such policies generally include coverage for personal and advertising injury, which in the typical definition includes such wrongs as false arrest and defamation. Since the predicate for this claim is the wrongful detention of an innocent party because of bias in the facial recognition system, it is likely that the innocent party, if he or she sued, would assert a claim for false arrest and thus trigger the general liability coverage.

Returning to the D&O policy, it is probable that it contains an exclusion for employment-related claims. That is perfectly fine so long as your company also procures an Employment Practices Liability Insurance (EPLI) policy. Such a policy likely would cover the wrongful failure to hire claim. Here is an example of discrimination coverage provided by an EPLI policy:

“Discrimination” means any violation of employment discrimination laws, including but not limited to any actual, alleged or constructive employment termination, dismissal, or discharge, employment demotion, denial of tenure, modification of any term or condition of employment, any failure or refusal to hire or promote, or any limitation or segregation of any Employee or applicant for employment by the Company in any way that would deprive any person of employment opportunities based on such person's race …

Note that the source of the discrimination is irrelevant to coverage; any violation of employment discrimination laws is covered.

We make this sortie into three different policies not to resolve with finality the issue of coverage for facial recognition discrimination claims. Rather, it is to make the point that coverage depends strongly on the specific facts (e.g., is the discrimination against an employee or a non-employee?) and the specific terms of the policy (e.g., does the D&O policy cover the Insured Entity or only the Insured Persons?). With a new technology whose claim potential is unknown, you need to fully investigate the potential for claims … and the potential for coverage.

[1] Seattle Police Department, Booking Photo Comparison System FAQs, Document p. 009377, quoted in C Garvie, A Bedoya & J Frankle, Georgetown University Law Center, Center on Privacy & Technology, The Perpetual Lineup, Unregulated Police Face Recognition in America (“Perpetual Lineup”) at n.215 and accompanying text (Oct. 18, 2016), available at https://www.perpetuallineup.org/findings/racial-bias#footnote215_1smdx93 .

[2] https://www.facebook.com/help/124970597582337/ .

[3] See Wojtkoviak v. N.J. Motor Vehicle Comm’n, 106 A.3d 519, 523 (N.J. App. Div. 2015) (describing photographic comparisons used).

[4] https://www.pcmag.com/feature/357318/how-to-set-up-and-use-face-id-on-the-iphone-x .

[5] Government Accountability Office, Facial Recognition Technology at 8-9, GAO-15-621 (July 2015).

[6] Perpetual Lineup n.8.

[7] Code4Rights, Code4All, Joy Buolamwini, TEDxBeaconStreet (Dec. 13, 2016), https://www.youtube.com/watch?v=lbnVu3At-0o .

[8] Id.

[9] Joy Buolamwini, InCoding – In the Beginning (May 16, 2016), https://medium.com/mit-media-lab/incoding-in-the-beginning-4e2a5c51a45d (emphasis in original).

[10] Id.

[11] Perpetual Lineup n.223 and accompanying text.

[12] Id. n.226 and accompanying text.

[13] Joy Buolamwini, Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification at 9, Table 4, 81 Proc. Mach. Learning Res. 1 (2018). Notably, one of the tested systems was modified after Ms. Buolamwini’s report and achieved a 96.5% accuracy on darker-skinned females. Parmy Olson, Racist, Sexist AI Could be a Bigger Problem than Lost Jobs, Forbes, Feb. 26, 2018, available at https://www.forbes.com/sites/parmyolson/2018/02/26/artificial-intelligence-ai-bias-google/#c5b7a11a0158 .

[14] Joy Buolamwini, Compassion through Computation: Fighting Algorithmic Bias, World Economic Forum (Davos, Switzerland Jan. 23, 2019), https://www.weforum.org/events/world-economic-forum-annual-meeting/sessions/compassion-through-computation-fighting-algorithmic-bias# .

[15] Perpetual Lineup n.228 and accompanying text.

[16] 327 N.W.2d 893 (Mich. 1982).

[17] Id. at 906.

[18] Id. at 909-10.

[19] Foley v. Univ. of Dayton, 81 N.E.3d 398, 401 (Ohio 2016); also id. (identifying other jurisdictions that reject negligent misidentification claims) (citing Ramsden v. W. Union, 71 Cal. App. 3d 873, 881 (1977); Lundberg v. Scoggins, 335 N.W.2d 235, 236 (Minn. 1983); Campbell v. San Antonio, 43 F.3d 973, 981 (5th Cir. 1995); Shelburg v. Scottsdale Police Dept., 2010 WL 3327690, *11 (D. Ariz. Aug. 23, 2010); Jaindl v. Mohr, 661 A.2d 1362 (1995)); Morris v. T.D. Bank, 454 N.J. Super. 203, 211-212 (App. Div. 2018) (same).

[20] https://bjtlegal.com/practice-areas/employment-law/ (“Under state and federal law, it is unlawful for an employer to discriminate against an employee because of … race,…. ‘Discrimination’ includes any adverse action such as failure to hire …”); https://www.schwartzlawfirm.net/discrimination-and-harassment/ (“Unlawful discrimination can take many forms. The most overt are failure to hire, ….”).

[21] Smith v. Charter Tp. of Union, 575 N.W.2d 290, 292 (Mich. Ct. App. 1998) (citations omitted).

[22] Young v. Warner-Jenkinson Co., Inc., 990 F. Supp. 748, 751 (E.D. Mo. 1997).
