Facial recognition

Facial recognition is often a crucial element in identifying the perpetrator of a crime. It can help to protect the public when used together with appropriate safeguards.

Following the recent use of the Clearview AI facial recognition tool by the Australian Federal Police (AFP), the Australian Information Commissioner and Privacy Commissioner Angelene Falk has determined that the AFP failed to comply with its privacy obligations in using the tool.

What is the facial recognition tool?

Clearview AI’s facial recognition tool allows law enforcement to upload a photo of a person’s face and match it to other photos of that person on the internet. A link is then provided to the source of the images.

According to Clearview, the platform “includes the largest known database of 10+ billion facial images sourced from public-only web sources, including news media, mugshot websites, public social media, and other open sources.”
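
To make the matching step concrete, the sketch below shows, in broad strokes, how a similarity search over face embeddings can work. It is a minimal illustration only, not Clearview AI’s actual implementation: the embedding vectors are random stand-ins for the output of a trained face-recognition model, and all names and URLs are hypothetical.

```python
import numpy as np

# Illustrative only: a real system derives these vectors ("embeddings")
# from face images with a trained neural network; here we use random data.
rng = np.random.default_rng(seed=0)
database = rng.normal(size=(1000, 128))  # 1,000 stored face embeddings
source_urls = [f"https://example.com/image/{i}" for i in range(1000)]  # hypothetical sources

# A "query" photo of the same person as database entry 42, with a little noise added.
query = database[42] + rng.normal(scale=0.05, size=128)

# Cosine similarity: normalise all vectors, then a dot product scores likeness.
db_norm = database / np.linalg.norm(database, axis=1, keepdims=True)
q_norm = query / np.linalg.norm(query)
scores = db_norm @ q_norm

# Report links to the five closest matches, as a Clearview-style tool would.
for i in np.argsort(scores)[::-1][:5]:
    print(source_urls[i], f"similarity={scores[i]:.3f}")
```

The scale of such a system is what raises the privacy stakes: building the database requires collecting face images en masse from the web, which is the very practice the Commissioner went on to examine.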

How did the AFP use this tool?

Between 2 November 2019 and 22 January 2020, Clearview AI provided free trials of the facial recognition tool to members of the AFP-led Australian Centre to Counter Child Exploitation (ACCCE).

ACCCE members uploaded facial images of Australians to test the functionality of the tool and, in some cases, to try to identify persons of interest and victims in active investigations.

The issue of privacy

Commissioner Falk found that the AFP:

  • Failed to complete a privacy impact assessment (PIA) before using the tool, in breach of clause 12 of the Australian Government Agencies Privacy Code, which requires a PIA for all high privacy risk projects.
  • Breached Australian Privacy Principle 1.2 by failing to take reasonable steps to implement practices, procedures and systems in relation to its use of Clearview AI to ensure it complied with clause 12 of the Code.

Commissioner Falk said: “There were a number of red flags about this third party offering that should have prompted a careful privacy assessment.”

“By uploading information about persons of interest and victims, the ACCCE were handling personal information in a way that could have serious consequences for individuals whose information was collected.”

Commissioner Falk considered that the AFP did not have the appropriate systems in place to ensure a coordinated approach to identifying high privacy risk projects. Further gaps were identified in the AFP’s mandatory privacy training, including insufficient information about conducting PIAs.

In a separate determination, the Commissioner also found that Clearview AI itself had interfered with Australians’ privacy by scraping biometric information from the web and disclosing it through its facial recognition tool.

How will the issues be addressed?

Commissioner Falk has directed the AFP to:

  • Engage an independent assessor to review and report to the Office of the Australian Information Commissioner (OAIC) on residual deficiencies in its practices, procedures, systems and training in relation to privacy assessments, and make any necessary changes recommended in the report.
  • Ensure that relevant AFP personnel have completed an updated privacy training program.

Key takeaways

In its recent trial of the Clearview AI facial recognition tool, the AFP failed to meet legal requirements to complete a privacy impact assessment and to take steps to ensure that the tool was used in compliance with Australian privacy codes and principles, thereby putting at risk the privacy of the individuals whose images were uploaded. The AFP has committed to addressing these issues.

Nyman Gibson Miralis provides expert advice and representation in complex criminal cases involving the use of technologies such as facial recognition.

Contact us if you require assistance.