Facial Recognition Technology – Good or Bad?
By Cameron Abbott, Michelle Aggromito and Jacqueline Patishman
As of June 2019, law enforcement agencies are working with the City of Perth on a 12-month trial of facial recognition software. The trial involves installing the software in 30 CCTV cameras and is part of the Federal Government’s Smart Cities plan, which aims to increase interconnectivity and build intelligent, technology-enabled infrastructure throughout Australia.
The software can detect clothing colour and gender and track movement speed and patterns, with some cameras also able to detect heat.
If the trial is successful, the new facial recognition technology may be installed across all cameras in Perth’s network.
Meanwhile in NSW, transport minister Andrew Constance has raised the idea of facial recognition software being used by commuters to access public transport, by linking it to their Opal account. Mr Constance suggested that it would create “frictionless transport payments” that may become available “in the not too distant future”.
In line with trends, it’s been proposed that the system would work on a subscription model like Netflix, whereby public transport users would pay a weekly or monthly fee for unlimited use.
Although this technology could help NSW manage the 4.7 percent increase in public transport commuters over the last year, the idea raises concerns. Misidentification of commuters would likely lead to inefficiencies, most likely at the commuter’s expense.
The city of London has not had much success with facial recognition software for law enforcement. Following trials conducted by the London Metropolitan Police, researchers from the University of Essex, who were given privileged access to the trials, found that members of the public were misidentified as potential criminals 80% of the time. This suggests the technology may not yet be mature enough for large-scale applications, where the multiplier effect of misidentification is significant.
Use of facial recognition software often raises questions about the ethics of artificial intelligence. San Francisco, for example (with Silicon Valley as the AI capital), has banned the use of facial recognition software by police and other law enforcement agencies, citing potential risks of abuse. Critics of the ban have argued that, rather than banning the technology, the focus should turn to regulators, encouraging them to find a way to balance the usefulness of facial recognition against preventing its abuse.
As facial recognition and artificial intelligence technologies develop and become more prevalent in everyday activities, it will be increasingly important for technology providers, in conjunction with regulators, to consider, develop and adhere to ethical guidelines that prevent misidentification and abuse. At the same time, the technology must be accurate enough not to burden those affected by misidentification.