AI (Adverse Inferences): AI Lending Models May Show Unconscious Bias, According to Report
By Cameron Abbott and Max Evans
We live in an era where the adoption and use of Artificial Intelligence (AI) is at the forefront of business advancement and social progression. Facial recognition software is in use, or being piloted, across a variety of government sectors, whilst voice recognition assistants are becoming the norm in both personal and business contexts. However, as we have previously blogged, the AI ‘bandwagon’ inherently comes with legitimate concerns.
This is no different in the banking world. The use of AI-based phishing detection applications has strengthened cybersecurity safeguards for financial institutions, whilst “Robo-Advisers” and voice and language processors have improved efficiency by increasing the pace of transactions and reducing service times. However, this may be too good to be true: according to a Report by CIO Dive, algorithmic lending models may show unconscious bias.
The Report indicates that the basis for this concern is multi-faceted. Machine-learning underwriting models may draw on too many data points when measuring potential borrowers or customers, and may in turn make critical adverse inferences based on a user’s social media accounts, online purchases and web browsing history. Additionally, the lack of isolation between data points and signals means that financial institutions often know only the particular decision rendered by the specified AI model, not the basis for making it.
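To illustrate that opacity concern, consider the following minimal sketch in Python. It is purely hypothetical (the synthetic data, the signal count and the library choice are our assumptions, not anything drawn from the Report): a model trained on hundreds of entangled behavioural signals returns only an approve or decline outcome, with nothing to indicate which signals drove it.

```python
# Hypothetical illustration only -- not any lender's actual model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Imagine hundreds of behavioural "data points" per applicant:
# social media activity, online purchases, browsing history, etc.
n_applicants, n_signals = 1_000, 300
X = rng.normal(size=(n_applicants, n_signals))
y = rng.integers(0, 2, size=n_applicants)  # past repay/default labels

model = GradientBoostingClassifier().fit(X, y)

# A new applicant arrives as an undifferentiated vector of signals.
applicant = rng.normal(size=(1, n_signals))
decision = model.predict(applicant)[0]

# The institution sees only the outcome -- "approved" or "declined" --
# not which of the 300 entangled signals drove that outcome.
print("approved" if decision == 1 else "declined")
```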
We agree with the Report’s inference that regulatory guidance on the use of AI by financial institutions is required to address these concerns. However, as noted by Australian Prudential Regulation Authority board member Geoff Summerhayes, the use of AI algorithms is an “emerging risk within an accelerating risk” which is not yet fully understood by the industry or regulators. There is little doubt that AI engines are an essential means of avoiding some forms of human bias and of delivering services to large numbers of people in lower-value transactions, but their design must be fully understood.