Artificial intelligence: a powerful but double-edged sword for risk managers
By Philippe Sarfati
January 24, 2020
Artificial Intelligence (AI) has the potential to reshape our world. It has transformed chess, driving, image recognition, language translation and more, opening our imagination to the possibilities. Financial institutions big and small are seeing AI’s potential benefits.
AI is a computer system that interprets information from its environment, processes it and applies it to reach a goal. AI will play an increasingly prominent role in financial risk management, a field that involves data science and intensive data modelling.
AI can process unstructured information from text files and social communications to allow financial institutions to gain deeper insights into their customers and make smarter decisions. AI is already being applied in banking; e.g., to predict borrower default, generate money laundering alerts and detect credit card fraud.
With the rapid advancement in AI algorithms, big data science and computational capability, AI can be used to create sophisticated tools to monitor and analyze the behaviour and activities of customers in real time. As AI tools adapt to a changing risk environment, they have the power to continually enhance financial institutions’ risk monitoring capabilities.
However, AI is a double-edged sword. It can benefit a financial institution but may also introduce model risk, legal risk, operational risk and reputational risk. Financial institutions seeking to use AI must first find proper ways to manage these risks.
Among the various challenges with the use of AI, its black box nature and overfitting models are of most concern. The behaviour of a black box system can be observed only through its inputs and outputs, not its internal workings. Overfitting occurs when a model fits its training samples too closely, capturing their noise rather than the underlying pattern; if those samples are not representative, the model also inherits their bias. AI models may be technically complex and opaque, so financial institutions must find ways to interpret AI models within their business context.
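The overfitting risk described above can be illustrated with a small, self-contained sketch. The data here are hypothetical, invented purely for illustration: five observations that follow a roughly linear trend (y ≈ 2x) plus small fixed "noise". A simple straight-line model generalizes to a held-out point, while a flexible model that fits every training point exactly memorises the noise and misses badly.

```python
# Hypothetical toy data (not from the article): a roughly linear trend
# y = 2x with small fixed perturbations standing in for noise.
train_x = [1, 2, 3, 4, 5]
train_y = [2.3, 3.6, 6.5, 7.8, 10.4]
test_x, test_y = 6, 12.0  # held-out point on the true trend

def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x -- a simple, stable model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def interpolate(xs, ys, x):
    """Lagrange polynomial through every training point -- the 'overfit'
    model: zero training error, because it memorises the noise too."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

a, b = linear_fit(train_x, train_y)
simple_err = abs((a + b * test_x) - test_y)                 # small (~0.24)
overfit_err = abs(interpolate(train_x, train_y, test_x) - test_y)  # large (~11.3)
print(f"simple model error on held-out point:  {simple_err:.2f}")
print(f"overfit model error on held-out point: {overfit_err:.2f}")
```

The flexible model is "better" on the training data (it reproduces it exactly) yet far worse on new data, which is precisely the pattern risk managers must test for when validating AI models.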
In addition, AI models may use customers’ information in a way that is prohibited by regulators. Banks using AI should be aware of this compliance risk and be cautious about decision-making around AI models and techniques.
Prudential supervisors are aware of the substantial implications of AI for the financial industry and for themselves. AI has the potential to help supervisors identify potential violations and better anticipate the impact of changes to regulations. The Office of the Superintendent of Financial Institutions (OSFI), Canada’s prudential supervisor, is closely monitoring the use of AI models by Canadian financial institutions. OSFI plans to bring governance of AI models into the scope of its Guideline E-23 on Model Risk Management for Deposit-Taking Institutions.
At Concentra Bank, we intend to apply AI models and analytics to advance our business and our risk management because we see tremendous potential in AI. At the same time, we will apply our Enterprise-Wide Model Risk Management Guideline to govern our AI models and protect our customers and our bank.
As Chief Risk Officer, Philippe Sarfati ensures sound governance and effective controls for enterprise risk management, compliance, corporate policy, credit adjudication and regulatory framework to meet Concentra’s strategic and business objectives while enhancing shareholder value. He provides strategic leadership to support a prudent risk culture focused on growth, development, and market relevance.
For more information, contact:
1.800.788.6311 | CommercialMarkets@concentra.ca