
AI Meets Strong Customer Authentication

Published: 16.02.2023

Updated: 16.02.2023

Author: Erik Vasaasen

Artificial Intelligence (AI) plays a crucial role in Strong Customer Authentication. Its algorithms can analyse customer behaviour patterns and detect anomalies, such as unusual spending patterns or locations, while enabling financial institutions to automate the authentication process for speed and convenience. However, AI doesn’t come without its fair share of consumer rights issues - let’s get into it!

Artificial intelligence is revolutionising the banking industry, with applications ranging from fraud detection to personalised financial advice. However, as AI becomes more prevalent in the banking sector, both governments and privacy advocates have raised concerns about how it interacts with PSD2’s consumer rights provisions and the Strong Customer Authentication (SCA) regulation. As a provider of SCA solutions, we at Okay often have to discuss how the data we provide for each authentication is used.
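To make the fraud-detection side concrete, here is a minimal sketch of behaviour-based anomaly scoring, assuming each transaction has been reduced to a few numeric features. The features, values and model choice (scikit-learn’s IsolationForest) are illustrative, not a description of any particular bank’s system.

# A minimal sketch of behaviour-based anomaly scoring. Feature names,
# values and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" history: modest amounts, daytime hours, close to home.
history = np.column_stack([
    rng.normal(50, 15, 500),    # amount in EUR
    rng.normal(14, 3, 500),     # hour of day
    rng.normal(2, 1, 500),      # km from the customer's usual location
])

model = IsolationForest(random_state=0).fit(history)

# A transaction that breaks the usual pattern: large, at night, far away.
suspicious = np.array([[900.0, 3.0, 450.0]])
print(model.predict(suspicious))         # -1 means "anomaly"
print(model.score_samples(suspicious))   # lower score = more anomalous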

The Bias Issue

A fundamental problem is that machine learning algorithms must be trained on real data and real decisions, typically made by human experts. This means that any hidden biases in those decisions will also become part of the training material. An example of such a bias is name discrimination: the tendency for an applicant’s name to significantly impact the application’s success. Unfortunately, training AI on real data can make this even worse, because nothing in the AI’s output reveals that such a hidden bias influenced the result.
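One common way to surface such a bias is a simple audit of outcomes per group. The sketch below is purely illustrative: the decisions and the name-derived group labels are invented, and a real audit would need careful, lawful handling of such data.

# A hypothetical bias audit: compare approval rates across groups.
# The data and group labels are made up purely for illustration.
from collections import defaultdict

decisions = [
    {"name_group": "A", "approved": True},
    {"name_group": "A", "approved": True},
    {"name_group": "A", "approved": False},
    {"name_group": "B", "approved": False},
    {"name_group": "B", "approved": False},
    {"name_group": "B", "approved": True},
]

totals, approved = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["name_group"]] += 1
    approved[d["name_group"]] += d["approved"]

rates = {g: approved[g] / totals[g] for g in totals}
print(rates)  # large gaps between groups flag a possible hidden bias

# Demographic-parity gap: difference between highest and lowest rate.
gap = max(rates.values()) - min(rates.values())
print(f"parity gap: {gap:.2f}")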

Its Effect on Security

While it is evident that basing real decisions on “black box” AI verdicts trained on flawed or insufficient data should be avoided, the problem also affects security. For example, SCA is a key component of the European Union’s revised Payment Services Directive (PSD2), which took effect in January 2018. To summarise, the regulation aims to increase security and protect consumers from fraud by requiring that online transactions be authenticated with at least two of three independent factors: something the customer knows (such as a password), something the customer has (such as a mobile phone), and something the customer is (such as a fingerprint).
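The two-of-three rule itself is easy to express in code. This is a minimal sketch, assuming each authentication element is tagged with its PSD2 factor category; the data structure and function name are hypothetical.

# A minimal sketch of the "at least two of three categories" rule in SCA.
# The categories come from PSD2; the function itself is illustrative.
KNOWLEDGE, POSSESSION, INHERENCE = "knowledge", "possession", "inherence"

def satisfies_sca(presented_factors: set[str]) -> bool:
    """SCA requires elements from at least two of the three categories."""
    categories = presented_factors & {KNOWLEDGE, POSSESSION, INHERENCE}
    return len(categories) >= 2

print(satisfies_sca({KNOWLEDGE, POSSESSION}))  # True: PIN + registered phone
print(satisfies_sca({KNOWLEDGE}))              # False: a password alone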

So, one of the main challenges for banks is how to use AI in conjunction with SCA without compromising the security and transparency of the authentication process. This is particularly relevant when AI is used to decide whether to accept or decline a transaction, because banks are generally required to explain the decision-making process clearly and transparently to the customer. Meeting that requirement leads us to Explainable AI, also known as XAI: a subfield of AI that focuses on developing systems that can provide explanations for their decisions.

Why is XAI Important?

In the context of banking, this is important for several reasons. First and foremost, banks are held to a high standard regarding the security and privacy of customer data. By using explainable AI, banks can demonstrate that they are using customer data in a transparent and fair manner, helping build trust with customers and regulators. 

Second, explainable AI can help banks comply with regulations such as the European Union’s General Data Protection Regulation (GDPR) and the PSD2. Outside of Europe, the New York Department of Financial Services (NYDFS) Cybersecurity Regulation also has similar provisions. These regulations require organisations to demonstrate that they are responsibly using customer data and that they can explain the decisions being made.


XAI and XANN in Action

For example, suppose a bank uses AI to analyse a customer’s behaviour and declines a transaction based on suspicious activity. In that case, it must be able to explain the reasoning behind the decision. This helps ensure that the decision is not a “black box” and that customers can understand how and why their transaction was declined.
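One pragmatic way to do this is to use a model whose per-feature contributions are directly readable, such as logistic regression, and to turn the largest contributions into reason codes. The following sketch is illustrative; the feature names and training data are invented.

# A sketch of explainable decline decisions via a readable linear model.
# Features and training data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["amount_vs_average", "new_country", "night_time"]
X = np.array([
    [0.1, 0, 0], [0.2, 0, 1], [0.3, 0, 0],   # legitimate
    [8.0, 1, 1], [6.5, 1, 0], [9.0, 0, 1],   # fraudulent
])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

tx = np.array([7.5, 1, 1])
contributions = model.coef_[0] * tx
for name, c in sorted(zip(features, contributions), key=lambda p: -abs(p[1])):
    print(f"{name}: {c:+.2f}")
# The signed contributions become human-readable reasons, e.g.
# "amount far above your average" or "first purchase from this country".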

Additionally, the bank should be able to demonstrate how the system handles false positives and negatives, ensuring that the AI models used for decision-making are validated and that the decisions are auditable. This will help to build trust in the system and ensure that customers feel secure when using online banking services.
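Validation and auditability can be checked with equally simple building blocks, assuming a labelled hold-out set exists. In this sketch the labels, the model version string and the audit-record fields are all hypothetical.

# A sketch of validating the model and keeping decisions auditable.
import json, datetime
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 0, 1, 1, 1, 0, 1, 0]   # 1 = actual fraud
y_pred = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]   # 1 = model declined

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"false positive rate: {fp / (fp + tn):.2f}")  # good customers blocked
print(f"false negative rate: {fn / (fn + tp):.2f}")  # fraud let through

# Every decision gets an audit record the bank can replay later.
record = {
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "model_version": "fraud-lr-1.3",          # hypothetical identifier
    "decision": "declined",
    "top_reasons": ["amount_vs_average", "new_country"],
}
print(json.dumps(record))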

For neural networks specifically, one option is Explainable Artificial Neural Networks (XANN), which enable a human-interpretable explanation of the network’s decision-making process by highlighting the parts and features of the input that led the network to make a certain prediction.
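A simple instance of this idea is gradient saliency: the gradient of the network’s output with respect to each input feature indicates how strongly that feature pushed the prediction. In this PyTorch sketch the network weights are random and the feature names invented; in practice you would load a trained model.

# A minimal gradient-saliency sketch: the gradient of the fraud score
# with respect to each input feature shows which parts of the input
# influenced the prediction. Weights are random here, for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

# Features: amount ratio, new-country flag, night-time flag (illustrative).
x = torch.tensor([[7.5, 1.0, 1.0]], requires_grad=True)
score = net(x)
score.backward()

saliency = x.grad[0].abs()
for name, s in zip(["amount_vs_average", "new_country", "night_time"],
                   saliency.tolist()):
    print(f"{name}: {s:.4f}")  # larger magnitude = more influence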

Overall, XAI can revolutionise the way banks use AI by enhancing trust and transparency. It can also make the technology more effective and help banks comply with regulations. Ultimately, as the use of AI continues to grow in the banking industry, banks will need to invest in explainable AI to ensure that customers and regulators have confidence in the technology.