We are increasingly seeing artificial intelligence (AI) in our everyday lives, from speech-to-text features on our smartphones to voice-controlled digital assistants that can order groceries to the profiling and customization that lie behind a growing number of our commercial interactions. Unlike rules-based processes, such as robotic process automation (RPA), AI algorithms can perform human-like tasks, such as applying what they have learned to novel data sets (Note: Robotic process automation is most often used to perform repetitive, routine tasks. Recently, the technology has started to evolve to include AI and machine learning, becoming intelligent process automation). The field of AI has also produced a number of cognitive technologies, including machine learning applications such as natural language processing (NLP) and natural language generation (NLG), which allow AI systems to analyze data and make more-informed real-time decisions (Note: Machine learning refers to the ability of computers to learn a task they are not specifically programmed for by identifying patterns in data and applying what they have learned to new data or to draw inferences from data sets. Natural language processing refers to a computer’s ability to understand written and spoken language. Natural language generation refers to computer software that turns data into written narrative).
According to PricewaterhouseCoopers’ research, the use of AI could increase global gross domestic product by up to 14%, the equivalent of an additional $15.7 trillion, by 2030. The financial services industry, with its access to large data sets on customers and markets, is set to experience some of the biggest gains. In fact, many banks are already investing heavily in these technologies. In 2017, about half (52%) of the industry confirmed they were making substantial investments in AI, and 66% said they expected to do so within the next three years.
The use cases for AI are constantly changing, but banks today are focusing on three main applications: (1) building a better customer experience; (2) reducing costs, not head count; and (3) streamlining risk operations. However, although AI can provide banks with many advantages, it also presents new challenges. Therefore, banks will have to balance the complex and often opaque nature of AI’s algorithms with demands from customers, leadership, and regulators for transparency and security.
Building a Better Customer Experience
The financial services industry has experienced a significant shift in customer expectations over the past few years. Once limited to the industry’s traditional set of one-size-fits-all products and services, customers now expect tailored offerings and seamless digital purchasing experiences. This trend is only growing stronger as comfort with digital banking grows and disposable income shifts toward younger, more digitally fluent customers. Traditional banks are therefore starting to embrace the digital transformation, and those that go further in adopting AI systems will have superior insight into their customers’ needs and preferences, giving them the ability to better tailor their offerings and attract new customers.
Banks can use AI to customize products and advertising based on need, life stage, and behavior by combining information from transaction histories, prior inquiries, geolocation, customer search history, and even social media sites. For instance, a bank could analyze a customer’s transaction history to recognize an airline preference and then send a notification via the bank’s mobile app to inform the customer of a new credit card partnership with that airline.
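To make the idea concrete, the minimal Python sketch below shows one way such a preference could be inferred from transaction data; the field names, sample transactions, and spend threshold are illustrative assumptions rather than any bank’s actual logic.

from collections import Counter

# Illustrative transactions; "merchant", "category", and "amount" are assumed field names.
transactions = [
    {"merchant": "Acme Air", "category": "airline", "amount": 420.00},
    {"merchant": "Acme Air", "category": "airline", "amount": 310.00},
    {"merchant": "Corner Cafe", "category": "dining", "amount": 12.50},
]

# Tally airline spend by merchant to surface a likely carrier preference.
airline_spend = Counter()
for tx in transactions:
    if tx["category"] == "airline":
        airline_spend[tx["merchant"]] += tx["amount"]

if airline_spend:
    preferred, total = airline_spend.most_common(1)[0]
    if total >= 500:  # arbitrary spend threshold for triggering a targeted offer
        print(f"Suggest co-branded card offer for {preferred} (recent spend: ${total:,.2f})")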
Chatbots are one of the most visible forms of AI that banks are using today. The technology uses machine learning algorithms to study and mimic how customer service agents diagnose and solve customer problems. Chatbots are often used to help customers make payments, check their balances, transfer money, and even suggest products based on particular spending habits. The software learns from structured and unstructured data, such as resolved issue logs and call transcripts, as well as recommendations and corrections made by programmers. Chatbots can then apply that learning to new customer inquiries, improving the speed and quality of the answers customers receive.
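As a rough illustration of how such a system might route inquiries, the sketch below trains a small intent classifier on a few hypothetical resolved-issue phrases; a production chatbot would rely on far larger transcript corpora and more sophisticated models.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical snippets from resolved customer inquiries, labeled by intent.
texts = [
    "what is my checking balance", "how much money do I have",
    "send 50 dollars to my sister", "transfer funds to savings",
    "pay my credit card bill", "make a payment on my loan",
]
intents = ["balance", "balance", "transfer", "transfer", "payment", "payment"]

# Vectorize the text and fit a simple classifier that maps phrasing to intent.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, intents)

# Route a new inquiry to one of the trained intents.
print(model.predict(["can I move money to my savings account"]))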
The use of chatbots not only eliminates the need to wait for an agent to help with basic inquiries but also frees agents to focus on more-complex customer issues that need a human touch. Chatbots should, for example, be programmed to transfer calls to a human operator in situations of emotional distress, such as when a customer detects fraud in an account or calls to report a death. Humans are also better able to handle situations that present an opportunity to up-sell or cross-sell, such as a loan request or credit card inquiry.
With the ability to better understand their customers’ behaviors, banks will be able to improve the customer experience by providing faster, more-personalized service and more-tailored product offerings.
Reducing Costs, Not Head Count
Reducing Costs
Historically, banks have focused on driving down costs through lean operations and outsourcing, but in recent years many have begun to further reduce costs by using RPA for low-value-add activities. As technologies have matured and banks’ comfort levels with emerging technologies have increased, they are now turning to AI to help reduce time and resources spent on a growing number of processes.
Banks often have relied on large internal teams as well as third-party legal and professional services firms to determine the applicability of regulations and identify gaps between regulatory requirements and internal policies and procedures. However, some banks are now leveraging NLP to do this work as well as to test for ongoing regulatory compliance. Previously, this testing often required humans to manually read documents, such as loan applications, but by using optical character recognition (OCR), physical documents can be digitized so that AI can evaluate them for compliance with relevant regulatory requirements (Note: OCR is the conversion of different types of documents, such as scanned handwritten text, PDF files, or images, into editable and searchable data). For instance, the system can determine from a digitized loan application whether notices related to the loan, such as an approval or a counteroffer, were sent within the required time period. This has helped to reduce the need for third-party vendors and freed up internal compliance staff to focus on reviewing and remediating gaps identified by the AI systems.
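The sketch below illustrates the kind of timeliness check described here, applied to text that has already been extracted by OCR; the field labels, date format, and 30-day window are illustrative assumptions, not a statement of any particular regulation’s requirements.

import re
from datetime import datetime, timedelta

# Hypothetical text extracted from a scanned loan file by an OCR step.
ocr_text = """
Application received: 2017-03-01
Adverse action notice sent: 2017-03-28
"""

def extract_date(label, text):
    # Pull an ISO-formatted date that follows the given label.
    match = re.search(rf"{label}:\s*(\d{{4}}-\d{{2}}-\d{{2}})", text)
    return datetime.strptime(match.group(1), "%Y-%m-%d") if match else None

received = extract_date("Application received", ocr_text)
notice = extract_date("Adverse action notice sent", ocr_text)

if received and notice:
    # 30 days is an assumed window for illustration only.
    on_time = notice - received <= timedelta(days=30)
    print("compliant" if on_time else "flag for compliance review")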
In addition, one large bank is using intelligent process automation (IPA), which applies AI to RPA, to analyze documents such as commercial-loan agreements and extract important data points and clauses (Note: Like other forms of AI, IPA can perform tasks in a human-like manner and can learn and improve over time. Though still rule-based, IPA systems, unlike RPA systems, can make decisions). This reduces the amount of time required to analyze such documents from hours to seconds, significantly cutting both internal and external legal costs. Both IPA and OCR also have the added benefit of not requiring human oversight, so such processes can further enhance efficiency and reduce costs.
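As a simplified illustration of the extraction step, the sketch below pulls a few data points from a hypothetical loan clause using pattern matching; an actual IPA system would learn such extraction patterns from examples rather than rely on hand-written rules.

import re

# Hypothetical excerpt from a commercial-loan agreement.
agreement = (
    "The Lender agrees to advance a principal amount of $2,500,000 at a fixed "
    "interest rate of 4.75% per annum. This Agreement shall be governed by the "
    "laws of the State of New York."
)

# Illustrative patterns for a few data points a reviewer would normally locate by hand.
fields = {
    "principal": r"principal amount of \$([\d,]+)",
    "interest_rate": r"interest rate of ([\d.]+)%",
    "governing_law": r"governed by the laws of ([A-Za-z ]+?)\.",
}

for name, pattern in fields.items():
    match = re.search(pattern, agreement)
    print(name, "->", match.group(1) if match else "not found")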
Not Necessarily Reducing Head Count
Although AI does reduce the need for human capital in certain areas, the purpose of this technology is not to replace bank employees. Instead, AI is designed to augment employees’ intelligence by reducing the time it takes to sift through massive amounts of data, helping them to make better decisions with fewer errors, while providing improved and faster service to customers. In fact, human-supervised AI often achieves more accurate results than unsupervised AI.
A recent PwC survey shows that 62% of consumers believe AI can reduce the time it takes to get answers tailored to their preferences, and 43% of millennials would pay a premium for a hybrid AI system that also offers direct access to humans. A hybrid system, for example, would be beneficial in instances where a human touch is necessary to deliver the best customer experience, such as when a customer has a particularly complex or sensitive matter to discuss, as described above.
However, just because AI will not replace employees does not mean the workforce will not change. Many AI tools require an understanding of which algorithms will work best for a particular problem and a particular data set. Therefore, banks will need employees with technical backgrounds in data science and data visualization who are familiar with training AI, know which questions to ask the system, and understand what additional data is needed to bring the most value to the business. Although this will at times require banks to recruit outside talent, it also means investing in retraining existing staff.
Streamlining Risk Operations
AI applications are able to analyze and organize vast amounts of unstructured data in order to derive insights, making them particularly well suited to assisting with risk functions. For instance, generating risk and regulatory compliance reports is a labor-intensive process, but utilizing NLP to analyze text and NLG to distill that analysis into narratives allows banks to quickly transform data and efficiently present risk insights in real time. These features are particularly useful for CFOs, chief risk officers, and chief compliance officers in developing more useful, higher-value information for key constituents such as boards, senior management, and regulators.
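The following sketch gives a flavor of the NLG step, turning a handful of illustrative risk metrics into a short narrative; real systems generate far richer reporting from live data, and the metric names and figures here are assumptions.

# Illustrative weekly alert metrics that might come out of an analytics step.
alerts = {"week_current": 42, "week_prior": 30, "false_positive_rate": 0.18}

# Compute the week-over-week change and render it as a short narrative sentence.
change = (alerts["week_current"] - alerts["week_prior"]) / alerts["week_prior"]
direction = "rose" if change > 0 else "fell"

narrative = (
    f"AML alerts {direction} {abs(change):.0%} week over week "
    f"({alerts['week_prior']} to {alerts['week_current']}), with a false-positive "
    f"rate of {alerts['false_positive_rate']:.0%}."
)
print(narrative)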
One of the primary risk areas where we have seen banks adopt AI is in their anti-money laundering (AML) compliance. Many banks previously used RPA for AML compliance but found that RPA is not sensitive or sophisticated enough to detect important factors, such as changes in patterns or behavior, or tone and sentiment in conversation. As a result, many forward-thinking banks have employed regulatory technology (RegTech) solutions that utilize NLP to surveil electronic and audio communication and flag any suspiciously used terms or phrases while accounting for tone and sentiment.
RegTech solutions also use AI-enabled advanced analytics to aggregate and analyze data, allowing investigators to visualize complex financial relationships and more easily detect AML violations. For example, in order to better monitor accounts for suspicious activity, one bank used an AI system to build a holistic view of its customers, including their relationships and transactional behavior. By studying details such as specific high-risk AML transaction patterns, the strength of a customer’s relationship with a merchant, and the customer’s product portfolio, the system was able to improve the accuracy of alerts. It also identified patterns in customer attributes that signal increased AML risk, such as a sudden increase in international transactions to high-risk countries. The bank could then use this information to refine the rules it used to generate alerts, increasing the number of investigations completed each day and further reducing the number of false positives.
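One of the patterns mentioned above, a sudden increase in transactions to high-risk countries, can be illustrated with the simple sketch below; the country codes, sample data, and spike threshold are assumptions for demonstration only, not a real detection rule.

from collections import defaultdict
from statistics import mean

HIGH_RISK = {"XX", "YY"}  # placeholder country codes for illustration

# Hypothetical (customer, month, destination country) records.
transactions = [
    ("cust-1", "2017-04", "XX"), ("cust-1", "2017-05", "US"),
    ("cust-1", "2017-06", "XX"), ("cust-1", "2017-07", "XX"),
    ("cust-1", "2017-07", "YY"), ("cust-1", "2017-07", "XX"),
]

# Count high-risk transactions per customer per month.
monthly = defaultdict(lambda: defaultdict(int))
for customer, month, country in transactions:
    if country in HIGH_RISK:
        monthly[customer][month] += 1

# Flag customers whose latest month spikes relative to their own history.
for customer, counts in monthly.items():
    months = sorted(counts)
    if len(months) < 2:
        continue
    baseline = mean(counts[m] for m in months[:-1])
    latest = counts[months[-1]]
    if latest >= 3 * baseline:  # 3x baseline is an illustrative threshold
        print(f"Alert: {customer} had {latest} high-risk transactions in {months[-1]} "
              f"vs. a baseline of {baseline:.1f} per month")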
In addition, banks are adopting AI to flag the patterns that most often lead to regulatory inquiries. The system analyzes past compliance issues, learns to recognize similar patterns in the bank’s current data, and alerts the compliance team to potential issues as they occur, reducing regulatory risk by giving banks the opportunity to remediate issues before they must self-report to regulators or the regulators themselves discover the problems.
AI: A Risky Business?
Cybersecurity
As with every new and emerging technology, especially those that require collecting personal data, adopting AI comes with its own risks (Note: For example, banks that operate in the European Union will have to comply with the General Data Protection Regulation, which provides customers with certain rights over their personal data, including the right to consent to data collection and to access, review, and request the deletion of such data. For more information, see “Operational impacts of the General Data Protection Regulation.” Additionally, banks will have to comply with the New York Department of Financial Services’ rules around personal data protection, which include enhanced encryption requirements. For more information, see “Financial Crimes Observer, Cyber: New Approach from New York Regulator”). While banks are using AI to improve their identity authentication processes and better detect suspicious activity, malicious actors are also using AI to create new cyberthreats and inject biased data into algorithms’ training sets (Note: A training set is a set of data used to train the AI, which the AI then uses to discover predictive relationships). Vulnerabilities detected in AI systems can then be exploited by intelligent malware and ransomware that learns as it spreads.
Despite these risks, AI will also be part of the solution. In order to prevent such breaches, banks are establishing teams to design, build, and deploy AI systems that not only monitor ways malicious actors could “trick” algorithms but also identify “hot spots” where cyberattacks are surging.
Transparency
AI often functions in a sort of “black box” (see Figure 1), using algorithms that are beyond human comprehension, which makes it hard to (i) decipher how the system reached its conclusion and (ii) verify why a recommendation was made. This lack of transparency is a particular problem for applications of AI in financial services. If an e-commerce website uses mysterious algorithms to suggest a new shirt to a customer, the risks involved are low, but that is not the case when AI-powered software turns down a mortgage application using an algorithm into which the bank does not have complete insight.
Therefore, it is important for banks to deploy AI that is explainable, transparent, and provable, thereby gaining the trust of their leadership, customers, and regulators (Note: For example, the Securities and Exchange Commission’s guidance on automated investment advising requires that banks providing such services disclose to customers information on how algorithms are used and the related risks. For additional information, see Robo-Advisers: SEC Steps Up Scrutiny). Further, since AI learns from both historical and real-time data, it is imperative that banks understand whether there is any underlying bias in that data and, if so, address it. However, as with any other process, if every step must be documented and explained, the process becomes slower and more expensive.
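As a simple illustration of explainable output, the sketch below uses an interpretable linear model whose per-feature contributions can be reported alongside a decision; the features, training data, and labels are entirely illustrative and not representative of any actual underwriting model.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative applicant features and a tiny, made-up approval history.
features = ["debt_to_income", "credit_score_scaled", "years_employed"]
X = np.array([[0.45, 0.60, 2], [0.20, 0.85, 8], [0.55, 0.50, 1], [0.15, 0.90, 10]])
y = np.array([0, 1, 0, 1])  # 1 = approved in this toy history

model = LogisticRegression().fit(X, y)

# Score a new applicant and break the score down by feature so the decision
# can be explained in plain terms rather than treated as a black box.
applicant = np.array([0.50, 0.55, 3])
decision = model.predict(applicant.reshape(1, -1))[0]
contributions = model.coef_[0] * applicant

print("decision:", "approve" if decision else "decline")
for name, value in zip(features, contributions):
    print(f"  {name}: {value:+.2f}")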
Clearly, AI presents great opportunities for banks in the realms of customer experience, cost reduction, and risk operations. Ultimately, however, banks will have to balance the benefits of AI with the risks that come with its opacity and determine which uses work best for their operating model.