
STRENGTHENING FINANCIAL CRIME COMPLIANCE THROUGH ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING: USE CASES AND CHALLENGES

Sam W. K. Leong

CAMS


CONTENTS

1 Abstract
2 Introduction
3 Background
3.1 Artificial Intelligence vs. Machine Learning
3.2 AML Framework
4 Use Cases
4.1 Legislation/Regulation Gap Analysis Using AI/ML Techniques
4.2 CDD/KYC Management Streamlining
4.3 Transaction Monitoring
4.4 FCR Risk Assessment
5 Limitations
5.1 Algorithm Bias
5.2 Data Privacy and Regulation
5.3 Transparency, Auditability, and Traceability
6 Conclusion
7 Research Materials
8 Glossary
8.1 Abbreviations
8.2 Commonly Used Terms


1 Abstract

Artificial intelligence (AI) and machine learning (ML) are attracting much attention due to their promising self-learning and adaptive capabilities. Applying AI/ML in the financial industry offers significant cost savings and improved accuracy, particularly in labour-intensive compliance areas. This article introduces some basic use cases for strengthening financial institutions’ ability to fight financial crime risk (FCR) and investigates the inherent limitations of these technologies in terms of algorithms and data availability. It further examines how an effective AI/ML framework can address rising regulatory concerns about data protection and the use of AI/ML applications.

2 Introduction

The need for a new approach

Technology has been helping the compliance industry in the fight against financial crime risk (FCR) for a long time. The current approach to anti-money laundering (AML) is limited by the technology that has been available, under which financial institutions (FIs) can often only perform:

Rule-based screening and transaction monitoring

Manual case investigation and decision making

Backward-looking financial crime risk (FCR) scores based on a customer’s demographic information

Response to and compliance with regulatory changes through manual gap analysis

In a majority of cases, these approaches are not cost effective and cannot manage the risk in fast-changing regulatory environments. Despite huge technology spending in compliance areas, the outcome has not been satisfying. Recent research on transaction monitoring1 and CDD review has highlighted the size of the issue:

More than 90 percent false-positive transaction alerts

More than 60 minutes spent investigating a single alert

Failure to detect complex AML cases (e.g., those involving sanctioned countries)

Prolonged CDD review processes; customer onboarding takes an average of 24 days2

70 percent of the CDD/KYC cost is spent on repetitive work done by employees3

Financial institutions are turning to new technologies such as artificial intelligence (AI) and machine learning (ML) to fill their compliance gaps, reduce costs, improve operational effectiveness, and ultimately strengthen their capabilities in fighting financial crime risk (FCR) in the face of ever-increasing regulatory compliance requirements and burdens. Unlike human brains, AI/ML systems perform well at certain tasks but not at everything. This paper focuses on the strengths of AI/ML systems and their use in various compliance areas, such as CDD/KYC and transaction monitoring, and presents a balanced view of how their applications may be constrained by inherent bias and new data regulations.

1 Quantexa, Contextual Monitoring: Enabling banks to reduce false positives
2 Thomson Reuters, 2017 Cost of Compliance Survey
3 Sia Partners, RegTech Study European Landscape, September 2018

3 Background

3.1 Artificial Intelligence vs. Machine Learning

Some scholars and IT companies tend to use the terms artificial intelligence (AI) and machine learning (ML) interchangeably. The term artificial intelligence was first coined by a group of researchers during a workshop called the Dartmouth Summer Research Project in 1956. It is a sub-field of computer science focused on how machines can imitate human intelligence. Machine learning, however, is an application of AI that processes data and allows computers to learn on their own without constant supervision. In some contexts, AI refers expansively to a combination of ML and non-ML applications. It is not the purpose of this paper to provide a formal discussion of their distinction; however, the distinction allows a better comprehension of the technologies being deployed in compliance programs.

Unlike human brains, AI systems are typically purpose-built; general-purpose AI is technically difficult or too expensive to build. AI systems can perform well within particular areas but not in others. Most AI systems have three major capability building blocks:

a) Recognition and Classification: Classification is the process of predicting the class of given data attributes. Classes are the labels or categories. Classification modelling builds the mapping from the input attributes to the outcome (class) based on historical/statistical data.
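As a minimal sketch (not from the paper), the Python snippet below shows a supervised classifier learning the mapping from input attributes to a class label; the attribute names, training records, and risk labels are hypothetical, and scikit-learn is assumed to be available.

from sklearn.tree import DecisionTreeClassifier

# Historical records: [monthly transaction count, cash ratio, number of countries touched]
X_train = [
    [12, 0.05, 1],
    [340, 0.60, 7],
    [25, 0.10, 2],
    [410, 0.75, 9],
]
y_train = ["low_risk", "high_risk", "low_risk", "high_risk"]  # known classes (labels)

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)             # learn the attribute-to-class mapping
print(model.predict([[280, 0.55, 6]]))  # classify a new, unseen record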


Figure 1. Machine Learning—Classification

The most common application is automated classification, which comprehends unstructured data and converts it into meaningful, structured data. Structured data comprise clearly defined data types, for example, a string of names, while unstructured data are usually not as easily searchable, including formats such as audio, video, images, and social media postings. Automated classification helps to remove the manual work of data input and classification. For example, online Web services such as the ParallelDots API provide deep-learning APIs for text and audio recognition.4 They help FIs classify customer documents (e.g., passport photos) and convert scanned documents into useful structured data by capturing the date of birth, nationality, and passport number from the scanned image. That data can then be used for further demographic analysis or for deriving risk scores.
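To illustrate the final step, here is a minimal sketch (hypothetical document layout and field patterns) that pulls structured fields out of OCR output with simple pattern matching; production systems would rely on the trained extraction models described above.

import re

ocr_text = """
Passport No: K1234567
Nationality: SGP
Date of Birth: 04 JUL 1985
"""

patterns = {
    "passport_number": r"Passport No:\s*([A-Z0-9]+)",
    "nationality":     r"Nationality:\s*([A-Z]{3})",
    "date_of_birth":   r"Date of Birth:\s*([0-9]{2} [A-Z]{3} [0-9]{4})",
}

record = {}
for field, pattern in patterns.items():
    match = re.search(pattern, ocr_text)
    record[field] = match.group(1) if match else None  # leave gaps rather than guess

print(record)  # structured data ready for demographic analysis or risk scoring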

b) Intelligent Functions: The core capabilities of AI systems are decision making, problem solving, reasoning, and prediction. Systems can be trained through the two modes below, illustrated in the sketch that follows:

Supervised learning: presents known results together with recognizable triggers (e.g., inputs or patterns), incorporating external or existing experience into the system.

Unsupervised learning: requires the system to self-organize the knowledge and outcome without introducing any external classification.
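A minimal sketch of the contrast between the two training modes, using toy transaction features and assuming scikit-learn; the data and labels are invented for illustration.

from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Each row: [monthly transaction count, share of cash transactions]
X = [[10, 0.05], [15, 0.08], [300, 0.70], [350, 0.65]]

# Supervised: known outcomes ("labels") are provided together with the inputs.
y = [0, 0, 1, 1]                      # 0 = not suspicious, 1 = suspicious
clf = LogisticRegression().fit(X, y)
print(clf.predict([[320, 0.60]]))     # predicts the known class for a new case

# Unsupervised: no labels; the system organizes the data into groups on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)                       # group membership discovered from the data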

Several AI applications exist that can enhance decision-making capability. For example, a transaction monitoring system will alert users about suspicious transactions, and a negative news screening system will alert compliance users about a material hit.

4 https://www.paralleldots.com

c) Supporting Functions: executing, interacting, and communicating outcomes from the decision models. Once the decision-making information has been collected, the AI system has to interact and communicate with its users. System alerts are generated based on predefined priorities or operational rules—e.g., transaction monitoring systems generate prioritized alerts based on incident age, transaction amount, etc.
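As an illustration of such prioritization rules, a minimal sketch follows; the weighting of incident age and transaction amount is hypothetical, not a documented scoring scheme.

from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    age_days: int        # how long the alert has been open
    amount: float        # transaction amount involved

def priority(alert: Alert) -> float:
    # Hypothetical weighting: older and larger alerts surface first.
    return alert.age_days * 1.0 + (alert.amount / 10_000) * 2.0

alerts = [
    Alert("A-001", age_days=2, amount=250_000),
    Alert("A-002", age_days=10, amount=5_000),
    Alert("A-003", age_days=1, amount=1_200_000),
]

for a in sorted(alerts, key=priority, reverse=True):
    print(a.alert_id, round(priority(a), 1))   # highest-priority alerts first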

3.2 AML Framework

Traditional AML processing (including onboarding, customer due diligence, and transaction monitoring) is a labour-intensive, manual process.

Figure 2. Generic AML Framework

Customer due diligence (CDD) and know your customer (KYC) are commonly used in conjunction with transaction monitoring and UAR/SAR reporting tools to safeguard the financial industry from financial crime risk (FCR). CDD is the process of obtaining information and documentation to ensure that the FIs have a reasonable belief about the customers’ true identity, expected activity, and purpose of the account. CDD reviews are conducted during new customer onboarding, at periodic customer reviews, and when unusual activity reports (UARs)/suspicious activity reports (SARs) are raised as a result of transaction activity or ongoing negative news screening (NNS) and negative facts screening (NFS) activities. During the CDD/KYC process, customer demographic information is collected together with the transaction activity to generate a risk rating. The CDD profile and risk rating will be reviewed by the banks to assess whether the customer is within the tolerated risk appetite and the regulatory environment. Key activities involved in the CDD process:

Identification and Verification (ID&V):

Identification: Collect information about the customer so that the FIs know who their customers are.

Verification: Ask customers for evidence of their identity by verifying customer information against trusted sources (e.g., Companies House/company registers/government-issued documents/Bloomberg).

Screening:

Screen the relevant parties, including the legal entity, individuals, and their connected parties (see Glossary). That would include negative news and facts, political exposure, and sanctions connections. Where negative news or facts are identified, judgmental assessment is required to determine the materiality to a customer’s overall risk rating.

Risk Rating: Assess customers’ FCR based on predefined risk components; for example, country of business focus and registration, subscribed products and services, and customers’ legal entity setup and employment status.

Extended Due Diligence: For special categories of customers, e.g., politically exposed persons (PEPs), additional information has to be collected to assess the potential FCR inherited from the banking and business relationships.

Business Approval and Customer Exit Management: An FI will review holistically the FCRs of customers at the individual customer level and at the portfolio level, in order to align with the organization’s risk appetite.


4 Use Cases

RegTech, or regulatory technology, is a growing area of technological services. RegTech is the marriage of technology and regulation to address regulatory challenges (Deloitte).5 The marriage of regulation and technology is not new, but equipped with the latest enabling techniques in AI/ML, it is receiving more attention. KPMG6 has predicted that RegTech will grow from 4.8 percent of all regulatory spending (in 2017) to 34 percent by 2022. With sufficient attention and funding, the next question is how AI/ML technologies can be deployed to solve compliance challenges in the ever-changing regulatory environment.

4.1 Legislation/Regulation Gap Analysis Using AI/ML Techniques

The rapidly changing regulatory environment has posed challenges to FIs in updating their policies. One example is the implementation of the Markets in Financial Instruments Directive II (MiFID II). The first and original directive took effect in November 2007; the regime is now overseen by the European Securities and Markets Authority (ESMA). The new directive from the European Union was proposed to foster fairer, safer, and more efficient markets for all participants. Under the new directive, 30,000 pages of regulatory requirements covering policy changes, implied enhancements in systems, and data and controls must be implemented in order to fulfil the MiFID II obligations. It could easily have taken a compliance department half a year to understand them and then perform the gap analysis against the 2007 regulation in order to institute the newest changes. For this, AI/ML solutions can leverage their natural language processing (NLP) capability to scan, evaluate, and perform the gap analysis across huge volumes of regulatory documents. Normally, it would take the legal department of an FI months to read thousands of pages of such documents before identifying the required changes under the new policy. However, with AI/ML cognitive capability, an FI can automatically and quickly short-list applicable regulatory requirements. Commonwealth Bank of Australia (CBA) completed a RegTech pilot in partnership with the Dutch bank ING in 2018; the banks mapped the applicable regulations from the 1.5 million paragraphs of MiFID II with 95 percent accuracy.
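As a minimal sketch of the idea (not the CBA/ING pilot), the snippet below flags paragraphs of a new regulation that have no close match in the old text, using TF-IDF cosine similarity as a crude stand-in for the NLP pipeline; the paragraph texts and the 0.5 threshold are invented, and scikit-learn is assumed.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

old_paragraphs = [
    "Firms must record client orders and keep the records for five years.",
    "Firms shall act honestly, fairly and professionally with clients.",
]
new_paragraphs = [
    "Firms must record client orders, including telephone conversations, for five years.",
    "Algorithmic trading firms shall have effective systems and risk controls in place.",
]

vectorizer = TfidfVectorizer().fit(old_paragraphs + new_paragraphs)
old_vecs = vectorizer.transform(old_paragraphs)

for text in new_paragraphs:
    # Compare each new paragraph with every old paragraph; keep the best match.
    best = cosine_similarity(vectorizer.transform([text]), old_vecs).max()
    status = "existing obligation (review wording)" if best > 0.5 else "potential gap"
    print(f"{status}: {text[:60]}...")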

5 Deloitte, RegTech is the new FinTech
6 KPMG, Embracing the challenge of RegTech 3.0


4.2 CDD/KYC Management Streamlining

Harvesting Data

According to recent research, 70 percent of the cost is spent on repetitive work. Particularly in the CDD identification and verification process, it is labour-intensive for the FIs to capture, classify, and extract data from scanned images. With AI/ML capability, an automated process can be introduced to:

Classification and document indexing: Various types of documents, such as passport photos, certificates of incorporation, and memorandums of agreement, are submitted by customers. AI/ML platforms can be used to classify the documents and to scan and index them for future use.

Document analysis: The natural language processing capability of ML systems can help to identify key information from unstructured data (e.g., scanned documents). For example, whether an entity can issue bearer shares is key information for understanding a customer’s beneficiary structure, and the ownership structure can be built from the documents received from customers without manual intervention.

Fraudulent Document Detection

In addition to data harvesting, these platforms include potential functionalities such as fraudulent document detection. Fraudulent documents are commonly used to facilitate terrorism, drug trafficking, etc. Image recognition capability enables the platforms to perform analysis based on font size, the format of the data, and human signatures to determine whether documents are fraudulent. This used to be a complex technique available only to police and other law enforcement agencies. However, it is now commonly available from different software vendors.

Automated Screening Activities

Screening is performed using rule-based algorithms: for instance, using a customer’s first name and last name, perform a name-based search against a news database. The negative news/facts hits are then further reviewed and discounted through manual reviews. With AI/ML technologies, screening is no longer linear rule-based screening. The systems are capable of performing the analysis based on customer-provided data, such as age, gender, and nationality. The profiling can glean information from social media and Internet search engines, etc., in addition to news sources. NLP helps the investigator perform the preliminary discounting and thus remove a significant number of false alarms. Human resources can then be focused on the material hits.
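A minimal sketch of that flow follows: fuzzy name matching against a watchlist, with a crude date-of-birth check used to discount non-material hits. The watchlist entries, threshold, and discounting rule are hypothetical.

from difflib import SequenceMatcher

watchlist = [
    {"name": "JOHAN SMIT", "dob": "1970-01-15"},
    {"name": "MARIA GONZALEZ", "dob": "1985-06-02"},
]

def screen(customer_name: str, customer_dob: str, threshold: float = 0.85):
    hits = []
    for entry in watchlist:
        score = SequenceMatcher(None, customer_name.upper(), entry["name"]).ratio()
        if score >= threshold:
            # Preliminary discounting: a mismatched date of birth marks the hit as non-material.
            material = entry["dob"] == customer_dob
            hits.append({"match": entry["name"], "score": round(score, 2), "material": material})
    return hits

print(screen("JOHANN SMIT", "1992-03-30"))  # near-name match, discounted by DOB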

4.3 Transaction Monitoring

As global trade activities increase, global FIs face the challenge of handling the transactions and related trade documents. HSBC’s Global Trade and Receivable Finance (GTRF) team facilitates over US$500 billion of documentary trade for customers every year. This would not be possible without an AI platform performing text analytics to identify and extract key data within the documents.7 In addition to the challenge of volume, current transaction monitoring (TM) systems rely on simplistic rule-based monitoring to detect anomalies. The existing approach fails to understand the context of the trade—that is, who is the ultimate originator and the beneficiary of the trades. Due to their rule-based nature, systems can only incorporate limited data points to generate alerts. Usually fewer than 30 data points are used to generate a Level 1 alert, whereas more than 100 data points are required for an investigator to complete an investigation; this is also the major reason for the high, 90 percent alert rejection rate (Quantexa).8 With the help of AI/ML, transaction monitoring can be extended into contextual monitoring, which can join and connect FIs’ internal databases and externally available data to build the context of the trade. The more complex ML algorithms enable the system to build the context of the trade based on:

Internal transaction data (based on the transaction information)

Internal reference data; for example, from the FIs’ KYC data

External reference sources, for example Bureau van Dijk (BvD) and Bloomberg

The Legal Entity Identifier (LEI) maintained by the Global Legal Entity Identifier Foundation (GLEIF),9 which helps to link up legal entities without fuzzy matching

These sources help the transaction monitoring system to build the context of the trade and determine what is legitimate. The sample below (source: Quantexa) illustrates a situation in which the bank can easily identify that a foreign payer and receiver are in fact the same legal entity that initiated a transaction. In some situations, this helps to suppress unnecessary false alarms.
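A minimal sketch of the linking step: payer and receiver records are resolved to one legal entity because they share an identifier such as an LEI. The records and identifier values are invented for illustration.

from collections import defaultdict

records = [
    {"role": "payer",    "name": "Acme Trading Ltd",  "lei": "LEI-EXAMPLE-0001"},
    {"role": "receiver", "name": "ACME Trading (UK)", "lei": "LEI-EXAMPLE-0001"},
    {"role": "receiver", "name": "Bolt Metals GmbH",  "lei": "LEI-EXAMPLE-0002"},
]

entities = defaultdict(list)
for record in records:
    entities[record["lei"]].append(record)  # link records that share an identifier

for lei, linked in entities.items():
    roles = {r["role"] for r in linked}
    if {"payer", "receiver"} <= roles:
        # Same legal entity on both sides of the transaction: context that a
        # rule-based system with separate name fields would likely miss.
        print(f"Payer and receiver resolve to one entity ({lei}):",
              [r["name"] for r in linked])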

7 HSBC and IBM develop cognitive intelligence solution to digitize global trade
8 Quantexa, Contextual Monitoring: Enabling banks to reduce false positives
9 Global Legal Entity Identifier Foundation, https://www.gleif.org/en/


Figure 3. Linked Legal Entity Trading, Source: Quantexa 10

4.4 FCR Risk Assessment

A customer’s financial crime risk is usually assessed with a rules-based approach by aggregating various risk factors, including country risk, business risk, legal-entity structural risk (for corporate customers), and product risk. The assessments are very rigid and do not consider the correlations between the risk components. AI/ML technologies, however, can provide a more in-depth analysis on the risk profiles by reviewing the risk in a holistic manner.

Overall FCR Risk Rating = Σ_i (Risk Factor_i × Weighting_i)
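A minimal sketch of this weighted-sum calculation follows; the component scores, weights, and rating bands are hypothetical.

risk_factors = {          # component score on a 1 (low) to 5 (high) scale
    "country_risk": 4,
    "business_risk": 2,
    "legal_structure_risk": 3,
    "product_risk": 1,
}
weightings = {            # assumed to sum to 1.0 in this illustration
    "country_risk": 0.40,
    "business_risk": 0.25,
    "legal_structure_risk": 0.20,
    "product_risk": 0.15,
}

overall = sum(risk_factors[f] * weightings[f] for f in risk_factors)
band = "high" if overall >= 3.5 else "medium" if overall >= 2.5 else "low"
print(f"Overall FCR risk rating: {overall:.2f} ({band})")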

10 Quantexa white paper: Contextual Monitoring: Enabling banks to reduce false positives while catching the bad guys.


Figure 4. Sample FCR Risk Score Calculation

Without AI/ML, the FCR risk score is an aggregated result based on a customer’s demographic information and risk factors, including:

Country Risk—countries where the customers have assets, business transactions, business operations etc.

Business Risk—the business nature of the customers; for example, whether customers are involved in international trading or local stores

Legal Structure Risk—the legal entity setup of the customer; for example, individuals, corporates, funds, etc.

Trade Risk—depending on the transaction patterns, volume, and amount

Under the risk-based approach (RBA) of the risk management framework, the risk rating determines whether extended due diligence is required and also drives the frequency of the periodic review. However, this type of backward-looking risk scoring fails to measure the likelihood that a customer will commit financial crime in the future. With AI/ML capability, additional predictive elements can be introduced to provide a forward-looking and more holistic view of the potential financial crime risk. For example:

News screening: Negative news and negative facts screening of customers’ connected parties, together with country-related and industry-specific news, should be incorporated into the final risk scoring. In particular, information from social media such as Instagram, Facebook, and LinkedIn can help build a well-rounded risk profile of a customer.

Dynamic risk weighting: Weighting of the risk factors can be adjusted due to market changes, news information, etc. (see the sketch after this list).

Trade information with context: Gain context by looking into the trading counterparties’ business nature and matching the connected parties (e.g., directors/beneficial owners), trading volumes, trading amount, etc.
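For the dynamic weighting idea, a minimal sketch (hypothetical signal and scaling): the country-risk weight is nudged upward when the volume of relevant negative news rises, and the weights are renormalized so they still sum to one.

base_weights = {"country_risk": 0.40, "business_risk": 0.25,
                "legal_structure_risk": 0.20, "product_risk": 0.15}

def reweight(weights: dict, news_hits_last_30d: int) -> dict:
    adjusted = dict(weights)
    adjusted["country_risk"] *= 1 + min(news_hits_last_30d, 20) * 0.05  # cap the uplift
    total = sum(adjusted.values())
    return {k: v / total for k, v in adjusted.items()}                  # renormalize to 1.0

print(reweight(base_weights, news_hits_last_30d=8))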


5 Limitations

5.1 Algorithm Bias

Algorithm bias, also known as machine bias or AI bias, is the phenomenon in which overly subjective or unrepresentative data sets produce systematically skewed outcomes against particular groups (for example, by race, gender, or sexuality). Algorithms can also have built-in biases inherited from their creators, who have conscious or unconscious preferences. A new Institute of Electrical and Electronics Engineers (IEEE) standard on algorithm bias considerations is under development (https://standards.ieee.org/develop/project/7003.html). For example, ProPublica’s analysis suggested that Northpointe’s COMPAS tool (Correctional Offender Management Profiling for Alternative Sanctions), used in various U.S. jurisdictions to recommend bail amounts and sentencing, has racial bias. The analysis found that black defendants are far more likely than white defendants to be incorrectly judged to be at higher risk of recidivism.

Prediction Fails Differently for Black Defendants

                                              White    African American
Labelled Higher Risk, But Didn’t Re-Offend    23.5%    44.9%
Labelled Lower Risk, Yet Did Re-Offend        47.7%    28.0%

Source: ProPublica analysis of data from Broward County, FL

Machine learning has been widely used in providing computer-aided business decisions, such as transaction alerts and customer FCR risk assessments. Thus, it is essential to understand how to remove harmful biases. In order to eliminate potential biases, organizations can take the following measures:

Understand the limitations and shortcomings of an algorithm.

Ensure that the training data are representative of different genders, racial groups, etc.

Review regularly the performance of the algorithm and allow executives to have the authority to decide when to switch to manual review.

Introduce black box testing, a software testing method in which the internal structure, design, and implementation of the algorithm or software is not known to the tester, who focuses on the outcomes to pick up consistent bias (see the sketch below). For example, the Algorithm Toolkit11 built by the Centre for Government Excellence can be used to examine algorithm bias.
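A minimal sketch of such an outcome-focused check: comparing false-positive rates across groups on synthetic labelled outcomes, in the spirit of the ProPublica analysis above. The records are invented.

from collections import defaultdict

# (group, model_flagged_high_risk, actually_re_offended) -- synthetic records
outcomes = [
    ("group_a", True,  False), ("group_a", False, False), ("group_a", True,  True),
    ("group_b", True,  False), ("group_b", True,  False), ("group_b", False, True),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, flagged, re_offended in outcomes:
    if not re_offended:                      # only non-re-offenders can be false positives
        counts[group]["negatives"] += 1
        if flagged:
            counts[group]["fp"] += 1

for group, c in counts.items():
    rate = c["fp"] / c["negatives"] if c["negatives"] else float("nan")
    print(f"{group}: false-positive rate = {rate:.0%}")   # a large gap signals bias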

11 Algorithm Toolkit: Reduce bias from automated decisions made by local governments


5.2 Data Privacy and Regulation

AI and ML have distinctive features that differ from traditional data analytics and that lead to more complex issues around protecting individuals and organizations from potential misuse. New regulations on the use of big data, artificial intelligence, and machine learning have emerged because of these distinctive features. The distinctive aspects of AI/ML include:

Use of algorithms and self-learning: Unlike predefined data sets and queries, AI/ML often pick up correlations in the data by running large numbers of different algorithms and optimizing the relevant criteria.

Transparency of automated decisions: State-of-the-art AI/ML often deploys non-linear algorithms, and it is difficult to trace the reasons for the automated decisions.

Appetite to collect “all data” and data repurposing: The big-data approach usually includes all available data in order to identify correlations between data points; thus, it has the appetite, or tendency, to include all data. However, current Personal Information Collection Statements (PICS) only allow organizations to collect information for a “specific” purpose, not for all purposes.

The details of data privacy and data usage regulations vary by jurisdiction and regulation. However, there are common principles and concerns from regulators that FIs have to pay attention to during the design and implementation of an AI/ML process for fighting FCR. The key principles include:

Fairness and reasonable expectation: Organizations need to consider whether the use of personal data in an AI/ML program is within reasonable expectation. For example, the EU’s General Data Protection Regulation (GDPR) Article 5(1)(a) states that personal data must be “processed fairly, lawfully, and in a transparent manner in relation to the data subject.”

Transparency/auditability: The EU’s General Data Protection Regulation (GDPR) Recital 71 (http://www.privacy-regulation.eu/en/recital-71-GDPR.htm)12 requires that organizations provide an explanation of a decision reached through automated processing. More comprehensive guidelines are provided by MiFID II, Title II, Article 17, which states that regulators may request information about:

Strategy behind an algorithm

Details of the parameters used

Limits within an algorithm

Risk and compliance control in place

12 A recital is a nonbinding description of the law.


Data minimization—collection and retention: Organizations are discouraged from collecting data in an excessive manner, and data should be kept for no longer than is necessary. For example, Principle 3 of the UK’s Data Protection Act 201813 states that personal data should be “used in a way that is adequate, relevant, and limited to only what is necessary.”

5.3 Transparency, Auditability, and Traceability

AI/ML systems often involve multiple levels of decision making, and it can be challenging for FIs to understand and control these systems. For example, in a transaction monitoring system, it is common for the system to use transaction patterns as inferred data—data developed around the user without the user’s express input—to assess the need for a red flag. The opacity of ML solutions poses challenges under regulations such as the General Data Protection Regulation (GDPR): FIs have to demonstrate the appropriateness and fairness of their AI-based decisions. The adoption of AI/ML solutions requires FIs to develop processes and tools to manage the inherent risk, and AI has to be embedded into the existing risk management framework. “Firms do not require completely new processes for dealing with AI, but they will need to enhance existing ones to take into account AI and fill the necessary gaps,” according to a recent Deloitte paper, “AI and Risk Management.” AI transparency and auditability are the major concerns. The paper suggested that the gaps could be filled by enhancing the existing risk management framework in the following areas:

Model Risk

Documentation: FIs and their vendors should document the algorithms and processes used to assist computer-aided decisions, and in particular their understanding of the limitations of the systems used.

Algorithm Bias and Feedback: Bias can be inherent in data availability or in the algorithms used; AI systems are only as good as the data fed into them. Continuous feedback and learning are required to retrofit the system design and data usage.

Monitor and Report

Agreed reporting metrics: A methodology, including relevant metrics, must be designed and agreed on among all stakeholders for measuring AI solutions’ effectiveness in a controlled environment.

Transparency: The GDPR requires institutions to describe their data processing in easily accessible information. For example, in a financial crime risk rating system, rather than just reporting the final aggregated risk rating (high/medium/low), institutions need to provide the reasons behind the automated decision-making (e.g., the risk scores of the risk components/factors), as illustrated in the sketch below.

13 The Data Protection Act 2018, UK (May 2018). Retrieved from https://www.gov.uk/data-protection
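A minimal sketch of recording the reasons behind each automated rating so the decision can be explained and audited later; the field names, model version label, and scores are hypothetical.

import json
from datetime import datetime, timezone

def log_decision(customer_id: str, component_scores: dict, weights: dict, model_version: str) -> str:
    overall = sum(component_scores[k] * weights[k] for k in component_scores)
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "customer_id": customer_id,
        "model_version": model_version,
        "component_scores": component_scores,   # the reasons, not just the result
        "weights": weights,
        "overall_rating": round(overall, 2),
    }
    return json.dumps(entry)                    # persist to an audit store in practice

print(log_decision("C-1001",
                   {"country_risk": 4, "business_risk": 2},
                   {"country_risk": 0.6, "business_risk": 0.4},
                   model_version="fcr-rating-0.3"))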

Roles and Responsibilities

Throughout the AI life cycle, roles and responsibilities must be clearly defined and documented during the early stages of deployment. In addition, continuous engagement and oversight from key stakeholders (e.g., IT, compliance, line of business, data scientists) have to be agreed on.

6 Conclusion

AI/ML technologies are becoming core components of financial institutions’ strategies to deliver cost effectiveness and operational efficiency. However, FIs are at the very early stages of adopting AI/ML solutions to strengthen their ability to fight financial crime risk, and it is important for FIs to understand best practices for adopting AI/ML systems in their compliance frameworks.

There are strong use cases for financial institutions to cut operational costs by eliminating manual processes in the early stage of preparing customer CDD profiles—such as removing the manual effort during the Identification and Verification (ID&V) process. More advanced applications can assist in the risk assessment and approval process. The cognitive features of AI systems can help to eliminate false alarms, so that compliance staff can focus on reviewing the material issues.

However, adopting AI/ML is not like buying off-the-shelf software. It requires understanding the limitations of these technologies and the regulatory requirements for using AI/ML automated decisions. Firms must first understand the implications of AI from a risk perspective. They have to validate whether the AI/ML functions fit in the firm’s AML framework and whether they are suitable. Firms have to demonstrate their understanding of the strategy behind the algorithms and the limitations of how data are used. The FI has to explain how data are correlated and weighted in its AI system in order to address regulatory concerns about algorithm bias. It is difficult to explain an automated decision after successive layers of processing; the opacity of AI systems can be improved by exposing the intermediate results of each layer. Finally, risk and compliance controls have to be in place to ensure that AI-related risks can be effectively identified and managed within the limits set by the firms and the regulators.


7 Research Materials

[1] J. Angwin, J. Larson, S. Mattu, & L. Kirchner (May 23, 2016), Machine Bias, ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[2] Information Commissioner’s Office, UK (Sept. 4, 2017), Big Data, Artificial Intelligence, Machine Learning and Data Protection. Retrieved from https://ico.org.uk/media/for-s/documents/2013559/big-data-ai-ml-and-data-protection.pdf

[3] The Data Protection Act 2018, UK (May 2018). Retrieved from https://www.gov.uk/data-protection

[4] Rahmel, Dr. Juergen, and Hussain, Yousuf (Sept. 4, 2017), Artificial Intelligence and Banking Regulations, Hong Kong.

[5] AI and Risk Management, Deloitte Centre for Regulatory Strategy EMEA. Retrieved from https://www2.deloitte.com/content/dam/Deloitte/lu/Documents/risk/lu-ai-and-risk-management.pdf


8 Glossary

8.1 Abbreviations

FI According to Investopedia: A financial institution (FI) is a company engaged in the business of dealing with financial and monetary transactions, such as deposits, loans, investments, and currency exchange. Financial institutions encompass a broad range of business operations within the financial services sector, including banks, trust companies, insurance companies, brokerage firms, and investment dealers. Virtually everyone living in a developed economy has an ongoing or at least periodic need for the services of financial institutions.

8.2 Commonly Used Terms

Connected parties: As defined by the Hong Kong Monetary Authority under Section 83 of the Banking Ordinance, they include, inter alia:

the Authorized Institution’s (AI’s) directors;

those of the AI’s employees who are responsible for approving loan applications;

the AI’s controller(s) or minority shareholder controller(s);

any firm, partnership, or non-listed company in which the AI or its controller, minority shareholder controller, or director is interested as director, partner, manager, or agent;

any individual, firm, partnership, or non-listed company of which any controller, minority shareholder controller, or director of the AI is a guarantor; and

any relative of an individual who is a connected party of the AI as defined above.

Inferred data Inferred data are data developed around the user without express input.

Declared data Declared data are data that have been willingly shared by the user.