This year is set to be another busy year on the regulatory front with the heavy lifting on all the big regulatory headliners expected to take place. In particular, regulatory data will take centre stage this year as financial institutions strive to implement solid data foundations upon which to build rigorous compliance initiatives.
However, the hype that surrounds data threatens to make this a far more complicated and burdensome process than it needs to be. We’re being bombarded by data sound bites all the time – big data, data analytics, data visualisation, data integrity – data, data, data! There are so many confusing and contradictory views on good data management that it’s hard to decipher a best practice approach. As Richard Branson is fond of saying, “Any fool can make something complicated”. In my view, the overarching principle for good data management should be – quite simply – simplicity.
Regulatory Data – Reviews and Remediation
In preparation for the regulatory onslaught, financial institutions are (or should be) busily reviewing all data and documentation held on existing clients and counterparties to ensure compliance with new regulations – identifying missing attributes, collecting additional information and generally filling in any blanks that may exist.
While banks have no option but to undertake this regulatory data review, the upside is that a good percentage of the effort – probably 60-70% – is common to all regulatory obligations. As long as the data is structured and stored in a central repository (whether physical or logical), financial institutions can re-use it across a broad range of regulatory obligations, managing the remaining 30-40% delta on a regulation-by-regulation basis.
The downside, however, is that for financial institutions that operate predominantly paper-based, manual processes, the client and counterparty review may take considerably more effort. That said, it presents an excellent opportunity to transform paper-based data into a digitally accessible, easily extractable structured format. This is when data truly becomes an asset, moving out of the realm of grudge management into the realm of valuable, actionable information. With a little automated help from a targeted toolset (which should include workflow, rules engine, remediation and case management), financial institutions can manage this effort far more efficiently and effectively.
Having completed a number of these initiatives for our own clients, I’ve compiled a five-step process to undertaking this KYC and client data remediation process.
Step 1: Understand, Import and Consolidate
The first critical step in performing KYC data remediation is to understand the minimum viable set of data that needs to be managed. The aim is to reduce the set of data to be managed – not increase it. Often, large data sets are dragged around the institution due to interdepartmental silos and non-sharing of data. Understanding the minimum data set required to facilitate classification reduces the effort involved in managing and maintaining this data. When all pertinent client / account data and documentation has been identified and collated from key repositories around the financial institution into one physical or logical place, a consolidated profile of the client / counterparty can be created and the KYC remediation process can commence.
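As a rough illustration of this consolidation step, the sketch below pulls a client's records from several source systems into one profile, keeping only the minimum viable field set and flagging contradictory values for remediation in the next step. The repository names and field names are hypothetical, not a real vendor API.

```python
def consolidate(client_id, repositories, minimum_fields):
    """Build one consolidated profile from several source systems,
    keeping only the minimum viable data set.

    `repositories` maps a source-system name to its records, each
    keyed by a shared client identifier. Contradictory values are
    collected separately so Step 2 can remediate them.
    """
    profile = {}
    conflicts = {}
    for repo_name, records in repositories.items():
        record = records.get(client_id, {})
        for field in minimum_fields:
            if field not in record:
                continue
            value = record[field]
            if field in profile and profile[field] != value:
                # Same attribute, different value in another silo:
                # flag it rather than silently overwrite.
                conflicts.setdefault(field, {})[repo_name] = value
            else:
                profile[field] = value
    return profile, conflicts

# Illustrative source systems holding fragments of the same client
repositories = {
    "crm": {"C1": {"name": "Acme Ltd", "address": "Dublin"}},
    "trading": {"C1": {"address": "London", "lei": "5493001KJTIIGC8Y1R12"}},
}
profile, conflicts = consolidate("C1", repositories, ["name", "address", "lei"])
```

Here the two systems disagree on the address, so it surfaces in `conflicts` for the remediation team rather than being lost in the merge.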
Step 2: Client Data Remediation & Rules Engine
The second step involves cleansing and remediating the data, including all contradictory data, to create a golden source data record. This updated profile can be used to identify any missing or incomplete data or documentation with regard to existing regulations, such as KYC and AML regulations, and imminent regulations such as FATCA, Dodd-Frank, EMIR and MiFID II. Using a rules-based compliance engine, the financial institution should be able to create a checklist of the data and documentation required to comply fully with each regulatory obligation. Each regulation will have its own specific requirements in terms of:
- the data and documentation that needs to be captured
- the KYC questions that need to be answered
- all the regulations that need to be supported
- the risk assessment that needs to be undertaken
- the classifications that need to be completed.
The automated system should then compare this checklist against the data attributes held in the client / counterparty profile to determine what exists, whether it is still in date, and what is missing.
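That comparison can be sketched as a simple gap analysis: for each attribute a regulation's checklist requires, check whether the profile holds it and whether any supporting document has expired. The checklist contents, field names and dates below are illustrative assumptions, not actual regulatory requirements.

```python
from datetime import date

# Hypothetical checklist of attributes one regulation requires
FATCA_CHECKLIST = ["tax_residency", "giin", "w8_form"]

def gap_analysis(checklist, profile, today):
    """Split required attributes into present, expired and missing.

    `profile` maps attribute name -> (value, expiry_date_or_None).
    """
    present, expired, missing = [], [], []
    for attr in checklist:
        if attr not in profile:
            missing.append(attr)
            continue
        value, expiry = profile[attr]
        if expiry is not None and expiry < today:
            # Held on file, but the supporting document is out of date
            expired.append(attr)
        else:
            present.append(attr)
    return present, expired, missing

profile = {
    "tax_residency": ("IE", None),
    "w8_form": ("doc-123", date(2013, 1, 1)),  # expired form
}
present, expired, missing = gap_analysis(
    FATCA_CHECKLIST, profile, today=date(2014, 1, 1)
)
```

The `expired` and `missing` buckets then drive the collection effort described in Step 3.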
Step 3: Identify and Collect Missing or Incomplete Data and Documentation
If the client or counterparty profile is missing vital data or supporting documentation, it is deemed to be out of compliance. Therefore, efforts need to be put in place to identify and collect these key missing data attributes. Financial institutions should use targeted workflows to automate the capture of client data and documentation and facilitate such capture through user-friendly, self-service portals. This not only enables the financial institution to better manage client communication, it empowers clients to submit updated or new information and documentation securely and conveniently. It can also improve the completeness and accuracy of the information collected through real-time validation. As each data attribute is collected and processed, it should be logged against the document checklist, ensuring all required supporting documents are in place.
Step 4: Performing Classifications of Clients / Counterparties / Accounts
Once all the data and documentation is in place, the financial institution is in a position to classify the client / counterparty / account in accordance with each regulation, enabling the bank to prove completion of screening to auditors and regulators with evidential documentation. Again, a rules-based compliance engine will handle all of this, leaving the compliance team to manage the exceptions.
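The "rules handle the bulk, humans handle the exceptions" pattern might look like the sketch below: each regulation contributes a classification rule, and any profile a rule cannot decide is routed to the compliance team. The EMIR rule here is a deliberately simplified stand-in, not a faithful rendering of the actual counterparty classification criteria.

```python
def classify_emir(profile):
    """Hypothetical, simplified EMIR counterparty classification."""
    if profile["is_financial_entity"]:
        return "FC"
    if profile["above_clearing_threshold"]:
        return "NFC+"
    return "NFC-"

RULES = {"EMIR": classify_emir}

def run_classifications(profile, rules):
    """Apply every regulation's rule; undecidable cases (e.g. a
    missing attribute) become exceptions for manual review."""
    results, exceptions = {}, []
    for regulation, rule in rules.items():
        try:
            results[regulation] = rule(profile)
        except KeyError as missing_attr:
            exceptions.append((regulation, f"missing {missing_attr}"))
    return results, exceptions

# A complete profile classifies automatically...
results, exceptions = run_classifications(
    {"is_financial_entity": True, "above_clearing_threshold": False}, RULES
)
# ...while an incomplete one falls through to the compliance team.
_, manual_queue = run_classifications({}, RULES)
```

Storing the rule's inputs and output against the profile is what provides the evidential trail for auditors and regulators.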
Step 5: Push Golden Source Client Data through the Institution
Armed with an updated client / counterparty profile, the financial institution has a great opportunity to propagate this true golden source data throughout the institution, updating each of the key banking systems identified in the first step using the same upstream and downstream integration tools.
While perceived to be an arduous process, reviews of client / counterparty data and documentation can actually prove beneficial – not just from a regulatory point of view – but also from the perspective of having a golden source of data that contributes to a fuller, more accurate profile of the client / counterparty, which can lead to identifying profitable upsell and cross-sell opportunities.
Download our whitepaper on KYC and AML Client Reviews
In this paper, Joe Dunphy, Fenergo’s VP Product Management, explores the role that KYC periodic reviews have to play in the levying of fines in this area and contends that the big problem with KYC is KYD (Know Your Data).