
Watching Which Way the Wind Blows: You need good forecasting to build a good compliance program

When you check your weather app and it says there is a 30 percent chance of rain, have you ever thought about what that actually means? If it says 80 percent, do you assume it will rain (even though there is a 20 percent chance it will not)? If it says there is a 60 percent chance of rain and you see clear skies, are you disappointed or happy?
Weather forecasting is actually quite accurate and has improved significantly over the last century. Forecasting in general helps us make decisions about virtually everything we do. And understanding why we make the choices we make helps us make better ones.
Companies thinking about their compliance programs make the same types of forecasts. But the process for making those forecasts is often not very good. Regulatory guidance such as the U.S. Federal Sentencing Guidelines provides that an effective compliance program requires an organizational framework with someone in charge (usually a compliance officer or the general counsel), policies and procedures for the program, a risk assessment process, channels for training, communication and program awareness, a monitoring and audit function, and a process for responding to compliance failures through investigations.
Companies have long recognized the importance of this government guidance. Each year, compliance programs use risk assessments to try to figure out how to allocate limited resources across their programs, develop risk mitigation strategies and action plans, and create program materials such as policies, procedures and training based on the results of the assessment.
Risk assessments are not investigations of historical failures or challenges, but forward-looking forecasts that predict what lies ahead. However, many company risk assessments are fundamentally flawed: Instead of focusing on the future and tapping into the knowledge of the business, they are audits that test various components of the compliance program. External law firm risk assessments are often the worst offenders when it comes to this kind of backward-looking analysis.
Effectively assessing risk involves forecasting – identifying stakeholders from different business functions who are knowledgeable about various risks and asking them to project the likelihood and impact of those risks. Effective compliance programs use the results of this forecasting to focus resources on the highest risks. If the stakeholders tell you that the environmental risk of emissions releases is particularly high, with a serious impact on the company, the compliance program may partner with the business to mitigate the risk by implementing controls and investing in equipment.
Philip Tetlock is the dean of this kind of forecasting. In 2011, Tetlock began competing in a tournament funded by the Intelligence Advanced Research Projects Activity (IARPA) in which forecasters were challenged to answer the types of geopolitical and economic questions that U.S. intelligence agencies pose to their analysts. Over several years of participating in the IARPA tournament, Tetlock’s team decisively beat the other teams and the intelligence community’s own analysts. His seminal book, “Superforecasting: The Art and Science of Prediction” (written with Dan Gardner), discusses how he did it. The book begins with what should be obvious to all of us: “We are all forecasters.” Each of us, Tetlock recognizes, engages in forecasting in our daily activities by making decisions based on how we think the future will unfold. Some of us are just better at it than others. But, according to Tetlock, forecasting is a skill that can be cultivated.
In his book, Tetlock lays out a number of principles that can help companies with risk assessments and forecasting, along with the traits that make people “superforecasters.” Superforecasters are important for compliance because they are the stakeholders who will help you identify which compliance risk/program areas the company should focus on, based on their experience with the business, regulators and operating environment.
Based on Tetlock’s methodology, here’s what companies should do:
1. Find the right people or forecasters.
To analyze risks across different subject matters, it helps to have subject matter expertise (or experts), but it’s also important to have people with day-to-day operational experience dealing with the subject matter and its risks. For instance, to assess the corruption risk of using third parties outside the United States, the risk assessment process could involve legal/compliance, supply chain, audit, finance and operations. Each of these groups may express different views about the types of risks associated with third parties, and their experiences will vary significantly. The finance group may tell you about the payment history and controls for third parties. Operations and supply chain may provide insight into how third parties are used and why. Audit may tell you about some of the historical issues it has had with the financial controls. And legal/compliance can provide insight into the legal risks and compliance framework. Consolidating these views allows you to create a mosaic that tells you something about the probable risk of a compliance failure involving third parties.
2. Ask the right questions.
When collecting information on future risks, the questions you ask should be grounded in the historical data you’ve collected about the compliance program. For instance, if you are evaluating health and safety program risks, you may want to know the headcount of different operating units, the turnover rate, training statistics, and accident rates and costs. You may also want to review the controls and policies the different operating units have implemented, historical health and safety issues, and recent or anticipated regulatory changes. The questions you ask the forecasters should draw out their expertise and experience in the subject matter you are evaluating.
3. Focus on sub-risks.
Each risk assessment topic (e.g., corruption, privacy, trade, health and safety, environmental) has different sub-risks, or specific risk areas, that pose a compliance risk. For instance, under health and safety, a company may face several sub-risks, including failure to comply with fire prevention laws and failure to maintain safe equipment. The risk assessment process should provide each forecaster with a template to predict which sub-risk or specific issue is most likely to occur within a given subject matter. For example, the forecasters could be asked to use a category scale (a numerical scale with corresponding descriptions) to score, from 1 to 3, both the likelihood of a specific event occurring and the reputational and financial impact that event would have on the organization. The scoring would allow the company to identify where it should focus its resources (assuming each forecaster was given consistent instructions and was equally skilled at producing an accurate result). A minimal sketch of how such scores might be consolidated appears after this list.
4. Get external as well as internal views.
Each organization has its own view of risk and of the probable likelihood and impact of compliance failures. But external stakeholders may have different perspectives based on information unknown to the company. An outside law firm may know of potential regulatory changes that could affect a risk forecast. Another company may have experienced a compliance failure whose lessons, if missed, would make the company’s own forecast less accurate. Benchmarking and collecting data from outside the company is an important part of calibrating risk forecasts to ensure they are effective.
5. Account for bias.
Each of us uses our own life experience to make forecasts. These experiences can create bias about the probable outcome of events. Human nature causes us to grab onto our first inclination based on our experience and gather evidence to support our view – ignoring evidence that undercuts it. Tetlock calls this “confirmation bias.” To account for this bias, forecasters should seek out evidence that cuts both ways. We are susceptible to other biases as well. For example, William Poundstone, in his book “Priceless: The Myth of Fair Value (and How to Take Advantage of It),” writes about “anchor bias,” showing how an anchor can change our predictions (for instance, the correlation between a jury award and the damages requested by a plaintiff’s attorney). In the risk assessment context, training or other efforts focused on a particular subject matter may create an anchor for that subject matter that skews the risk assessment results. If, each year, the compliance department provides training to all employees that discusses the billions of dollars in penalties the Department of Justice collects in corruption cases annually, that information may affect the forecast for the company’s Foreign Corrupt Practices Act program relative to other subject matter programs – even though another subject matter may pose greater risk for the company, given the maturity of the FCPA program and the controls already in place. Understanding and minimizing bias is critical for effective forecasting.
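To make the scoring described in item 3 concrete, here is a minimal sketch of how a company might consolidate forecaster scores across sub-risks. The sub-risk names, the individual forecaster scores and the simple average-likelihood-times-average-impact ranking are hypothetical illustrations under the 1-to-3 category scale discussed above, not a prescribed methodology.

```python
# A minimal, hypothetical sketch of consolidating risk assessment scores.
# Sub-risk names, forecaster scores and the 1-3 scales below are invented
# for illustration; they are not drawn from any actual assessment.

from statistics import mean

# Each forecaster scores a sub-risk on two 1-3 category scales:
# likelihood (1 = unlikely, 3 = likely) and impact (1 = minor, 3 = severe).
scores = {
    "failure to comply with fire prevention laws": [(2, 3), (3, 3), (2, 2)],
    "failure to maintain safe equipment":          [(3, 2), (2, 2), (3, 3)],
    "inadequate chemical handling procedures":     [(1, 3), (2, 3), (1, 2)],
}

def rank_sub_risks(scores):
    """Average the forecasters' likelihood and impact scores for each
    sub-risk, then rank sub-risks by the product of the two averages."""
    ranked = []
    for sub_risk, entries in scores.items():
        avg_likelihood = mean(likelihood for likelihood, _ in entries)
        avg_impact = mean(impact for _, impact in entries)
        ranked.append((sub_risk, avg_likelihood * avg_impact))
    return sorted(ranked, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for sub_risk, score in rank_sub_risks(scores):
        print(f"{sub_risk}: {score:.2f}")
```

The product of averaged likelihood and impact is only one simple way to combine the scores; a company might instead weight certain forecasters, plot the results on a heat map, or keep likelihood and impact separate. The point is that consistent, structured scoring makes the results comparable across sub-risks and forecasters.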
Superforecasting does not require a high IQ. It requires the ability to break down big questions into their component parts, find the right balance between inside and outside views, and own your failures and successes – that is, figure out where you went wrong and what you did right. According to Tetlock, superforecasters reconcile competing subjective judgments while striking the right balance between under- and overreacting to new evidence.
Superforecasters are key to effectively assessing compliance risk. Getting it right could yield significant savings down the road by helping the company avoid a compliance failure. At the very least, it gives the company a solid foundation on which to explain to regulators its rationale for focusing on specific compliance areas over others. As with weather forecasting, if we understand the methodology, it is easier to understand the model. And, we hope, easier to live with the results.
