Research design can have a huge impact on how policies are evaluated and how new policies or policy changes are enacted in response to research findings. The conclusions drawn from research have enormous potential to inform later decisions on banking regulation, for instance, an area of financial services where consumers feel the impact most directly.
Using the Dodd-Frank financial regulation law as a test case, we can see how survey data and empirical analysis yield either conclusive or equivocal answers to pressing questions, depending on the quality of the research design.
After the 2008 mortgage crisis and the ensuing downward spiral that threatened the world economy, lawmakers acted quickly to enact new regulations intended to prevent the next disaster. The result, the “Dodd-Frank Wall Street Reform and Consumer Protection Act,” was signed into law on July 21, 2010 and has triggered no small amount of controversy ever since.
Accusations that the act has either gone too far or not far enough fly from both sides of the aisle. Unfortunately, these assertions are more often than not backed by purely anecdotal evidence or misrepresented data from problematically small sample sets.
What the American public desperately needs is more robust evidence that can guide policymaking, ensuring that financial regulations are effective at preventing bad practices without hampering the day-to-day financial activities that drive the economy and keep American households both content and prosperous.
At the center of the issue are the bankers themselves, particularly small banks, which have the biggest impact on local economies because of their ability to lend to aspiring home and business owners. Even Federal Reserve Chair Janet Yellen acknowledged that “the regulatory burdens that they face have been really quite high and they're struggling with it.”
The problem is that while everyone recognizes the potential for Dodd-Frank to unintentionally encumber small community banks, language like one Colorado representative's claim to have “sat down with community banks in my district” does little to pinpoint exactly where the bottlenecks occur or how they might be resolved.
One survey distributed by George Mason University attempted to gather data on the issue, but its design was admittedly flawed, however confidently the results were presented. “Because the method that we used to deliver the survey was not random, the survey results may not be representative of the general population of small banks,” the authors of the report candidly disclosed. They also acknowledged that “our respondents may be more attuned to regulatory issues than other small banks, because they learned about our survey largely from trade associations.”
Another potential design flaw was the survey's length: its ninety-six questions “may have dissuaded some potential participants.” One bank even quipped, “we are too busy working on Dodd-Frank to fill out your survey.”
While the survey no doubt gathered informative data, its reliance on self-reporting and its cross-sectional design gave no true measure of how these small banks were being affected over time. None of these points is meant to criticize the researchers' hard work; they merely illustrate that survey design and research design can dramatically skew the quality of the data a study produces.
For comparison, consider a study from the non-profit group Financial Services Assessment on the impact of mobile banking vans in rural Malawi. Its research design encompassed field data collection, longitudinal observation of randomly selected income-earning households, and explicit recognition of the factors that could distort the data. Separate from the quality of the final data were the contributions the researchers made to field research and the extensive documentation they provided so that similar studies could attempt to reproduce their results.
In the end, the research emphasis shifted from asking “do banking vans improve the financial stability of rural communities?” to the realization that aggregate transaction data can allow local banks to more accurately “view the financial preferences and behaviors of consumers” and, consequently, “develop better products and delivery mechanisms.”
Banks both large and small should recognize the importance of such analytical studies. Careful data gathering and deliberate research design yield far more valuable troves of knowledge. That knowledge can drive policy decisions in the private sector while also giving public policymakers the evidence they need to uphold the best financial interests of the American people.