The Evidence on the Evidence: How Do You Know What (and Whom) to Heed?

July 31, 2015

There’s a reason we refer to our strategy for building durable, long-term wealth as evidence-based investing. There are a number of other terms we could use instead: Structured (getting warm), low-cost (definitely), passive (sometimes), smart beta (maybe), indexing (close, but) … the list goes on. But because the evidence is at the root of what we do, we believe it should also be at the root of what we call it.

That said, there is a great deal of evidence to consider, and some purported findings seem to contradict others. How do we know which evidence to take seriously and which to dismiss as a false lead?

Evidence-Based Investing: A Never-Ending Story
First, it’s worth noting that academic inquiry is never fully final, nor does it allow for absolutes in our application of it. As University of Chicago professor of finance and Nobel laureate Eugene F. Fama has said, “You should use market data to understand markets better, not to say this or that hypothesis is literally true or false. No model is ever strictly true. The real criterion should be: Do I know more about markets when I’m finished than I did when I started?”

With this caveat, there are still a number of important qualities to seek when assessing the validity of a body of academic evidence.

A Disinterested Outlook – Rather than beginning with a point to prove, ideal academic inquiry is conducted with no agenda other than to explore intriguing phenomena and report the results. It is then up to us practitioners to apply the useful findings.

Robust Data Analysis – The analysis should be free from weaknesses such as data covering too short a period or drawn from too small a sample; survivorship bias (wherein returns from funds that went under during the analysis period are disregarded); apples-to-oranges benchmark comparisons; or plain, old-fashioned faulty math. (A toy simulation of survivorship bias appears after this list.)

Repeatability and Reproducibility – Results should be repeatable in additional studies across multiple environments and timeframes. This helps demonstrate that the results weren’t just random luck or “data mining.” As AQR fund manager and founding principal Clifford Asness describes it, if a researcher discovered an empirical result only because she tortured the data until it confessed, one should not expect that result to repeat out of sample. (The second sketch below shows this in miniature.)
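
To make survivorship bias concrete, here is a minimal Python sketch. Everything in it is invented for illustration (the fund count, the 15% volatility, and the 30% closure threshold are assumptions, not figures from any study): funds with zero true skill are simulated, the ones whose losses would have closed them are dropped, and the two averages are compared.

```python
import random

random.seed(42)

N_FUNDS, N_YEARS = 1000, 10  # hypothetical universe, not real data

# Every fund's annual returns are pure noise: 0% true mean, 15% volatility.
returns = [[random.gauss(0.0, 0.15) for _ in range(N_YEARS)]
           for _ in range(N_FUNDS)]

def survives(path):
    """A fund 'survives' if its cumulative value never falls below 70%."""
    value = 1.0
    for r in path:
        value *= 1.0 + r
        if value < 0.70:
            return False
    return True

all_avg = sum(r for path in returns for r in path) / (N_FUNDS * N_YEARS)
survivors = [path for path in returns if survives(path)]
surv_avg = (sum(r for path in survivors for r in path)
            / (len(survivors) * N_YEARS))

print(f"All funds:      {all_avg:+.2%} average annual return")
print(f"Survivors only: {surv_avg:+.2%} average annual return")
# Survivors look better even though no fund has any skill: an analysis
# that ignores dead funds overstates what investors actually earned.
```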
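And a companion sketch of “data mining,” again with made-up numbers: among many backtested strategies that are pure noise, the best in-sample performer always looks impressive, yet it has no edge on data it has never seen.

```python
import random

random.seed(0)

N_STRATEGIES, N_MONTHS = 200, 120  # hypothetical counts for illustration

def noise_returns(n):
    """Monthly returns with zero true edge: 0% mean, 4% volatility."""
    return [random.gauss(0.0, 0.04) for _ in range(n)]

# Backtest 200 strategies that are all pure noise, then keep the "best."
in_sample = [noise_returns(N_MONTHS) for _ in range(N_STRATEGIES)]
best = max(range(N_STRATEGIES), key=lambda i: sum(in_sample[i]))
best_mean = sum(in_sample[best]) / N_MONTHS

# Trade the winner on fresh data it has never seen.
out_of_sample_mean = sum(noise_returns(N_MONTHS)) / N_MONTHS

print(f"Best of {N_STRATEGIES} in-sample:  {best_mean:+.2%} mean monthly return")
print(f"Same strategy out-of-sample: {out_of_sample_mean:+.2%} mean monthly return")
# The in-sample "winner" is merely the luckiest draw; out of sample it
# reverts toward zero, which is why results must repeat across
# independent samples and timeframes before they deserve trust.
```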