Consider the famous Fidelity Magellan Fund. Peter Lynch managed it from 1977 to 1990, during which time the fund's assets grew from $20 million to $14 billion, just in time for the fund to go on to experience more losing years than winning years. More importantly, Lynch reportedly beat the Morningstar-specified benchmark for 14 years. As you can see, however, starting in 1991, 14 of 21 years resulted in a negative alpha, and only 4 of those years had respectable alphas. The $14 billion of investors' assets did not fare well. Based on the overall track record of Magellan, 23 years of returns data would be required to be 95% confident that skill explains the returns. Can you get 23 years of data from your active manager?
If Peter Lynch knew how to beat the market, why couldn't he teach his successors? Or, was it just luck? Why is it that only Peter Lynch and Bill Miller achieved such results if mispriced stocks are so easy to identify and exploit? How many other managers attempted the same feat? It is far more likely that markets are very efficient and extremely hard to beat with statistical significance.
We also took a look at Warren Buffett's (Berkshire Hathaway's) alpha since 1980, relative to the Russell 1000 Value Index. It is important to note that many factors other than Buffett's stock-picking acumen have determined the returns received by Berkshire investors, particularly the extent to which Berkshire's market price has, over time, reflected the market's expectation of the value added by Buffett's undeniable capital-allocation ability, not to mention his Rolodex, which has facilitated some highly lucrative deals such as Berkshire's investment in Goldman Sachs during the depths of the 2008 financial crisis. One interesting note is the string of large excess returns prior to 1999: 30%, 40%, two of 35%, and two of 60%. None of these huge excess returns over a benchmark are repeated after 1999; Buffett's value at the helm may have been fully priced into Berkshire's share price since that time. Splitting the overall time period into two approximately equal pieces illustrates this point: in the second half, the outperformance relative to the Russell 1000 Value Index is statistically insignificant. This can be seen by clicking on the buttons at the top of the chart. The first button reveals a statistically significant alpha for the entire period, the second indicates that almost the entire alpha occurred in the first half, and the third shows that the alpha of the second half is not even close to statistically significant.
In calculating the t-stat, the first step is to determine the excess returns the manager earned above an appropriate benchmark. Then we determine the regularity of the excess returns by calculating the standard deviation of those returns. Based on these two numbers, we can then calculate how many years we need to support the manager’s claims.
Of the 80 fund managers who had positive excess returns, the average excess return was 0.84% and the standard deviation was 5.64%. To estimate the years needed for statistical significance, you can find the intersection of the average excess return (about 0.8%) and standard deviation (about 5.6%) in the chart below (see the data box for point estimates). Then follow the line out, and you can see that 180 years of returns data are needed to establish skill as the reason for the higher returns. The calculator below the chart provides the exact number of years needed. Obviously, no manager has ever managed a fund for 180 years; therefore, we are unable to accept any of these managers' claims. Alas, managers are mere mortals.
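The arithmetic behind that figure is easy to reproduce. Rearranging the t-stat formula for the sample size gives N = (t × standard deviation / mean)², and plugging in the averages above recovers the roughly 180 years quoted. Here is a minimal sketch in Python (the function name and the t-stat threshold of 2 are illustrative choices, not taken from any IFA tool):

```python
def years_needed(mean_excess, std_dev, t_stat=2.0):
    """Years of annual excess returns needed to reach a given t-stat.

    Solves t = (mean / std) * sqrt(N) for N.
    """
    return (t_stat * std_dev / mean_excess) ** 2

# Average surviving manager in the sample: 0.84% excess return, 5.64% std dev
n = years_needed(0.84, 5.64)
print(f"{n:.0f} years")  # prints "180 years"
```

A smaller excess return or a noisier return stream both push the required sample size up quadratically, which is why even seemingly respectable alphas can demand a century or more of data.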
Figure 3-6D22 - Three Factors of Performance Measurements
The Figure below shows the formula to calculate the number of years needed for a t-stat of 2. We first determine the excess return over a benchmark (the alpha) then determine the regularity of the excess returns by calculating the standard deviation of those returns. Based on these two numbers, we can then calculate how many years we need (sample size) to support the manager’s claim of skill.
Figure 3-6D23 - Sample Size Calculator for Active Manager Alphas
As you see in the calculator above, the t-stat is held at 2. Understanding why a t-stat of 2 or more is considered statistically significant is important, but it is vital simply to grasp why bigger t-stats mean the value is more "reliably" different from zero. To begin, refer to the following equation defining a t-stat:
t-stat = (average × √N) / standard deviation
Decomposing the elements of this equation can demonstrate what leads to bigger t-stats and help instill the intuition behind why a bigger t-stat implies that the observed value is less likely to have a true value of zero.
“Average” is the average of all observations in the sample. This parameter is in the numerator, so as the average increases, so does the t-stat. To illustrate, consider the two data series below:
Series A: 1, 2, 1, 2, 1, 2, 1, 2, 1, 2
Series B: 9, 10, 9, 10, 9, 10, 9, 10, 9, 10
Both have the same number of observations and the same standard deviation. But series A has an average of 1.5 and series B has an average of 9.5. As the average increases, so does the t-stat, meaning it is less likely the true average from series B is actually zero.
The intuition here is that a mean further from zero makes it less likely that the true value is in fact zero.
“√N” is the square root of the number of observations. This parameter is also in the numerator, so as the number of observations increases, the t-stat does as well. Consider the two data series below:
Series A: 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2
Series B: 1, 2, 1, 2
Both have the same average of 1.5 and the same standard deviation of 0.5, but series A has 20 observations and series B only has 4. As the number of observations increases, so does the t-stat, and the observed average becomes more reliable. In this example, series A has a t-stat of 13.4 and series B has a t-stat of 6 due to the difference in the number of observations. This means series A is more reliably different from zero than series B.
The intuition here is that a larger number of observations results in more reliability.
“Standard deviation” is a measure of how much the individual observations in the sample vary from the average. This parameter is in the denominator, so as the standard deviation decreases, the t-stat increases. Consider the two data series below:
Series A: 9, 10, 9, 10, 9, 10, 9, 10, 9, 10
Series B: 18, 0, -18, 32, 10, -20, 40, 15, 8, 10
Both have the same 9.5 average and the same number of observations, but series A has much less volatility and a lower standard deviation than series B. As the standard deviation increases, the t-stat decreases, so the average from series B is less reliably different from zero than the same average from series A. Said differently, there is a greater likelihood the 9.5 average from series B happened by chance due to the volatility of the data series.
The intuition here is that a more volatile data series results in a mean that is less reliably different from zero. Here is a calculator to determine the t-stat. Don't trust an alpha or average return without one.
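The three comparisons above can be checked directly. The Python sketch below computes the t-stat from the formula given earlier; note that it uses the population standard deviation, which is what reproduces the 13.4 and 6 quoted in the observation-count example (the helper name `t_stat` is our own):

```python
import math

def t_stat(series):
    """t-statistic of the sample mean against zero: (mean * sqrt(N)) / std.

    Uses the population standard deviation, matching the worked examples.
    """
    n = len(series)
    mean = sum(series) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    return mean * math.sqrt(n) / std

# 1) Higher average -> higher t-stat (same N, same std dev)
low_mean  = [1, 2] * 5      # mean 1.5
high_mean = [9, 10] * 5     # mean 9.5
print(t_stat(low_mean), t_stat(high_mean))

# 2) More observations -> higher t-stat (same mean, same std dev)
long_run  = [1, 2] * 10     # 20 observations, t-stat ~13.4
short_run = [1, 2] * 2      # 4 observations, t-stat 6
print(t_stat(long_run), t_stat(short_run))

# 3) Lower volatility -> higher t-stat (same mean, same N)
calm     = [9, 10] * 5
volatile = [18, 0, -18, 32, 10, -20, 40, 15, 8, 10]  # mean is also 9.5
print(t_stat(calm), t_stat(volatile))
```

Running this shows the volatile series, despite its 9.5 average, has a t-stat below 2: exactly the situation in which an impressive-looking mean cannot be reliably distinguished from zero.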
The Fama and French risk premiums are good examples of the use of the t-stat. Based on the long-term data, there has been an excess return for exposure to these risk factors, referred to as the US Equity Premium (return of the total market minus the risk-free 30-day T-bill), the US Value Premium (high book-to-market minus low book-to-market), and the US Size Premium (small companies minus big companies). An important consideration for investors is the likelihood that these risk "premiums" are actually zero (i.e., there is no premium) despite a historical mean that is positive. As discussed, the starting point is calculating a t-stat for each return series as outlined in Table 1 below. The t-stats in Table 1 are all considered statistically significant (i.e., greater than 2), and we can be almost 99% sure that all three risk premiums are positive, with only the SMB t-stat being marginally lower than the required 2.6 for that level of significance.
All three data series have the same number of observations, so differences in their t-stats will be a function of different means and standard deviations, as illustrated in Table 2 below.
As you can see, the equity premium is the most reliable (i.e., different from zero) despite having the highest volatility because it has a significantly higher mean to go with it. Conversely, the size premium is less reliable than the value premium despite having nearly the same volatility because it has a lower historical mean.
In “Challenge to Judgment,” Paul Samuelson dismisses investors who claim they can find benchmark-beating managers by saying, “They always claim that they know a man, a bank, or a fund that does do better. Alas, anecdotes are not science. And once Wharton School dissertations seek to quantify the performers, these have a tendency to evaporate into thin air—or, at least, into statistically insignificant t-statistics.”
Although a few managers will occasionally appear to have reliably delivered alpha, IFA cautions investors that the fact that there are so many managers virtually guarantees that there will be some who appear to have demonstrated true skill. Unfortunately, the number of such managers is no higher than what we would have if all of them were monkeys throwing darts at the Wall Street Journal. Two studies that elegantly address this point are:
Rob Silverblatt of U.S. News and World Report spoke with Eugene Fama about the implications of the “Luck versus Skill in the Cross Section of Mutual Fund Alpha Estimates” study conducted by Fama of the University of Chicago and Kenneth French from Dartmouth, which casts serious doubt on managers’ ability to generate alpha. Here is his interview:
Why did you decide to study luck?
[Fama] "This is the basic problem. You have several thousand mutual funds out there. When you look at the results over their whole histories, there’s a huge range of results. The winners are big winners and the losers are big losers. So the problem is to judge what the world would look like, what the cross section of performance would look like, if there were no skill in the population. That’s what this paper does, it constructs experiments that maintain the characteristics of mutual fund returns, but we set them up knowing that there is really no [skill]."
So just how lucky are fund managers?
[Fama] "If you look at the top 10 percent, they’re [comfortably] outperforming their benchmarks. …Those are the people that people would write books about. But it turns out that if you look at the distribution that you’d expect by chance, you’d expect more of them out there."
As for the ones that do get good returns, does that mean they’re good stock pickers?
[Fama] "There are always people on the top; that’s the point. People make the wrong inference. There are people that are big winners, but there are fewer of them than you’d expect than if they were just lucky."
Can any managers truly be counted on to add alpha through skill alone?
[Fama] "You can’t tell from the net returns. Now if you give them back their fees and expenses and just look at their portfolio returns, then you find some evidence that there are funds out there that might have some skill, but it’s absorbed in fees and expenses."
What do your findings mean for the role of active management?
[Fama] "Don’t be misled by past performance. There’s lots of other evidence that shows that performance doesn’t persist--that the past winners aren’t the future winners and that basically what happens after you rank them as winners is random. And this is consistent with that: It’s basically saying that the winners are just lucky."
Figure 3-6E illustrates the results of this study. This article from Forbes.com also discusses this study.
Even professional stock pickers can fall hard. Bill Miller, chief investment officer of Legg Mason Capital Management and portfolio manager of the Legg Mason Capital Management Value Trust and Value Equity Strategy, lost his Midas touch after a long stretch of beating the S&P 500. On November 17, 2011, the company announced that Miller would step down effective April 30, 2012. A former Morningstar "Fund Manager of the Decade," Miller seemed to glitter throughout the 1990s only to have his sparkle go dim toward the end of the following decade. His fund grew from $750 million in 1990 to more than $20 billion in 2006; as of November 16, 2011, total assets were down to $2.8 billion. His Legg Mason Value Trust Fund (LMVTX) is portrayed in Figures 3-A, 3-B and 3-C, showing the risk and return results of his fund for three different time periods, compared to various indexes and index portfolios: Figure 3-A for the decade of the 90s through 2000; Figure 3-B for the ten years from 2001 to 2010; and Figure 3-C for the 28 years and 8 months since the inception of the LMVTX fund.
As the first chart clearly shows, LMVTX did earn higher returns than the S&P 500 and the index portfolios during the 90s, but with significantly higher risk—a risk that eventually caught up with Miller. In a January 6, 2005 article in The Wall Street Journal, Miller accounted for his winning streak saying, “As for the so-called streak, that’s an accident of the calendar. If the year had ended on different months it wouldn’t be there. At some point, mathematics will hit us. We’ve been lucky. Well, maybe it’s not 100% luck—maybe 95% luck.”
Figure 3-B shows just how hard the mathematics did hit Miller. Although his "so-called streak" had him outperforming the S&P 500 for a 10-year period, Miller's subsequent 10-year returns from 2002 to 2012 pale in comparison to the indexes and index portfolios shown. Miller's outperformance and subsequent underperformance were the result of his excessively risky bets on concentrated investments among highly correlated stocks. While equity index portfolios invest across many asset classes and in as many as 12,000 companies in 40 different countries, Miller's strategy was to "place big bets on stocks other investors feared," as a Wall Street Journal article, "The Stock Picker's Defeat," notes. According to the December 2008 article, "Mr. Miller was in his element [a year ago] when troubles in the housing market began infecting financial markets. Working from his well-worn playbook, he snapped up American International Group Inc., Wachovia Corp., Bear Stearns Cos. and Freddie Mac. As the shares continued to fall, he argued that investors were overreacting. He kept buying." The article continued, "What he saw as an opportunity turned into the biggest market crash since the Great Depression. Many Value Trust holdings were more or less wiped out. After 15 years of placing savvy bets against the herd, Mr. Miller had been trampled by it." Miller stated, "The thing I didn't do, from Day One, was properly assess the severity of this liquidity crisis... I was naïve… Every decision to buy anything has been wrong…It's been awful." Not only did the assets themselves plummet, but investors bailed on the fund, pushing its assets down from its apex of $21 billion to around $4.2 billion.
At one point, Miller said, “The S&P 500 is a wonderful thing to put your money in. If somebody said, ‘I’ve got a fund here with a really low cost, that’s tax efficient, with a 15 to 20-year record of beating almost everybody, why wouldn’t you own it?’”
Figure 3-C shows that over the lifetime of the LMVTX, several indexes and index portfolios outperformed the LMVTX with lower risk than the LMVTX, and the more appropriate benchmark of U.S. Large Cap Value beat Miller with less risk.
Miller’s so-called streak was based on bad benchmarking. LMVTX was far riskier than the S&P 500, a reality most investors certainly did not understand—especially investor Peter Cohan who lamented to the Wall Street Journal, “Why didn’t I just throw my money out the window and light it on fire?”
Morningstar ranked Miller’s fund as one of the top 3 losers for fund performance in June 2011. Bloomberg News reports that Russel Kinnel, Morningstar director of mutual fund research said, “People assume because certain managers have had good streaks that they are always going to be a step ahead of the market. It never works out that way.”
As a final point in the story of Bill Miller, here once again is the alpha chart for the Legg Mason Value Trust. The 401 years of data needed to establish statistically significant alpha tell the whole story.
This is a lesson for long-term investors who pick fund managers whom they believe are skilled in stock picking. In this case, the manager is leaving the fund after a roller coaster 30-year career. It might be a good idea to put a warning on the Legg Mason Value Trust prospectus reminding investors that luck is not a reliable source of returns in the future – maybe something along the lines of the health warning on a package of cigarettes.
See this article for more lessons from Bill Miller: don't concentrate, don't style drift, and nobody can beat a risk-adjusted market over long periods. Invest right, sit tight. Also see the Quote of the Week #45.
The studies mentioned above represent only a sampling of the mountain of research that has been stockpiled over the years. The impact of the research can best be summed up in the words of Henry Blodget, former securities analyst turned financial journalist: "Academics have essentially proved that active fund management for the fund customer is a loser's game. The vast majority of active funds underperform passive benchmarks. So, the vast majority of customers of active funds pay billions of dollars in exchange for, at best, nothing."
All of the chances above are quite poor and represent unreasonable odds, given that the average actively managed mutual fund costs about three times as much as an index fund (1.5% versus 0.5%). So you pay three times the cost with only a 3% chance of winning; other studies indicate a zero chance of winning. As Larry Swedroe has said, investors who buy actively managed funds should wear a shirt that says, "I can't add." Essentially, investors are being fooled by randomness and by the poor statistical information provided by active managers. A better understanding of statistics will improve your ability to ignore the siren songs of active management and better manage your investment portfolio.
If the average index fund charges 0.25% to 0.5% and the average active mutual fund charges 1.5%, there is already a built-in cost to active management even before taking into account that the average active fund underperforms its respective index. What exactly are investors paying for? According to hundreds of studies, it appears that investors are paying for nothing more than false hope. They are just speculating, and the expected return of speculation is zero, minus the costs of speculating. This means that as a group, active investors obtain the return of the market they play in, minus their cost of playing. As Nobel Laureate William Sharpe asks, "Why pay people to gamble with your money?"
Attempting to predict the outcome of a coin toss is a futile endeavor. Unless the coin is rigged, the only way to make a correct prediction is to guess blindly. Unfortunately, it is with the same disregard for investors' financial health that financial institutions and the media perpetuate the false idea that some people have a gift or method for predicting future stock price gyrations.
In a study by Walter Good and Roy Hermansen, a hypothetical coin-flipping experiment was compared to mutual fund manager performance. Three hundred college students were asked to guess the outcome of 10 coin tosses, and their guesses were tabulated and charted. The performances of 300 mutual fund managers were then tabulated for 10 years (1987 to 1996) from Morningstar® Principia®. See Figure 3-7.
The number of years that the mutual fund managers were rated in the top 50% of fund managers was then counted and compared to the ability of college students to correctly guess the outcome of the flip of a coin. The results were nearly identical.
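A rough way to see why the two distributions line up is to simulate the null hypothesis. The Python sketch below (our own construction, not the Good-Hermansen methodology itself) tallies 300 simulated students guessing 10 fair coin tosses and compares the tally to the binomial counts expected by pure chance; under a luck-only null, a manager finishing in the top half of peers in a given year is the same 50/50 bet:

```python
import math
import random

random.seed(1)  # reproducible, illustrative run

def binom_pmf(k, n=10, p=0.5):
    """Probability of exactly k successes in n fair 50/50 trials."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# 300 students each guess 10 fair coin tosses; tally the number correct
counts = [0] * 11
for _ in range(300):
    correct = sum(random.random() < 0.5 for _ in range(10))
    counts[correct] += 1

# A manager landing in the top half of peers in a year is, under the
# luck-only null, the same 50/50 bet, so the same binomial counts apply.
for k in range(11):
    print(f"{k:2d} correct: simulated {counts[k]:3d}, "
          f"expected by chance {300 * binom_pmf(k):5.1f}")
```

The simulated tallies hug the binomial expectations, which is the same pattern the study found in the managers' top-half finishes.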
An interesting point was raised by a hypothetical nationwide coin toss. In this example proposed by Warren Buffett, 225 million Americans are given one silver dollar and expected to flip it once per day, with heads winning and tails losing. After 25 consecutive days, the statistical result would be comparable to six people flipping heads for 25 days in a row. These people would be regarded as geniuses for being so masterful at flipping coins. This is nonsense, of course, but it would do well for investors to see mutual fund managers as the six masterful coin flippers rather than geniuses, gurus or all star analysts.
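Buffett's arithmetic is easy to verify: halving 225 million flippers 25 times leaves only a handful of "geniuses." A one-liner makes the point:

```python
# 225 million Americans flip once a day; on average, half survive each day.
survivors = 225_000_000 / 2 ** 25
print(round(survivors))  # prints 7 -- roughly the half-dozen "masterful" flippers
```

No skill is involved anywhere in the process, yet a handful of streaks this long is a statistical certainty given enough participants.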