Eliminate Survivorship Bias, at least for the S&P 500
The existing backtest tools in VectorVest are plagued by survivorship bias: the historical universe contains only stocks that still exist today, while stocks that were delisted, acquired, or went bankrupt are missing. This makes it impossible to get an accurate simulation of the expected future statistical performance of a strategy from historical data, because the data becomes less and less representative the further back in time the backtest goes.
Even if you restrict your backtest search universe to the 500 stocks in the S&P 500 Watchlist, that too is inaccurate. The components of the actual S&P 500 vary from year to year: the index committee periodically retires certain components and adds replacements. A toy example with made-up numbers, shown below, makes the effect of dropping the retired components concrete.
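The tickers and returns here are entirely hypothetical, but the arithmetic shows how a universe restricted to today's survivors overstates historical returns:

```python
# Toy illustration of survivorship bias (hypothetical numbers, not real data).
# Each tuple is (ticker, annual_return, still_listed_today).
universe_2005 = [
    ("AAA", 0.12, True),    # survived to today
    ("BBB", 0.08, True),    # survived to today
    ("CCC", -0.60, False),  # went bankrupt, delisted
    ("DDD", -0.35, False),  # collapsed and was removed from the index
]

# A correct backtest averages over every stock that was tradable in 2005.
true_avg = sum(r for _, r, _ in universe_2005) / len(universe_2005)

# A survivorship-biased backtest only ever sees today's survivors.
survivors = [r for _, r, alive in universe_2005 if alive]
biased_avg = sum(survivors) / len(survivors)

print(f"true average return:   {true_avg:+.1%}")    # -18.8%
print(f"biased average return: {biased_avg:+.1%}")  # +10.0%
```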
A simple way to improve the VV backtest software would be to at least fix the S&P 500 Watchlist so that its components are always accurate and vary correctly with the date of the search. That is, the S&P 500 Watchlist would include exactly the component stocks that were in the index on the selected date. This data is available from the company responsible for the S&P 500.
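As a sketch of what such a date-aware watchlist could look like internally (the table layout and sample rows here are my own assumptions, not VectorVest's data model; the authoritative constituent history would be licensed from the index provider):

```python
from datetime import date

# Hypothetical constituent-history table: (ticker, date_added, date_removed).
# A date_removed of None means the stock is still in the index today.
SP500_HISTORY = [
    ("XYZ", date(1995, 3, 17), date(2008, 9, 22)),  # removed by the committee
    ("ABC", date(2001, 7, 9),  None),               # still a member
    # ... one row per membership interval, from the index provider's data
]

def sp500_members(as_of: date) -> set[str]:
    """Return the tickers that were S&P 500 members on the given date."""
    return {
        ticker
        for ticker, added, removed in SP500_HISTORY
        if added <= as_of and (removed is None or as_of < removed)
    }

# A backtest evaluated as of 2005-06-30 should search only that day's
# membership, not today's watchlist:
universe = sp500_members(date(2005, 6, 30))
```

A backtest run "as of" a historical date would then intersect its search results with that date's membership instead of the current watchlist.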
Without these improvements, the VectorVest backtest simulations are not a reliable indication of how well a strategy performed in the past, nor can they be relied upon as an indication of statistical future performance.
In my view, this is the number one problem with the VectorVest platform. Without the ability to perform accurate backtesting, it is impossible to assess the statistical future performance of a strategy. In the best case this amounts to nothing more than gambling, and in the worst case it can mislead VectorVest subscribers into unnecessary losses on their investments.
It would be great if VV would at least improve the S&P 500 watchlist so it tracks the correct stock components over time. I am a graduate of MIT with a background in mathematics, and I would be happy to assist.
-
Cem Kaner commented
I initially dismissed this because I thought the impact would be relatively minor for the research I was doing. For example, I was comparing different VectorVest stopping rules, and I expected the impact to be similar across the different conditions (so I could ignore it). However, all of my backtests produced results that were strikingly better than I expected. I was a stock market genius! So I tried some simple backtests that I could check elsewhere, and the results from VectorVest's backtester came out so much better than what I believe were the actual real-market results that I now have no idea whether the impact of its bugs (e.g. survivorship bias) interacts differently with some conditions (e.g. different stopping rules) than with others.
As a result, I no longer use the backtester for anything. I can't figure out how to determine, in a compare-this-versus-that study, whether the differences in results come from the differences between my test conditions or from a stronger effect of the bugs on some conditions than on others. I can't find a way to trust the data enough for it to be useful.
-
Austin Andruss commented
I too am having issues. The W-O-W Winners search that was a VV model portfolio for the last couple of years has the same issues. I ran backtests all the way back to 2003 thinking I had an unbeatable strategy, only to realize that the stocks being selected in the backtest were not what would have appeared in the actual search on that historical date. This appears to be an issue with any UniSearch that filters by a watchlist, even the VV-supported watchlists that are supposed to be historical, like all the S&P lists and the NASDAQ 100.