Cesar’s First Rule

Much of what I know about the practical aspects of backtesting I learned from two of my colleagues at Connors Research, Cesar Alvarez and David Weilmunster. One particularly important lesson was Cesar’s observation that “if the results seem too good to be true, they probably are.” This situation comes up often enough that I now think of it as “Cesar’s First Rule.”

Not long ago, a client asked me to verify some very strong backtest results produced by another researcher. I began by looking through the other developer’s AmiBroker AFL code but didn’t immediately find any errors. However, as anyone who has done a significant amount of coding knows, it can be challenging to dive into someone else’s work and fully grasp all of its nuances.

Next I rewrote the strategy using my own coding templates, and sure enough, my results were not nearly as good as the other developer’s. It turned out that his code always exited at the profit target price, even when the price had declined and the trade should have ended with a stop loss! Since AmiBroker’s default behavior is to restrict prices to the high and low of the exit bar, some trades still came through as losers, but with a smaller loss than they should have had. This masked the problem, because almost anyone would have recognized that 100% winners must be an error.
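To make the failure mode concrete, here is a minimal Python sketch (not the developer’s actual AFL, and with entirely hypothetical price levels) of the broken exit logic versus a corrected version:

```python
def broken_exit_price(bar, target, stop):
    # Bug: the stop level is never checked, so every exit is priced at the target.
    return target

def clamp_to_bar(price, bar):
    # Mimics AmiBroker's default of restricting trade prices to the bar's high/low range,
    # which is what shrank the losses and masked the bug.
    return min(max(price, bar["low"]), bar["high"])

def correct_exit_price(bar, target, stop):
    # For a long trade, check the stop before the target (a conservative assumption
    # when both levels could have been touched within the same bar).
    if bar["low"] <= stop:
        return stop
    if bar["high"] >= target:
        return target
    return None  # no exit on this bar

# A bar where price fell through the stop after a hypothetical entry at 100:
bar = {"high": 99.0, "low": 95.0}
entry, target, stop = 100.0, 105.0, 96.0

buggy = clamp_to_bar(broken_exit_price(bar, target, stop), bar)
right = correct_exit_price(bar, target, stop)
print(buggy - entry)   # -1.0: a small loss that helps hide the error
print(right - entry)   # -4.0: the real stop-loss exit
```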

Sometimes even my own results are too good to be true. Recently I was working on a presentation about building adaptive strategies: quantified trading systems that use different rule parameters depending on the current market condition. I was quite pleased when the adaptive version of the strategy outperformed every variation of the static-parameter version over the same time period (2013-2016). But then Cesar’s First Rule started nagging at me. I checked all my code and all the adaptive parameters, but everything looked OK. Finally I realized that I had selected the adaptive parameters using results from the current time period (2013-2016), which is basically “cheating” by looking into the future. When I selected the parameters based on past results (2000-2012), as I should have done, my adaptive strategy did not perform nearly as well.
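The fix is simply to keep parameter selection and evaluation in separate periods. Here is a minimal Python sketch of the two approaches; the parameter names and scores are entirely hypothetical and only illustrate the look-ahead mistake:

```python
def pick_best_params(scores_by_params):
    # Pick the parameter set with the highest score in the supplied period.
    return max(scores_by_params, key=scores_by_params.get)

# Hypothetical scores (e.g., net profit) per parameter set, per period.
in_sample  = {"fast": 0.9, "slow": 1.4, "medium": 1.1}   # 2000-2012 results
out_sample = {"fast": 1.2, "slow": 0.7, "medium": 1.0}   # 2013-2016 results

# Wrong: selecting parameters from the same 2013-2016 period being evaluated
# guarantees a flattering result -- the look-ahead "cheating" described above.
cheating_choice = pick_best_params(out_sample)   # "fast"

# Right: select from the earlier 2000-2012 period, then evaluate on 2013-2016.
honest_choice = pick_best_params(in_sample)      # "slow"

print(out_sample[cheating_choice], out_sample[honest_choice])  # 1.2 vs 0.7
```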

Everyone wants to find their own silver bullet in trading. We all hope that we have the ability to make a brilliant discovery. The cold, hard truth is that in most cases, if the results seem too good to be true, there’s a mistake lurking somewhere. Thanks for the lesson, Cesar!
