Backtesting best practices

Discussion in 'Strategy Building' started by morganpbrown, Feb 5, 2022.

  1. easymon1

    Lookin good. Here's the best part: all our licenses are perpetual, which means you can buy once and use the version that you purchased forever.
     
    #11     Feb 6, 2022
    Option_Attack likes this.
  2. Yes! And you get free or lower-cost upgrades. I had Wealth-Lab for many, many years. I liked it, but I was constantly fighting its machine-key lock when I changed computers, and there was a lack of support after the company was sold. I still like WL, but it is now a paid subscription. No thanks.
     
    #12     Feb 6, 2022
  3. Q.E.D.

    I was impressed with WL very early, & made a deal with the company to produce a custom version for me, which they did. This was way before WL became successful & was sold.
     
    #13     Feb 7, 2022
  4. taowave

    For the non-programmers (moi), I found QuantShare more suitable than AmiBroker.
     
    #14     Feb 8, 2022
  5. I have a similar philosophy except:
    1. It's hard to optimize certain models or strategies over too many years and too many market regimes, even with advanced adaptation. "10 years at a minimum" feels excessive to me and would cause me to miss many powerful strategies. In my experience, 5 to 8 years is more productive for strategy/model discovery. Less than 3 years is too short and leads to over-optimized strategies that can't adapt. One can always add more history later for final validation or for multi-stage optimization.
    2. My rule of thumb is that I need at least 0.5 to 2 trades per trading day over several years, given the design complexity of the strategies I find necessary to generate consistent performance. I have not easily found success below 0.25 trades per trading day. 100 trades / (10 years * 250 days/year) = 0.04 trades/day, which is about 6 times lower than my personal lower limit on strategy reliability. I'm not saying optimizing over 0.04 trades/day is impossible; it's just beyond my personal skill.
    3. The Sharpe ratio and related figures of merit can capture the idea of "optimize my very small number of parameters to get approximately similar results across these 3 time periods." Some advanced forms of the Sharpe ratio do this nicely.
    4. I agree max drawdown is important to minimize. But I find that max drawdown behaves poorly as a figure of merit for numerical optimization (especially gradient descent), because the location of the max drawdown jumps around the history during optimization as different drawdowns compete with each other to become the maximum. That will screw up an optimizer. The good old Sharpe ratio, or simply mean(returns)/standard_deviation(returns), doesn't have this weakness: it doesn't jump around and provides more accurate gradients for optimization.
    5. To me, specifying a drawdown as a percent makes sense only in the context of an annual rate of return. I prefer to look at the ratio of max drawdown / average annual return, which is independent of leverage.
    6. I agree with the previously mentioned issues with out-of-sample testing. However, out-of-sample measurements over a population of strategies are extremely useful for building predictive models that translate in-sample measurements into predictions of strategy quality (degree of over-optimization, ability to adapt to new market regimes), which are highly useful for picking winning strategies. Typical measurements like max drawdown, Sharpe ratio, percent winners, win/loss ratio, etc. do reveal several aspects of a strategy's character and performance, but they don't address the consequences and over-optimization risks of mixing certain strategy designs with certain optimization techniques.
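
    For anyone who wants to play with these figures of merit, here is a minimal Python sketch (my own construction, not from any particular platform) of the quantities above: the mean/std Sharpe proxy, max drawdown and where it bottoms, and the leverage-independent drawdown / annual-return ratio, run on synthetic daily returns:

```python
import numpy as np

def sharpe(returns):
    """Simple per-period Sharpe proxy: mean(returns) / std(returns)."""
    return returns.mean() / returns.std()

def max_drawdown(equity):
    """Max peak-to-trough drawdown of an equity curve, plus where it bottoms."""
    peaks = np.maximum.accumulate(equity)          # running high-water mark
    drawdowns = (peaks - equity) / peaks           # fractional dip from each peak
    i = int(np.argmax(drawdowns))                  # index of the deepest trough
    return drawdowns[i], i

# Synthetic example data (hypothetical numbers, just for illustration).
rng = np.random.default_rng(0)
daily = rng.normal(0.0005, 0.01, 250 * 5)          # ~5 years of daily returns
equity = np.cumprod(1.0 + daily)

sr = sharpe(daily)
dd, where = max_drawdown(equity)
annual = equity[-1] ** (1 / 5) - 1                 # geometric average annual return
print(f"Sharpe proxy {sr:.3f}, max DD {dd:.1%} at day {where}, DD/annual {dd / annual:.2f}")
```

    Note that doubling leverage roughly doubles both the max drawdown and the annual return (for small daily returns), which is why the DD/annual ratio stays comparable across leverage levels while each input alone does not.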
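
    To illustrate point 4, here is a toy example (my own construction) of why max drawdown makes a jumpy optimization objective: two dips of nearly equal depth compete to be the maximum, and a tiny parameter change flips which one wins, so the objective's "active" region teleports across the history:

```python
import numpy as np

def equity_curve(eps):
    """Toy curve with a 20% dip early and a (20% + eps*100%) dip late."""
    return np.array([1.0, 1.2, 1.2 * 0.80, 1.5, 1.5 * (0.80 - eps)])

def max_dd_location(equity):
    """Index of the deepest peak-to-trough drawdown."""
    peaks = np.maximum.accumulate(equity)
    dd = (peaks - equity) / peaks
    return int(np.argmax(dd))

# A 2% change in the late dip flips which drawdown is the maximum:
print(max_dd_location(equity_curve(-0.01)))   # early dip deeper -> index 2
print(max_dd_location(equity_curve(+0.01)))   # late dip deeper  -> index 4
```

    The max-DD objective is discontinuous in which part of the history it "sees," while mean/std of the same returns changes smoothly with the parameter, which is the point about gradients above.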
     
    #15     Apr 3, 2022
    PursuitOfEdge and morganpbrown like this.