Dynamic Optimisation at work: I built this dashboard to get more insight into the optimisation algorithm and to track the backtest in production. Let's have a look at the poster boy of the trend following community for the last few months: COCOA. It is reassuring that the system (500k, ca. 80 futures, Trend+Carry+Skew) participated in cocoa despite it not being the cheapest contract. What I do not quite understand is the buying and selling of cocoa over very short time periods. Yes, there are reasons like higher vol and so on, but I am curious: do you also see such rapid swings in your optimised positions, or is this something I should investigate further? (For the record, I use buffering and shadow costs.)
Here's my cocoa: price with 2 trades, in and out; PnL over this period; forecasts (red line is the combined forecast) and volatility, which has indeed been increasing lately. So I missed most of the latest trend completely and only made some money in the beginning. Of course, without DO I most likely wouldn't be able to include this instrument at all due to the lack of capital, so I can't really complain. Having said that, I do think there are plenty of research opportunities in DO, i.e. I don't think we know all the subtle aspects of its behavior, and why exactly it chooses to do certain things over others. E.g. I also noticed that I missed a big trend in orange juice while the system placed other positions, and it's hard to know exactly why; figuring it out is quite difficult because of the many layers and factors involved.
IB has a Transaction Cost Analysis report where you can check the quality of your fills. I keep my own data as well, but IB's TCA is better for instruments where I don't have realtime data. My own data on realtime markets confirms the accuracy of IB's report (within a margin of error, since prices can change in the time it takes me to get a quote and then submit an order). One advantage of the Adaptive Algo is that you always get filled (assuming you don't enter immediately before the market closes or locks limit, which has never happened to me). Adverse selection risk is lower with Adaptive than with SNAP. In theory, someone could A/B test SNAP Mid against the Adaptive Algo, but I'm happy with Adaptive and will be sticking with it.
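For what it's worth, the check I run against IB's TCA report with my own data is nothing fancy; roughly something like this sketch (the column names are just what my own fill log happens to use, not anything IB provides):

```python
import pandas as pd

def slippage_vs_mid(fills: pd.DataFrame) -> pd.Series:
    """Signed slippage per fill versus the quoted mid at submission:
    positive = a cost (bought above mid / sold below mid)."""
    mid = (fills["bid_at_submit"] + fills["ask_at_submit"]) / 2.0
    side = fills["side"].map({"BUY": 1, "SELL": -1})
    return side * (fills["fill_price"] - mid)

# Average by instrument and compare against the per-instrument
# figures in IB's TCA report:
# slippage_vs_mid(fills).groupby(fills["instrument"]).mean()
```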
I banged my head against the desk all weekend to dive into the details of DO and avoid these on/off trades in "borderline" instruments. These are instruments with a very strong forecast that are so capital intensive that DO only selects them when the forecast hits 20 and volatility is normal. As soon as the forecast weakens slightly or the volatility picks up, such positions are closed and potentially reopened. Neither varying the cost buffer nor the shadow cost helped in that case. I came up with the following solution (see the sketch at the end of this post):

1. Do not start the optimisation at zero but at the rounded optimal positions, as mentioned by wopr. The greedy search may then move in both directions (adding or subtracting contracts) for an instrument, but any proposal must keep the sign of the unrounded position.

2. Do not calculate the cost of trading to the optimised position from zero. Instead, calculate the cost of trading from the prior day's portfolio to today's proposal and add that to the tracking error. I multiplied this "cost error" by 50, as suggested by Rob.

3. If you calculate the cost error by trading from yesterday to today, then you can skip cost buffering (it accomplishes the same thing).

After I made the above changes to the greedy search, COCOA looked like this. I also checked lots of other positions and could identify many fixed instances of flip-flopping positions. Tracking error also decreased. What do you think about this algorithm? Does it make sense to you?
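To make the description concrete, here is roughly what my modified greedy search does now. This is a simplified sketch, not my production code and not pysystemtrade's implementation; all names (penalised_error, per_contract_weight, etc.) are made up for illustration:

```python
import numpy as np

def penalised_error(weights, opt_unrounded, prev_weights, covariance,
                    cost_per_unit, shadow_cost=50.0):
    """Tracking error vs the unrounded optimum, plus a penalty for the
    cost of trading from *yesterday's* portfolio to the proposal."""
    diff = weights - opt_unrounded
    tracking_error = float(np.sqrt(diff @ covariance @ diff))
    trade_cost = float(np.abs(weights - prev_weights) @ cost_per_unit)
    return tracking_error + shadow_cost * trade_cost

def greedy_search(opt_unrounded, prev_weights, covariance, cost_per_unit,
                  per_contract_weight, shadow_cost=50.0):
    """Start at the rounded optimal positions and greedily add or subtract
    one contract at a time; a proposal may never flip a position to the
    opposite sign of the unrounded optimum."""
    best = np.round(opt_unrounded / per_contract_weight) * per_contract_weight
    best_err = penalised_error(best, opt_unrounded, prev_weights,
                               covariance, cost_per_unit, shadow_cost)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for step in (per_contract_weight[i], -per_contract_weight[i]):
                candidate = best.copy()
                candidate[i] += step
                # proposal must keep the sign of the unrounded position
                if candidate[i] * opt_unrounded[i] < 0:
                    continue
                err = penalised_error(candidate, opt_unrounded, prev_weights,
                                      covariance, cost_per_unit, shadow_cost)
                if err < best_err - 1e-12:
                    best, best_err, improved = candidate, err, True
    return best
```

The point of computing the cost term against prev_weights is that standing still is free, so every proposed trade has to earn back its cost times the shadow cost, which is what kills the one-day flip-flops.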
Sounds reasonable, I guess (by the way, I think it's a given that costs should be calculated for trading from the current to the proposed portfolio, not from zero to the proposed one; that's how I coded it anyway, unless I'm missing something). But what about the total number of trades and overall costs: did they stay roughly the same when starting from the rounded instead of the zero portfolio?
That sounds like a really useful tool! Would you mind sharing the dashboard, or could you possibly describe how you built it? Any insights or details you could provide would be greatly appreciated by those of us looking to improve DO. Thank you!
The dashboard itself is integrated in my stack and therefore quite individual, but you can easily build it yourself: the dashboard software is Grafana, and as a database I use the time series database InfluxDB. You simply collect all the relevant information during a backtest and dump it to the DB afterwards (see the sketch at the end of this post). You could argue that all of this is also possible directly in your research environment, and you are right. The beauty of dashboards is that you do not have to interact with your research environment at the code/command line level. Highly useful for all the stuff you want to know on a regular basis.

Trades and costs went down a little bit and were more than offset by the smaller tracking error. I still have to play around with it a little further. One idea is to not start optimising from the rounded optimal position but directly from the prior day's portfolio. I'm not quite sure whether I then have to change the cost attribution; I have to test it.
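As a rough idea of the "dump it to the DB" step, assuming InfluxDB 2.x and the official influxdb-client Python package (URL, token, org and bucket names are placeholders):

```python
from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

def dump_series_to_influx(series, measurement, instrument,
                          url="http://localhost:8086", token="MY_TOKEN",
                          org="my-org", bucket="backtests"):
    """Write a datetime-indexed pandas Series as one point per timestamp,
    tagged with the instrument code so Grafana can filter and group by it."""
    with InfluxDBClient(url=url, token=token, org=org) as client:
        write_api = client.write_api(write_options=SYNCHRONOUS)
        points = [
            Point(measurement)
            .tag("instrument", instrument)
            .field("value", float(value))
            .time(timestamp, WritePrecision.S)
            for timestamp, value in series.items()
        ]
        write_api.write(bucket=bucket, record=points)

# e.g. after a backtest run (the series name here is whatever your stack uses):
# dump_series_to_influx(optimised_position_series, "optimised_position", "COCOA")
```

Grafana then just queries the bucket, so the dashboard panels never touch the research code.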
Optimal rounded seems to make sense, but personally I would want to investigate this in isolation before implementing it.

Greedy search in both directions... something about this makes me feel uncomfortable. When I was playing with the initial kernel of DO I did experiment with allowing this, but it made the code significantly more complex. Although I can see why you can combine it with optimisation beginning at rounded positions. Again, I'd want to thoroughly understand this with lots of small example portfolios before implementing.

Well yeah, obviously: it makes no sense to calculate costs from a zero position. Where did you see that I had written or coded this? Let me know, because I clearly need to make some drastic corrections somewhere.

I don't think that's correct. I'm pretty sure they do different things, and that would be the case even if you made the other changes described above. See the discussion here which explains my thought process: "Why shadow cost doesn't help - much"

Rob
Must be a misunderstanding on my side. I interpreted w[p] on page 403 of AFTS as the weight of the current solution, not as the weight of the prior day; I guess the "(current)" in the formula on page 402 confused me. I understand your argument about why shadow costs do not help that much because of sign flipping in the forecast. Maybe this can be dealt with by starting the search from the previous day's portfolio and not forcing the same sign there. I'll have to get a deeper understanding by examining some sample portfolios. Thanks for your clarifications, I am really happy to be able to discuss this stuff here.
I've been doing some research into relative momentum as implemented in Rob's latest book (I used my second of three copies for this research). Rather than running a backtest, I wanted to see the effect in play. Here's a somewhat complex chart that says a lot (click to see the full image): basically, I compared subsequent returns with the forecast, broken down by rule variation (x axis) and asset class (y axis). It shows the effect really nicely.
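For anyone who wants to reproduce the breakdown, the calculation behind it is simple. A rough sketch; the input frame and its columns (forecast, subsequent_return, rule_variation, asset_class) are my own assumption, and correlation is just one way of summarising "compared the subsequent returns with a forecast":

```python
import pandas as pd

def forecast_vs_return_grid(df: pd.DataFrame) -> pd.DataFrame:
    """Correlation between forecast and subsequent return, laid out as a grid
    with asset class on the rows and rule variation on the columns."""
    by_cell = df.groupby(["asset_class", "rule_variation"]).apply(
        lambda g: g["forecast"].corr(g["subsequent_return"])
    )
    return by_cell.unstack("rule_variation")

# The resulting DataFrame can be plotted as a heatmap, which is roughly
# what the chart above shows.
```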