Well crap, acrary. Maybe too much, too fast. Sorry if my questions opened the floodgates. I'm sure time will let you get this stream of consciousness back on course. So far, so good, all in all. Don't worry about the trivial mistakes. I think I always expect more from your posts because I know you are smarter than me. But instead of dwelling on that, let's look at what I've got so far: that moving average system from JWH, for one thing. This journal, although covering stuff I've learned and forgotten, is undoubtedly useful to people who haven't encountered anything like this before. And there's the promise of better days ahead. Best of luck with this effort. Just know I am behind you as you go forward, and that you can do what you firmly believe you can. Keep fightin'.
Sorry if this is off topic, but might I ask what you consider to be better than neural networks? -bulat
Sorry I was away yesterday. The weather forecast for yesterday was great and today's was supposed to be bad, so I went flying. Today I'll be around for most of the day.

In my haste to finish the last topic I made a mistake. I went back this morning to try to understand how models 1 & 3 couldn't do better than 1, 2, and 3 combined. With the difference in the two numbers being about .2, I expected different results. It's also been my experience that a .2 improvement shows in the overall performance numbers. When I re-ran the tests, I found the correct number for 1 & 3 was 1.166, not 1.66. Sorry for the mistake. I'm including an edited copy of that post for anyone cutting and pasting this stuff.

Correlation cont'd. With the one number I now have a way of doing comparisons. If I combine say model 1 and model 2 and run the same tests, I'll get one result. If I combine models 2 and 3 I'll get another. And if I combine 1 and 3, still another. Then if I do many runs in which I weight each pass, say 1 unit of model 1 to 2 units of model 2, I'll get more results. In the end I found I had to develop a program to do all these passes. It weights each model from 1 - 100 units and determines the optimal modified sharpe ratio. Then it determines the ratio between the methods to decide how much of each should be traded. If I have 4 models it'll do tests on all four of them; likewise for five, six, 10, 50, etc. All it takes is compute time. For 3 models it takes about 15 sec.; for 30 it takes about 1 1/2 hours. If I choose to do all my models, I leave for a couple of days. In the end it gives me the optimal balance for the best modified sharpe ratio. For my trading I had to use 8 very good models to get the number up to 2.58, so that's why I trade against 8 models.
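For anyone curious what such a weighting program might look like, here's a minimal brute-force sketch in Python. This is not acrary's code: I'm assuming "modified sharpe ratio" here means mean monthly P&L divided by its standard deviation, and I use a small weight grid instead of 1-100 units just to keep the example quick.

```python
# Hypothetical sketch of the weight-search program described above.
# Assumption: "modified sharpe" = mean(monthly P&L) / stdev(monthly P&L)
# of the combined, weighted series.
from itertools import product
from statistics import mean, stdev

def modified_sharpe(monthly_pnl):
    s = stdev(monthly_pnl)
    return mean(monthly_pnl) / s if s > 0 else float("-inf")

def best_weights(models, max_units=10):
    """models: equal-length monthly P&L lists, one per model.
    Tries every integer weight combination from 1..max_units units
    and returns (best score, best weight tuple)."""
    best = (float("-inf"), None)
    for w in product(range(1, max_units + 1), repeat=len(models)):
        # combine the models month by month at these unit weights
        combined = [sum(u * p[m] for u, p in zip(w, models))
                    for m in range(len(models[0]))]
        score = modified_sharpe(combined)
        if score > best[0]:
            best = (score, w)
    return best

# toy data: two partially offsetting monthly P&L streams
m1 = [4, -1, 3, 2, -2, 5, 1, -1, 3, 2, 0, 4]
m2 = [-1, 3, 1, -2, 4, 0, 2, 3, -1, 1, 3, -2]
score, weights = best_weights([m1, m2], max_units=5)
```

With a 1-100 grid and many models this blows up combinatorially, which fits the "leave for a couple of days" compute times mentioned above.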
In our 3 model example I ran my program and here were the results:

Combine models 1 & 2: best modified sharpe ratio 1.395, using 1 unit of model 2 to 1.5 units of model 1
Combine models 1 & 3: best modified sharpe ratio 1.166, using 1 unit of model 1 and 1.333 units of model 3
Combine models 2 & 3: best modified sharpe ratio 1.057, using 1 unit of model 2 and 2.04 units of model 3
Combine models 1 & 2 & 3: best modified sharpe ratio 1.461, using 1 unit of model 2, 1.26 units of model 1, and 1.08 units of model 3

From this test I can see I should be trading all the models in the ratios shown to achieve the maximum consistency. At 1.461 the z-score translates into 85.56% winning months (about 1 3/4 losing months per-year). Not up to the 99% level, but a definite improvement over any combination of two systems. If I had a fourth model I could do the same test and see if it should be included, and if so, what the optimal ratio should be.
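The z-score-to-winning-months translation can be checked with the standard normal CDF. Exactly how acrary scales his modified sharpe ratio into a monthly z-score is his own convention, so the sketch below only shows the generic z → probability step. Note that 85.56% winning months works out to 12 × 14.44% ≈ 1.73 losing months, matching the "about 1 3/4 losing months per-year" above.

```python
# Generic translation from a z-score to the expected fraction of winning
# periods, via the standard normal CDF. (The scaling from a modified
# sharpe ratio to this z-score is an assumption left to the reader.)
import math

def norm_cdf(z):
    """Area under the standard normal curve below z."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def losing_months_per_year(z):
    """Expected losing months per year given a monthly z-score."""
    return 12 * (1 - norm_cdf(z))
```

For example, a monthly z-score of about 1.06 corresponds to roughly 85.5% winning months.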
I don't look for a market for a system. Each system I've developed targets some behavior. If that behavior is present in multiple markets, then I can test to see if my system captures the behavior better than random. If so, I'd just trade it on that market and check to make sure the behavior was persistent. For instance, I have a volatility breakout model that I've used successfully in the SP market. I tested it against the DAX market and found the edge (ability to capture profits at better than random) was better in the DAX than the SP. I've been trading the DAX with it since then and it's done very well. The only thing I don't like is getting up in the middle of the night to trade.

Every model I've worked on has gone through the same process. Look at the behaviors present in a market, characterize them by creating a rule and checking the fit until all behaviors are noted. Then start looking to see if there is a component to the behavior that is non-random. If so, develop a system to mine it and create a way to monitor the behavior to ensure it's persistent over time. For example, one of the widely known behaviors is the trend day in the SP market. It can be identified just by visually inspecting a chart. I characterized it as a low/high within 10% of the low/high of the day and the close within 20% of the high/low of the day. With the definition I can see how many of these days have persisted over the years (it averages about 25 days per-year). Then I can see if there is a way to identify these days in advance (realizing I'm also going to be capturing some false days as well).
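One possible reading of that trend-day characterization in code — the exact reference points are my assumption (I use the open as the "within 10%" anchor; acrary's definition may differ):

```python
# Hypothetical reading of the trend-day rule above: measured against the
# day's range, an up trend day opens in the bottom 10% of the range and
# closes in the top 20% (mirrored for a down trend day).
def is_trend_day(o, h, l, c):
    rng = h - l
    if rng <= 0:
        return False
    up = (o - l) <= 0.10 * rng and (h - c) <= 0.20 * rng
    down = (h - o) <= 0.10 * rng and (c - l) <= 0.20 * rng
    return up or down
```

Running a definition like this over years of daily bars is how you'd get a count like "about 25 days per-year" and check that the behavior persists.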
You already know that as liquidity goes down, time in trade must go up. Applied to your situation, you might have to create a smaller pool of securities that have sufficient liquidity for your current strategy. Then create another method that uses a longer holding time for the less liquid securities. This way you'd be diversifying by method, time, and securities. You could treat each method and its securities as a single pool and test the correlation between the two strategies. If the strategies were found to be non-correlated, you could take on larger size and reap the benefits of diversification.
Acrary, when you add a non-correlated instrument to your project, like the EUR/USD (since the correlation between the S&P and the EUR/USD is only about .11, + or - I don't remember), you would do the same research on that instrument and probably also get the best modified sharpe ratio from a combination of the 3 models. Then look for the optimal weighting between (1 & 2 & 3 - S&P) and (1 & 2 & 3 - EUR/USD). I am pretty sure you will find an even better sharpe ratio. It's easier to find a non-correlated instrument than a fourth model.. just a thought
I don't know what turtle trader uses. I'm sure my method is pretty simplistic. For daytrading I just calculate the range (high - low), then average it over the past ten days. I use ten because I want my model to cut back on size pretty quickly if volatility jumps. Then I divide the highest historical 10-day volatility (approx. 48 pts.) by the current volatility (ex. 8 pts.) to come up with a multiplier (ex. 6). The model would then apply 6 contracts for the next trade. This is not the final size used to trade. It's just used to adjust the model for volatility levels so I can measure one period against another without volatility being a consideration. By doing so, I can see if the same level of opportunities persists from period to period. I can also use these normalized trades to feed into money management models as well as Monte Carlo tests to estimate future performance and drawdowns. If you were to use trades from, say, 2000 and 2004 for the SP market in a Monte Carlo test without normalizing volatility, you'd get a very distorted estimate of future performance.
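A minimal sketch of that normalization, using the numbers from the example above (the 48-pt historical peak is taken straight from the post; function names are mine):

```python
# Volatility normalization as described above: average the daily range
# over the last 10 days, then divide the highest historical 10-day
# average range by the current one to get a size multiplier.
def ten_day_avg_range(highs, lows):
    """Average daily range (high - low) over the most recent 10 days."""
    ranges = [h - l for h, l in zip(highs, lows)]
    recent = ranges[-10:]
    return sum(recent) / len(recent)

def size_multiplier(current_vol, max_hist_vol=48.0):
    """e.g. 48-pt historical peak / 8-pt current vol -> 6 contracts."""
    return max_hist_vol / current_vol
```

The point is the inverse relationship: when the 10-day average range jumps, the multiplier (and therefore size) drops quickly, so trades from low- and high-volatility periods become comparable.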
Quote from onelot: acrary, regarding the correlation of multiple instruments versus model correlations: after finding no correlation between instruments, should we look at the instruments as separate models if we are using multiple instruments with one model, in order to carry on the sharpe work? i'm not sure if that's what you were implying here in response to virgin's post: so for instance, instead of the spreadsheet comparing model1>model2>model3 on one instrument it could compare instrumentX>instrumentY>instrumentZ with one model, no?... as opposed to only measuring, say, the correlation of the 30 day price average between instruments.

Yes, that's the idea. You'd just have to make sure the periods were identical (in this case monthly). If you were doing long term trendfollowing you might have to switch to quarterly or longer timeframes for comparison due to the fewer trades.

Quote from onelot: just to make sure i understand the big picture, this is all being done to increase frequency in the desired profitable timeperiod, correct? so if i'm understanding, sub-par models tested individually with low frequency can be morphed into an above-par model when combined with other non-correlated sub-par models (assuming they're not too sub-par), thus increasing frequency, consistency, and lowering the need for a higher profit factor? thank you for presenting the information the way you did, i would not have made that connection otherwise (assuming i'm on the right track). fascinating.

Yes, you understand it perfectly. If you ever trade money for others, the first thing they want to know is that you've done everything possible to avoid losing periods. If you tell them "my performance period is one month and I've planned my trading so that there is a 95% chance of a profit within any single period", they'll be happy to hear what else you have to say.
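Treating instruments as models just means feeding their per-period (here, monthly) P&L series into the same correlation test before running the sharpe work. A minimal Pearson correlation sketch, assuming the two series cover identical periods (names are mine, not from the thread):

```python
# Pearson correlation of two monthly P&L series over identical periods.
# A value near 0 suggests the instruments can be treated as independent
# "models" in the combination tests above.
from statistics import mean

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

As noted above, for long term trendfollowing you'd aggregate to quarterly or longer buckets first, since monthly buckets would hold too few trades.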
You'll also see by working hard on consistency that you have better money management options available to you than "risk no more than 2% of capital on any trade".
That's a great link. Unfortunately, the z scores in the top part are computed with the wrong convention for this purpose: they give the one-tailed area below z rather than the central two-tailed area. For instance, a z score of 1.96 should indicate an area of .95 (within +/- 1.96); the site figures it as .975 (everything below 1.96).
If you've learned this and forgotten it, then no doubt you've replaced it with something better. Please contribute anything you're willing to divulge that goes beyond this material. Obviously you're more advanced than me, and I would be grateful for any clues as to what additional direction(s) I should be checking out. Also, if you've seen this stuff in any book, please let us know the name of the book. I know I would love to read about it from another's perspective.