Alan,

Thank you very much for all the information you put into your posts. It's always interesting and very well explained. I really learned a lot reading this thread. I think I understood most of your management techniques, but there are two points that are still not clear to me.

1) When you speak about a model you seem to have two different levels of weightings. In your correlation files, when you present your models, you have something like this (example taken from your 2modcor.txt file):

                Modified
    Model #     Sharpe      Weighting
    -------     ---------   ---------
    1           0.7600      1.00
    2           0.5454      5.80

    Two Model Results

                Modified                               12 Month    +1          -1
    Model #     Sharpe     Weighting     Roll Cor.     Std. Dev.   Std. Dev.   Std. Dev.
    -------     --------   -----------   ---------     ---------   ---------   ---------
    1   2       1.0291     3.00   1.00   -.1753        0.1655      -.0098      -.3408

In the first part you show a weighting of 5.80 for model 2 against model 1, and in the second part you show a weighting of 3.00 for model 1 against model 2. I understand the second weighting but not the first one. Is this one a weighting to accommodate volatility fluctuations? If so, can you tell us a bit about how it works?

2) When you give examples of Monte Carlo simulations with your money management software, you always show the drawdown as a percentage (example taken from your 1modmmg.txt file):

    Confidence                               Profit
    Level          Result        Return      Factor    DD
    ----------     ------------  --------    ------    -----
    2% Level       521,033.84    104.2%      2.56      22.0%
    4% Level       455,710.59     91.1%      2.37      19.8%
    5% Level       434,105.94     86.8%      2.31      19.0%
    6% Level       417,854.22     83.6%      2.26      18.4%
    8% Level       389,192.06     77.8%      2.17      17.4%

What I do not understand is why the drawdown percentage increases with the return. Shouldn't it be the other way round: if we have big drawdowns then the results will be small? Perhaps I did not understand what it is a percentage of?

JC
I'd like to know which items are difficult to understand. I know what I know, but I don't have a clue as to the difficulty others may be having in comprehending the material. I know I've skimmed over a lot of material, but that's because I'm only trying to stimulate ideas. Alan
The "Edge Test" is a concept. The concept is to separate luck from skill in determining the backtested results. How you do define the measurement is up to you. If I had a Cray supercomputer and had 50 trades to test for a year I could probably rank all the different combinations of 50 trades within the 250+ trading days. Since I don't have that kind of computing I have to sample the trades from the pool of potential trades. If I had the desire I'd build a minute by minute database and rank each trade against the exact entry and exit time for every day of the year. Then come up with a weighting scheme for the proportion of longs versus shorts. I'm sure there's many other ways to do the same tests. In the end it all comes down to "does the random selection of trades adequately represent the luck component that I'm measuring against?". If so, then it's worth keeping. If not, then try other ways of accomplishing the same goal. When I've used the test I've disabled the stops and exit with profit strategies so that all I'm testing are my entry and exit at end of day versus random "luck" entry at open and close at end of day. The only purpose of the test is to determine if my entries are a result of luck or skill. Hope this helps. Alan
Just trying to show how to use some flexibility. If you had one model and ten securities in each of two sectors, you could run your model on all ten securities in each sector. Then total the results of each sector on a daily basis. Then use the correlation of the two sectors to determine how much funding should be weighted to each sector to produce the most consistent results. You could also use two common pairs strategies within each sector, such as buying the best performer and fading the worst performer as one pair. The other could be a mean-reverting pair: sell the best performer and buy the worst performer. You then develop daily totals for each of the two strategies within the sector. Then use the correlation and weighting to determine which of the two pairs strategies gives the most consistent results over time. You could do this for multiple sectors and then run the same correlation tests against the sector results to determine the weighting between sectors, using pairs strategies within each sector. The test can be used an endless number of ways. Alan
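A rough sketch of the sector-weighting step under stated assumptions: total the model's results per sector by day, then scan relative weightings (1.0 to 10.0 in 0.2 steps, as mentioned later in the thread) and keep the one whose combined daily series is most consistent. The consistency score here (mean divided by standard deviation) is only a stand-in for the "modified Sharpe" used in the actual software; all names are assumptions.

```python
import statistics

def sector_weighting(sector_a_daily, sector_b_daily):
    """Scan weightings of 1.0-10.0 in 0.2 steps for sector A against sector B at 1.0."""
    corr = statistics.correlation(sector_a_daily, sector_b_daily)
    best_w, best_score = 1.0, float("-inf")
    for i in range(46):                      # 1.0, 1.2, ..., 10.0
        w = round(1.0 + 0.2 * i, 1)
        combined = [w * a + b for a, b in zip(sector_a_daily, sector_b_daily)]
        # Consistency proxy: mean daily result divided by its variability.
        score = statistics.mean(combined) / statistics.pstdev(combined)
        if score > best_score:
            best_w, best_score = w, score
    return corr, best_w, best_score
```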
I'm around. I've tried to start work on the next installment many times in the past two weeks. Each time I've had to drop it because I was including material I currently use to trade. I don't know when I'm going to post again. I'll keep trying to come up with something that I can post, but don't hold your breath. This is the time of year I prefer to trade and not talk about trading. Alan
1). The single model results are not model 2 versus model 1. They are the best weighting from 1.0 to 10.0 in 0.2 increments, based on the best modified Sharpe ratio for that particular model. I haven't looked at the single model test closely. In a single model context the results are meaningless, since if we only had one model we'd use all our money on that model. 2). I've chosen to sort the results from best (top) to worst (bottom). The drawdowns are sorted from worst (top) to best (bottom). I've done this because I prefer to see the relationship between the best performance and the worst drawdown versus the worst performance and the best drawdown. In the past I've discovered no linkage between results and drawdown. This is one of the ways I force myself to avoid thinking there is some relationship. The drawdown measures the peak-to-valley decline in relation to the account size at the time of the drawdown. Alan
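A small sketch of the drawdown definition given above: the peak-to-valley decline measured against the account size at the time of the drawdown. The equity_curve list of daily account values is an assumed input.

```python
def max_drawdown_pct(equity_curve):
    """Largest peak-to-valley decline as a fraction of the peak account size."""
    peak = equity_curve[0]
    worst = 0.0
    for equity in equity_curve:
        peak = max(peak, equity)
        worst = max(worst, (peak - equity) / peak)
    return worst

# Example: max_drawdown_pct([100_000, 110_000, 90_000, 120_000]) -> ~0.182 (18.2%)
```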
Hello Alan, You have mentioned that when a strategy performs well against random, you trade it until it no longer performs well against random. Rather than dropping a strategy altogether, I am considering re-optimizing a strategy's parameters should the current parameters begin to falter (this would boost it back up well above random for recent data). Is this implied in your discussion here or mentioned elsewhere, or do you just drop the strategy? This seems to make sense to me... am I over-curve-fitting? Anybody have any thoughts/ideas? Adam
All of the edge based models are developed based on a perceived structural anomaly. For the edge criteria, if the longs or shorts drop below 70% of the random trades then it's no longer traded. I've only had one model that this has happened to. I know the rationale for the model was revealed in a trading book in the late 90's. By mid-2001 the model failed the quarterly edge test and I stopped trading it. I also suspend trading any model if it hits the 95% modeled drawdown or the 5% return level from the Monte-Carlo sims. Since I'm no longer doing anything with the model, I'll post it on here after the close to show what an edge-based model looks like. Alan
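A hedged sketch of the suspension rules described above. The thresholds (70% of the random benchmark, the 95% modeled drawdown, the 5% return level) come from the post; the function and argument names are assumptions for illustration.

```python
def keep_trading(long_result, short_result, random_long, random_short,
                 current_dd, modeled_dd_95, return_to_date, return_level_5):
    """Return False if the model should be suspended."""
    # Quarterly edge criterion: either side falling below 70% of the
    # comparable random-trade result retires the model.
    if long_result < 0.70 * random_long or short_result < 0.70 * random_short:
        return False
    # Monte-Carlo guards: hitting the 95% modeled drawdown or falling to
    # the 5% return level suspends trading.
    if current_dd >= modeled_dd_95 or return_to_date <= return_level_5:
        return False
    return True
```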
This model was based on something I found using the VIX as a "tell" for future market direction. I started out in trading with OEX index options, and I found something there which has been pretty interesting. I traded this model in the SP market from Jan. 1996 to June 2001. I also traded the ND market with it from 1997 to June 2001. Since then it's been gathering dust, but it still makes a few dollars. It was designed to be used with a strategy called "defender". The idea was that anytime the market moved negative from the entry price you exited the position. Anytime the market moved positive from the entry price you re-entered the position. After x number of crossovers you increased the size on each crossover into the profitable area to cover transaction costs. Please don't ask me about the logic of this system. If it continues to perform well, maybe I'll look to add a version of it back into my trading. The main thing to notice is that you can develop a trading method without using price data as the main source of input and do just fine. The two pieces of data are:

data1: Daily continuous backadjusted futures contract for either the ND or SP market.
data2: VIX daily prices
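A speculative sketch of the "defender" overlay as described above: flat whenever price is below the original entry, in the market whenever it is above, and after a set number of crossovers the size is stepped up on each re-entry to help cover transaction costs. The function name, the sizing step, and step_after are all assumptions, not the author's actual rules.

```python
def defender_positions(prices, entry_price, base_size=1, step_after=3, step=1):
    """Yield (price, position size) for each price observed after entry."""
    size = base_size
    in_market = True
    crossovers = 0
    for price in prices:
        if in_market and price < entry_price:
            in_market = False            # exit on a move below the entry price
            crossovers += 1
        elif not in_market and price > entry_price:
            in_market = True             # re-enter on a move back above entry
            crossovers += 1
            if crossovers >= step_after:
                size += step             # scale up to recoup transaction costs
        yield price, size if in_market else 0
```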