I regularly ask weird questions to both gnomes, just to check what they are developing behind the curtain. One question we can ask, since we are on a trading forum, is "how to get rich quick". If you want to try it yourself, ask for an algorithm to scalp the Nasdaq on an intraday timeframe. ChatGPT: sure my man! It proceeds to describe the best way to do it, vomits a bunch of Python code, and throws some made-up references at you. Book names included. Gemini: it can't give you the algorithm, and proceeds to warn you about the dangers of what you are trying to do. It tries to convince you that scalping is not recommended and that you should try something else. It looks like the boys at OpenAI have some work to do. I will update this thread from time to time as new versions come out, so the comparison stays fair and up to date. Feel free to post any nonsense that you might have received from those two AIs.
A completely made-up algorithm that runs two random actions. A recipe for disaster. Thankfully it is not connected to any broker. It is interesting that it hardcodes a 0.01 to size a position at 1% of the portfolio. I wonder why they vomit Python by default.
I do have my own algos, and my own indicators. That is a basic algo based on an MA and an RSI. Notice that it enters long or short based on hardcoded values, without taking into account volatility, market events, major news, FX rates, the currency it trades in... the list can go on and on. Also, it is supposed to run on a 1m timeframe. A proper guillotine. It makes sense; those AIs are not going to give you the recipe.
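For context, here is a minimal sketch of the kind of naive MA+RSI scalper these chatbots spit out. This is my own reconstruction, not the actual output: the 20-period SMA, the 14-period RSI, the 30/70 cutoffs, and the 0.01 sizing are all assumptions standing in for whatever hardcoded values the bot picked.

```python
# Hedged reconstruction of a naive chatbot-style 1m MA+RSI scalper:
# hardcoded thresholds, a hardcoded 0.01 (1% of portfolio) position size,
# and zero awareness of volatility, news, or costs.
import random

def sma(prices, n):
    """Simple moving average of the last n prices."""
    return sum(prices[-n:]) / n

def rsi(prices, n=14):
    """Simple (non-Wilder) RSI over the last n price changes."""
    deltas = [prices[i] - prices[i - 1]
              for i in range(len(prices) - n, len(prices))]
    gains = sum(d for d in deltas if d > 0)
    losses = -sum(d for d in deltas if d < 0)
    if losses == 0:
        return 100.0
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

def signal(prices, portfolio=100_000.0):
    """Returns (side, position_size) from hardcoded cutoffs."""
    size = 0.01 * portfolio          # the hardcoded "1% of portfolio"
    r = rsi(prices)
    if prices[-1] > sma(prices, 20) and r < 30:
        return ("LONG", size)
    if prices[-1] < sma(prices, 20) and r > 70:
        return ("SHORT", size)
    return ("FLAT", 0.0)

random.seed(42)
prices = [15000.0]                    # fake 1-minute Nasdaq closes
for _ in range(60):
    prices.append(prices[-1] + random.gauss(0, 5))
print(signal(prices))
```

Note how the entry condition can barely ever fire (price above the MA *and* RSI oversold), which is exactly the kind of internal contradiction you get from hardcoded values with no reasoning behind them.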
I asked them both: "What is the Sharpe ratio for this system: 200 trades per year, a 60% win rate, and a 1.6 average risk-to-reward ratio?" GPT got it right (6.06); Gemini got it slightly wrong (7.36). Then I asked for the best bet size for the same system. This time Gemini got it right (35%), while GPT applied the Kelly formula incorrectly and came back with a nonsensical negative bet size.
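For anyone who wants to check the arithmetic, here is a quick sketch under one common convention: trade outcomes in R-units (win = +1.6R with p = 0.6, loss = -1R with p = 0.4), annualized by the square root of the trade count. Other conventions shift the Sharpe number a bit, which may be part of why the two bots disagreed.

```python
# Sanity-check of the numbers in the post, under one common convention.
import math

p_win, rr, trades_per_year = 0.60, 1.6, 200

# Per-trade expectancy and standard deviation in R-units
mean = p_win * rr - (1 - p_win) * 1.0                  # 0.56 R per trade
var = p_win * rr**2 + (1 - p_win) * 1.0**2 - mean**2
sharpe_annual = mean / math.sqrt(var) * math.sqrt(trades_per_year)

# Kelly fraction for an uneven payoff: f = p - q/b
kelly = p_win - (1 - p_win) / rr                       # 0.35

print(f"Annualized Sharpe ~ {sharpe_annual:.2f}")      # ~6.2 with this convention
print(f"Kelly fraction = {kelly:.0%}")                 # 35%
```

With this convention the Sharpe comes out around 6.2, in the same ballpark as GPT's 6.06, and the Kelly fraction is exactly the 35% Gemini gave. A negative Kelly would only make sense if the system had negative expectancy, which this one clearly does not.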
You get different results for the Sharpe ratio because it depends on how you calculate it, and it can produce different results depending on the data you run it on. You can pick specific data to produce a better result; look up Sharpe ratio pitfalls and you'll see what I mean. The negative size for that bet is hilarious.
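A toy illustration of the cherry-picking pitfall, with made-up deterministic returns: the same equity curve gives a completely different Sharpe depending on which slice of history you compute it on.

```python
# Same strategy, different Sharpe: it all depends on the window you pick.
import math

def ann_sharpe(rets, periods=252):
    """Annualized Sharpe from per-period returns (population std, rf = 0)."""
    m = sum(rets) / len(rets)
    v = sum((r - m) ** 2 for r in rets) / len(rets)
    return m / math.sqrt(v) * math.sqrt(periods)

good = [0.002, -0.001] * 126      # a mildly positive stretch (one "year")
bad  = [-0.002, 0.001] * 126      # its mirror-image negative stretch
full = good + bad

print(f"Sharpe, full history  : {ann_sharpe(full):.2f}")   # zero expectancy
print(f"Sharpe, good year only: {ann_sharpe(good):.2f}")   # cherry-picked window
```

Over the full history the strategy earns nothing and the Sharpe is zero, but quoting only the good stretch produces a number north of 5. Same system, same formula, very different story.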
It is always useful to include a Sortino ratio as well. https://pictureperfectportfolios.com/sharpe-ratio-vs-sortino-ratio/
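A minimal sketch of the difference, on made-up returns: Sharpe divides excess return by total volatility, while Sortino divides by downside deviation only, so it doesn't punish a strategy for upside swings.

```python
# Sharpe vs Sortino on the same per-period returns (rf = 0, no annualization).
import math

def sharpe(returns):
    """Mean return over total standard deviation (population std)."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / n
    return mean / math.sqrt(var)

def sortino(returns):
    """Mean return over downside deviation: only sub-zero returns count."""
    n = len(returns)
    mean = sum(returns) / n
    downside_var = sum(min(r, 0.0) ** 2 for r in returns) / n
    return mean / math.sqrt(downside_var)

rets = [0.02, -0.01, 0.015, -0.03, 0.01, 0.005, -0.005, 0.02]
print(f"Sharpe : {sharpe(rets):.3f}")
print(f"Sortino: {sortino(rets):.3f}")
```

For a series with more upside moves than downside ones, the Sortino comes out higher than the Sharpe, which is exactly why it's worth reporting both.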
Just shows these AIs don't really have a deep understanding of problems. I don't think we get ASI or AGI until these things are sentient or conscious like humans are. I think our organic brains interact with the universe in a way that we haven't figured out yet, so we have no way of making computers conscious until we figure that out.
AI is invaluable for refining well-thought-out ideas provided by the user. It has no creativity of its own; you must provide that. So basically AI is a reflection of its user: if you're a failure, it's a failure.