I suspect Jim Simons also did correlation trading based on what mathematicians call 'action limits' within 'activity networks'. When two financial instruments or asset classes deviate from their known means and standard deviations over time, an 'action limit' can be set in the algorithm. For example, changes in the price of the 10-year T-note have shown a strong correlation to the price of copper divided by the price of gold. If a rare event occurs beyond, say, three standard deviations, as calculated by a computer program, then it is highly probable that the price of copper will fall and the price of gold will rise, so the correlation with the price of the T-note regresses back toward the mean. The 'action limit' looks like mu ± 2.5·sigma/√N, where sigma is the standard deviation and N is the number of values in the sample of copper/gold ratios, say. So the computer will automatically short copper and buy gold at certain times until the action limit is no longer triggered. It has to do with 'critical path analysis', where vertices in the path represent different activities to be performed, as in the computer generating orders to buy, sell, short, etc.
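A minimal sketch of that idea, assuming made-up price series and an arbitrary 2.5 threshold (none of this is Simons' actual model, just the mechanics the comment describes):

```python
import numpy as np
import pandas as pd

# Hypothetical daily prices; in practice these would come from a data feed.
rng = np.random.default_rng(0)
n = 500
copper = pd.Series(4 + 0.01 * rng.standard_normal(n).cumsum(), name="copper")
gold = pd.Series(2000 + 5 * rng.standard_normal(n).cumsum(), name="gold")

ratio = copper / gold

# Rolling mean/std of the ratio; the action limit is mu +/- 2.5 * sigma / sqrt(N).
N = 60
mu = ratio.rolling(N).mean()
sigma = ratio.rolling(N).std()
upper = mu + 2.5 * sigma / np.sqrt(N)
lower = mu - 2.5 * sigma / np.sqrt(N)

# Trigger: short copper / buy gold when the ratio breaches the upper limit,
# and the reverse trade when it breaches the lower limit.
signal = pd.Series(0, index=ratio.index)
signal[ratio > upper] = -1   # short copper, long gold
signal[ratio < lower] = 1    # long copper, short gold
print(signal.value_counts())
```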
The code " data[(data["state"] == "up") & (data["state"].shift(-1) == "down")] " will return rows in the pandas DataFrame where the current state is "up" and the next state (shifted back by one position) is "down". It should be named "up_to_down" instead.
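A quick check of what that mask actually selects, with a toy DataFrame (the "state" column name is from the comment; the values are made up):

```python
import pandas as pd

data = pd.DataFrame({"state": ["up", "down", "down", "up", "up", "down"]})

# shift(-1) pulls the FOLLOWING row's value up to the current row, so this
# mask is true where today is "up" and tomorrow is "down".
up_to_down = data[(data["state"] == "up") & (data["state"].shift(-1) == "down")]
print(up_to_down)  # rows 0 and 4
```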
Please keep making more videos about reinforcement learning concepts, this is amazing. No one else on YouTube is breaking down these concepts as gracefully as you just did. Phenomenal stuff, man. Thank you!
This mind-blowing video opened my mind to quant model trading in terms of real-world application. I've read a lot about it but never had such an insightful explanation.
Thanks Quant, that was a very informative video! I propose 2 corrections and 1 recommendation. 1. CORRECTION: shift(-1) actually takes you to the NEXT element in the series. So in your program, you need to rename the series up_to_down as down_to_up and vice versa. 2. CORRECTION: The last date of the 'data' series doesn't have a future date to compare against. So the real count of UP/DOWN days is 1 less than the original series length. Just check whether the last day's state is UP or DOWN and subtract 1 from the count you initially obtained. 3. RECOMMENDATION: English (and most languages) is read left to right. So transpose the transition matrix you have defined, so that the row indices are current-day states and the column indices are next-day states (see the sketch below).
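A sketch of what all three points look like in pandas (the "state" column and toy values are assumptions, not the video's actual series):

```python
import pandas as pd

data = pd.DataFrame({"state": ["up", "down", "down", "up", "up", "down", "up"]})

# Drop the last day: it has no next day to compare against (correction 2).
current = data["state"].iloc[:-1].rename("today")
nxt = data["state"].shift(-1).iloc[:-1].rename("tomorrow")  # shift(-1) = NEXT day (correction 1)

# Rows = current-day state, columns = next-day state, so it reads
# left to right (recommendation 3). normalize="index" makes rows sum to 1.
transition = pd.crosstab(current, nxt, normalize="index")
print(transition)
```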
I know you wanted to keep it simple for the viewers, but you backtested the strategy on the same sample from which you calculated the odds. In reality, the sample is continuously changing, and it might not behave in the same manner in the future with the same odds. Great explanation of the Markov odds though... one of the finest segments of this video.
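One way to address the in-sample concern, sketched with toy data standing in for the video's DataFrame (the 70/30 split is arbitrary):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the video's data: random up/down states.
rng = np.random.default_rng(1)
data = pd.DataFrame({"state": rng.choice(["up", "down"], size=1000)})

# Estimate transition odds on the first 70% of history only,
# then evaluate the trading rule on the untouched final 30%.
split = int(len(data) * 0.7)
train, test = data.iloc[:split], data.iloc[split:]

p_up_to_up = (
    ((train["state"] == "up") & (train["state"].shift(-1) == "up")).sum()
    / (train["state"] == "up").iloc[:-1].sum()
)
print(f"in-sample P(up -> up) = {p_up_to_up:.3f}")
# The strategy would then be backtested on `test` with these fixed odds.
```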
Good thing I took linear algebra. Markov chains were the one thing I actually learned and enjoyed learning. Thank you for the explanation and coding walkthrough.
a youtuber who actually reads books ;___; finally I'm home
Ok, I'm gonna be a quant, you convinced me. Greatest video of all time.
I like the Markov process: it defines the probability of the price returning to where it started. We could apply this to polynomial regression, since the price has a good probability of getting back to the middle line (rough sketch below).
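If I read the idea right, something like this (purely illustrative; the polynomial degree and the 2-std threshold are guesses, not anything from the video):

```python
import numpy as np

# Hypothetical price series with trend, cycle, and noise.
rng = np.random.default_rng(2)
t = np.arange(300)
price = 100 + 0.05 * t + 5 * np.sin(t / 20) + rng.standard_normal(300)

# Fit a polynomial "middle line" and measure deviation from it.
coeffs = np.polyfit(t, price, deg=3)
midline = np.polyval(coeffs, t)
deviation = price - midline

# Mean-reversion view: large deviations are expected to shrink back toward 0.
z = deviation / deviation.std()
print("days beyond 2 std:", (np.abs(z) > 2).sum())
```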
Beautiful work. Now we as your audience can help optimise the code and share findings. For example, there is no need to calculate up_to_up and then separately calculate up_to_down; simple statistics lets us use Probability(up_to_down) = 1 - Prob(up_to_up). So if you calculate one, you know the other, because the transition probabilities out of a state must sum to 1.
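In code, with toy data standing in for the video's series:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
data = pd.DataFrame({"state": rng.choice(["up", "down"], size=1000)})

is_up = (data["state"] == "up").iloc[:-1]            # exclude last day (no next day)
next_is_up = (data["state"].shift(-1) == "up").iloc[:-1]

p_up_to_up = (is_up & next_is_up).sum() / is_up.sum()
p_up_to_down = 1 - p_up_to_up                        # complement: no second count needed
print(p_up_to_up, p_up_to_down)
```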
It's cool to notice that this mean-reverting strategy seems to perform better under higher-volatility conditions.
What a fantastic explanation of the Markov process, thank you so much!
Incredible explanation of one of the most powerful strategies of the most successful trader in the world 🌎
I never thought a Markov model would be deployed in trading. I read about hidden Markov chains long ago.
High quality content. Love this.
RIP Legend ❤
Question: At some point it is stated that Markov chains do not care about the history of states or what the previous state was, but I feel like this is contradicted by then showing a model where we check whether the past 3 days have been loss days. What am I not understanding? Do we consider "4 days of consecutive loss" to be a single state?
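That is the usual resolution: the streak itself is folded into the state definition, so the chain stays memoryless. A toy sketch of the idea (the state names here are my own, not the video's):

```python
import pandas as pd

# Encode "how many consecutive down days so far" as the state itself.
# Then knowing the current state is enough; no extra history is needed.
moves = pd.Series(["down", "down", "up", "down", "down", "down", "up"])

streak = 0
states = []
for m in moves:
    streak = streak + 1 if m == "down" else 0
    states.append(f"down_streak_{min(streak, 3)}")  # cap at 3: "3+ down days"
print(states)
```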
A Markov Decision Process (MDP) must satisfy the Markov property, which states the following: the next state depends only on the current state and the action taken there, independent of all the previous states that led to the current state.
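In symbols (the standard textbook form, not something from the video): P(S_{t+1} = s' | S_t = s, A_t = a, S_{t-1}, ..., S_0) = P(S_{t+1} = s' | S_t = s, A_t = a).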