@Eckster

I've been shocked at how broadly useful Monte Carlo approaches are in general.

I remember one problem where I spent weeks figuring out the correct way to solve it. By that point the solution had become so convoluted that I decided it would be smart to write a Monte Carlo simulation to verify I hadn't made a mistake.

The simulation got the exact same results out to three decimal places, and took about 10 minutes to write.

The other great thing about Monte Carlo simulations is that they make all of your assumptions exceedingly clear, while equations tend to obscure them.
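That verification workflow is easy to sketch in a few lines of R. The dice example below is purely illustrative (not from the comment): check an analytical answer, P(two fair dice sum to 7) = 1/6, against a simulation where every assumption (fair dice, independent rolls) sits in plain sight in the sampling code.

```r
# Illustrative sketch: verify an analytical result with Monte Carlo.
# The assumptions (fair dice, independent rolls) are explicit in the sampling.
set.seed(42)
n <- 1e6
rolls <- matrix(sample(1:6, 2 * n, replace = TRUE), ncol = 2)
mc_estimate <- mean(rowSums(rolls) == 7)
analytical <- 1 / 6
round(c(mc = mc_estimate, exact = analytical), 3)
```

With a million draws, the two numbers should agree to roughly three decimal places, mirroring the experience described above.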

@DrHelios9

I must say, I'm an MBBS student in India, and I love statistics and have been dabbling in it for a long time now. Your videos bring me joy and calmness, because the finesse with which you teach these topics is pure magic. For a geek like me, this is heaven 🪄
Love you, Very Normal, and please keep doing what you do.

@marcusstoica

Yes. I found this method myself after being unsatisfied with traditional power analysis. What's nice is how flexible it is, and how it can be used to quantify and challenge assumptions you have about your data / population.

@gaussology

How are these videos so well made?!

@bingobongo131

this video comes exactly at the right time for me as I'm trying to run a power analysis for maximum likelihood fitted sigmoid functions and I was really running out of ideas :))

@IamJacksHeartCA

Undergrad in math here, love this!

@YaofuZhou

Yup, this is standard practice in particle physics. Eventually the technicality boils down to the modeling of the physical process being investigated, which may involve hundreds of gigabytes of equations. One of the reasons that this is necessary is that there can be signal-background interference in the particle physics processes. What also makes it extra worth it is that the same MC simulation will be used again during data analysis when the actual data collection reaches a checkpoint.

Hopefully, day-to-day business applications do not often involve complex modeling, and formulae for rough estimates may still be the most economical choice, especially when the signal and background do not interfere significantly. However, when heavy machinery such as an MC simulation based on a complex model is built, its value can exceed merely advising on sample size. For example, after the statistical analysis with real-life data is done, if the business wants to improve its operation, the model and simulation can be adjusted to provide outlooks for the improvements being considered.

@santiagodm3483

I love your videos!!
When I was thinking about creating a statistical test, I thought about doing the same thing to find out how powerful my test could be!

@deltax7159

great video man. really enjoy brushing up on my skills via your channel.

@ronaldjensen2948

This is similar to a method I use to show why we "fail to reject the null" instead of just accepting it. If we change the criterion from whether the confidence interval excludes the null to simply the p-value, then plot the simulated p-values as a histogram, we see that when the null is actually true the p-value is simply a uniform random variable. The "falser" the null becomes, the more right-tailed our p-value distribution becomes.

library("foreach")

# Simulate 10,000 two-sample t-tests and collect the p-values.
# Note: foreach's combining argument is ".combine", not "combine";
# with .combine = c the result is already a numeric vector.
sims = foreach(i = 1:10000, .combine = c) %do% {
  groupA = rnorm(30, mean = 0, sd = 1)
  groupB = rnorm(30, mean = 0.125, sd = 1)
  
  test = t.test(groupA, groupB, conf.level = 0.95)
  test$p.value
}

hist(sims, freq = FALSE)

@Nino21370

This channel is so underrated🔥

@galenseilis

One plus of the mathematical formulae (which are not always equations but sometimes inequalities) is that they are computationally fast: a Monte Carlo simulation requires more electrical power than evaluating most formulae. The downside of formulae is primarily that they can be very technical to obtain in the first place, and they are only known to be valid under the assumptions from which they were derived. What's the electrical power cost of spending some length of time working on a formula? I don't know.

@AllemandInstable

I personally still prefer deriving the sample size needed for my estimators from concentration bounds given a certain level of control, which makes more intuitive sense to me. But I also like having other tools in my belt, so thank you for the video, great as usual 😀
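For readers who haven't seen the concentration-bound route, one common version uses Hoeffding's inequality: for the mean of observations bounded in [0, 1], guaranteeing |x̄ − μ| ≤ eps with probability at least 1 − delta needs n ≥ log(2/delta) / (2·eps²). A tiny R helper (mine, not from the comment) makes the calculation concrete:

```r
# Sample size from Hoeffding's inequality for a [0, 1]-bounded mean:
# P(|xbar - mu| > eps) <= 2 * exp(-2 * n * eps^2), solved for n.
hoeffding_n <- function(eps, delta) {
  ceiling(log(2 / delta) / (2 * eps^2))
}
hoeffding_n(eps = 0.05, delta = 0.05)  # 738 observations suffice
```

Note this bound is distribution-free, so it is typically far more conservative than a power analysis that assumes a specific model.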

@anne-katherine1169

Hey! I saw that simulations are used to estimate sample size for mixed models too, but it seemed a bit more complex. If you'd like to make a video on that, it would be super super useful :)

@ronbally2312

Just one step away from using a Bayesian approach :-)

@_r_ma_

Very helpful, thank you! In your code you should replace the magrittr pipe %>% with R's new native pipe |>.
Just a thought for future videos, so that no one gets hung up on a "%>% doesn't exist" error if they don't load the tidyverse.

@innerbloomset

The hard part is that you don't really know the true effect size, and it heavily affects the sample size you need to reach the same confidence.
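That sensitivity is easy to see by simulation. A quick sketch (the parameters here are arbitrary, chosen only for illustration): estimated power of a two-sample t-test with n = 30 per group and sd = 1, swept over a few hypothesized true effect sizes.

```r
# Simulated power of a two-sample t-test as a function of the assumed
# true effect size (n = 30 per group, sd = 1, alpha = 0.05).
set.seed(1)
power_at <- function(effect, n = 30, sims = 2000) {
  mean(replicate(sims,
    t.test(rnorm(n), rnorm(n, mean = effect))$p.value < 0.05))
}
sapply(c(0.2, 0.5, 0.8), power_at)  # power climbs steeply with effect size
```

Since power depends so strongly on the assumed effect, sweeping a range of plausible effects like this is often more honest than planting a flag on a single guess.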

@nicksamek12

You make good videos. Keep it up!

@UnPureMaddness

This video is a blessing.

@RinoLovreglio

Beautiful video!
I believe one of the key issues for a power analysis is the selection of a reasonable effect size. What's your suggestion?