@ejosh3420

This guy has saved the lives of many data scientists. And his explanations are easy to understand!

@yosepkim7940

I love the high-level introduction to the concepts. The coding and fine-tuning section starts at 1:52:00.

@Akash-hq3gs

Play at 1.5x, you won't miss anything. Oh btw, that's an example of quantization. Feel free to skip the first 30 minutes if you understand this.

@sameerprajapati9467

[0, 1, 2, ........, 1000] -> asymmetric quant
[-15, -14, .............1000] -> asymmetric quant
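The ranges above can be mapped onto 8-bit integers with asymmetric (affine) quantization. A minimal NumPy sketch, for illustration only — the function names and sample values are my own, not from the video:

```python
import numpy as np

def asymmetric_quantize(x, bits=8):
    """Map the range [x.min(), x.max()] onto the integers [0, 2**bits - 1]."""
    qmin, qmax = 0, 2**bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    # The zero point shifts the range so x.min() lands on qmin.
    zero_point = int(np.round(-x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Approximately recover the original floats."""
    return scale * (q.astype(np.float32) - zero_point)

# An asymmetric range like the second example above: [-15, ..., 1000].
x = np.array([-15.0, -14.0, 0.0, 500.0, 1000.0])
q, scale, zp = asymmetric_quantize(x)
x_hat = dequantize(q, scale, zp)
```

Because the range is not symmetric around zero, the nonzero `zero_point` is what distinguishes this from symmetric quantization; the reconstruction error stays within about one `scale` step.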

@nyxshadow121

Thank you Krish sir. Recently I have been working with many different LLM models, and I am trying to build my own LLM model for a specific domain. You helped me learn a lot through this video. Thank you for this beautiful tutorial and guidance. Lots of love from Nepal.

@itonlygetsbetterfromherehu4289

This was the best course I have seen when it comes to fine-tuning. I had already fine-tuned 5 models before this video, and this time next year it should be 500, only because you have streamlined fine-tuning for me.

@dbiswas

This is an amazing end-to-end tutorial for anyone who wants to learn fine-tuning concepts with hands-on labs. Thanks Krish!

@rickfuzzy

Quantisation is such a simple thing. Can't believe the time being spent on it. There are videos out there that are far more effective for your time.

@rojanocachofernanda8230

Thanks to you I finished my master's degree <3

@lokeishdesaichetty1198

I think Free Code Camp should use AI to remove the repeated sections of topics in the video; this would help the viewers. Currently you are stitching multiple YouTube videos together into one big video, but there is a lot of repetition in it.

@egonkirchof

I don't understand why people call the first round of training "pre-training". Pre-training is training, and fine-tuning is a second training.

@dendi1076

I have a job interview tomorrow for a GenAI job. I barely know anything about it, but I am going through this tonight. If I get the job tomorrow, I will come back and post a thank you.

@hyperstructured

If we convert 32-bit values into 8 bits, the calculations run quite a bit faster.

This guy is a legend
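To make the 32-bit → 8-bit saving concrete, here's a rough NumPy sketch. The matrix size and the naive symmetric max-abs scaling are my own illustration, not what the video implements:

```python
import numpy as np

# Hypothetical layer weights; real LLM weight matrices are far larger.
w_fp32 = np.random.default_rng(0).standard_normal((1024, 1024)).astype(np.float32)

# Naive symmetric int8 quantization: scale by the maximum absolute value.
scale = np.abs(w_fp32).max() / 127
w_int8 = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)

print(f"float32: {w_fp32.nbytes // 1024} KiB")  # 4096 KiB
print(f"int8:    {w_int8.nbytes // 1024} KiB")  # 1024 KiB, a 4x reduction
```

The 4x memory drop is guaranteed by the dtypes alone; the speedup additionally depends on the hardware having fast int8 arithmetic.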

@thecolonyengineer

Which app is he using for writing notes?

@lwjunior2

He is one of the best teachers. Stop spreading lies

@timmydotlife

Thank you for sacrificing your hair to teach us 🙏

@adriankk

Thanks for the video! Do you have more on this topic?

@bhaskarmondal7461

Thank you so much, I was looking for something just like this for the past few days!

@mmd_k1995

Did you cover the "out of scope answering restrictions" for fine-tuning LLMs in this video? ❤

@alokomprakashhbhandari8090

Hi Krish, I think one correction is needed: at 1:27 you said anything multiplied by 1 is 1 and anything multiplied by -1 is -1 (it should be that anything multiplied by 1 is itself, and anything multiplied by -1 is its negative). Just sharing input since you are doing a great job, so I wanted to make sure it does not have unnoticed mistakes. Thanks for the video.