I graduated from Computer Science in 2017. Back then, the cutting edge of ML was Recurrent Neural Networks, on which I based my thesis. This video (and I'm sure the rest of this series) just allowed me to catch up on years of advancements in so little time. I cannot describe how important your teaching style is to the world. I've been reading articles, blogs, and papers on embeddings and these topics for years now, and I never got it quite like I got it today. In less than 30 minutes. Imagine a world in which every teacher taught like you. We would save millions and millions of man-hours every hour. You truly have something special with this channel, and I can only wish more people started imitating you with the same level of quality and care. If only this became the standard. You'd deserve a Nobel Prize for propelling the next thousand Nobel Prizes.
I was trying to understand ChatGPT through videos and texts on the Internet. I always said: I wish 3b1b would release a video about it; it's the only way for someone inexperienced to understand. And here it is. Thank you very much for your contributions to YouTube!!
The best lecture I have ever seen on the intro to Transformers. These videos complement the book "Build a Large Language Model (From Scratch)" by Sebastian Raschka really well.
Grant casually uploading the best video on Transformers on YouTube
The return of the legend! This series is continuing; that is the best surprise on YouTube. Thanks, Grant, you have no idea how much the young population of academia is indebted to you.
Your teaching is an incredible way to stimulate my curiosity
The fact that the meaning behind tokens is embedded into this roughly 12,000-dimensional space, and that you get relationships in terms of coordinates and directions that exist across topics, is mind-blowing. Like, Japan → sushi being similar to Germany → bratwurst is just so darn neat.
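For anyone who wants to try that country → food arithmetic themselves, here is a minimal sketch using gensim's pretrained GloVe word vectors; the model name, the lowercase vocabulary, and whether "bratwurst" actually tops the list are assumptions on my part, and the video itself illustrates the idea with GPT-3's own learned embedding matrix rather than GloVe.

    # Hypothetical sketch: re-apply the Japan -> sushi direction starting from Germany.
    # Assumes gensim and its "glove-wiki-gigaword-100" dataset (downloaded on first use);
    # the actual nearest neighbours depend on the vector set and may not include "bratwurst".
    import gensim.downloader as api

    model = api.load("glove-wiki-gigaword-100")  # 100-dimensional GloVe vectors, lowercase vocab

    # sushi - japan + germany: nearest words to the shifted vector, ranked by cosine similarity
    for word, similarity in model.most_similar(
        positive=["sushi", "germany"], negative=["japan"], topn=5
    ):
        print(f"{word}\t{similarity:.3f}")

The same call with positive=["king", "woman"] and negative=["man"] reproduces the classic gender-direction example mentioned later in these comments.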
I have been working on transformers for the past few years and this is the greatest visualization of the underlying computation that I have seen. Your videos never disappoint!!
I am a math teacher and one of my classes is about AI. I am making this mini-series mandatory viewing. This is just what my students need. Thanks for the exceptional quality of the content on your channel.
I don't even know how many times I'm going to rewatch this.
Two years ago I started studying transformers, backpropagation, and the attention mechanism. Your videos were a cornerstone for my understanding of those concepts! And now, partially thanks to you, I can say: “yeah, relatively smooth to understand”
The genius in what you do is taking complicated concepts and making them easy to digest. That's truly impressive!
I always thought that when people in the media said, "NO ONE actually understands how ChatGPT works," they were lying, but no one was ever able to explain it in layman's terms regardless. I feel like this video is exactly the kind of digestible info that people need. Well done.
This video is gonna blow up. The visualizations will help many people who aren't familiar with NNs or deep learning to at least grasp a little bit of what is happening under the hood. And with the crazy popularity of LLMs nowadays, this will for sure interest a lot of people.
I am a non-AI software engineer, and I've been watching multiple transformer and LLM talks from OpenAI, Stanford Online, NLP PhDs, and even some founding AI researchers. Some with code, some with the encoder-decoder diagram, some with the "Attention Is All You Need" paper, some with ML histories. Still, visualization helps the most at tying everything together in your mind. It's just beautiful, and I love the way you organize the academic terminology. Salute to this series, 100%!
Writing my first academically published paper on AI right now, and I have to say, as an engineer in this space, this is one of the most complete and nuanced explanations of these tools. Gold, nay, platinum standard for educational content on this topic for decades to come.
You must turn the linguistic vector math bit into a short. sushi - Japan + Germany = bratwurst is pure gold.
I love how clean and natural the transition was from “the difference between men and women is almost the same as the one between all kinds of gender-related words” to “the difference between Italy and Germany is almost the same as the one between the vector representations of a certain couple of very powerful, influential, and somewhat world-famous moustached people who lived in those countries in the 1940s.” Being Italian myself, I find this utterly hilarious, even more than it would have been on its own. This is a brilliant demonstration of how simple things can be when explained in a very simple way. You hide some details that are tough to explain as they are, building step by step the simple analogies that guide you toward a robust comprehension of the overall topic. And this, my friends, is a brilliant display of teaching at its finest. This man is just perfect for this job. Very good work indeed.
I'm a mechanical engineering student, but I code machine learning models for fun. I was telling my girlfriend just last night that your series on dense neural networks is the best way to gain an intuitive understanding of the basic architecture of neural networks. You have no idea what a pleasant surprise it was to wake up to this!