This is one of the most informative and, above all, fun, easy-to-watch, and understandable lectures I've ever seen. I wish you were my professor! Please keep posting them. Thanks!
Incredible talk! Very informative and an excellent intro to RLHF
Your lecture brought back memories of a book I read in the 1980s called "Introduction to Neural Networks" by Jeannette Stanley. As I'm sure you're aware, the scientific literature on neural-network artificial intelligence goes back many decades. I picked up my old 1988 2nd edition of that book today and was amused at how advanced neural network theory already was all those years ago, and dismayed by how many years have passed only to bring AI to what many experts today say is still a nascent stage of development.
Good lecture. It's not easy to explain stuff in 39 minutes like this. Thanks.
Great materials, thank you so much, Prof. Natasha!
Ma'am, I've subscribed and I will always listen to your lectures. As someone who works in AI, your knowledge and presentation skills are through the roof. I'm glad my feed recommended this video.
One of the best lectures on RL
I really enjoyed this and learned a lot. Thank you, Professor.
Excellent presentation. Note to viewers: the "ting" sound at 33:27 is from the video. Don't worry, you did not miss a message on your mobile...
My YouTube never recommended me lectures like this before, but I'm glad it did this time.
Highly interesting, thank you. I am keen to hear how the reward and penalty models work. My key interest is that I think the reward model needs its own architecture as a symbiotic "lead" agent.
Excellent presentation, Natasha, and great work with your team. I will check out your papers
Excellent talk.
Reinforcement learning is really cool
Very interesting talk, and I'll admit that most of it goes way over my head. I guess my main concerns are the general ethics of making people trust and believe the AI is a real person, and that we won't be able to distinguish between AI and real content.
Great lecture on ChatGPT and AI in general. Humanity is going through one of the greatest challenges and tests of our time. As you mention in the lecture, we humans are collecting all our creativity and information (books, text, images, art, audio, and so on; the more data the better, all human creations) to train AI, essentially creating a human competitor. Nobody denies or has stopped human progress, past or present, whether for good (medical advancement) or bad (nuclear weapons); humanity has used both on itself, from medical benefits to deadly weapons turned on other humans. The question arises: at a time when humans and the planet face so many challenges, destruction of the earth's natural resources, poverty, climate change, income inequality, and much bigger problems, do we need to pump billions of dollars, massive research talent, and massive amounts of energy and resources taken from humanity into AI data centers to create a human competitor? And one last layperson concern (as promised in the title): no matter how many resources you pump into AI by taking from humanity and its creations, AI cannot produce the food for human survival, the babies to continue the human race, or the beauty of life and death, and it cannot pay my bills and taxes... or the research money you are standing on?
The KL penalty is like saying "You should only compete against one person: yourself. And measure how far you have come."
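The analogy above can be made concrete with a small sketch: in RLHF, the fine-tuned policy is rewarded for good responses but penalized for drifting too far from its frozen reference model (its "past self"), as measured by KL divergence. All distributions, the coefficient `beta`, and the reward value below are illustrative assumptions, not the lecture's actual numbers.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions given as lists of probabilities."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Frozen reference policy (the "past self") and the current fine-tuned policy
# over a toy 3-token vocabulary. Values are made up for illustration.
reference = [0.5, 0.3, 0.2]
policy    = [0.4, 0.4, 0.2]

beta = 0.1    # KL penalty coefficient (hypothetical value)
reward = 1.0  # reward-model score for a sampled response (illustrative)

# RLHF-style penalized objective: the reward minus a penalty for
# drifting away from the reference model.
penalized_reward = reward - beta * kl_divergence(policy, reference)
```

A policy identical to its reference incurs zero penalty; the further it drifts, the larger the KL term and the bigger the deduction from the reward.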
A wonderful lecturer. I would love every lecture given in this class!
Thanks for the content! It's good to watch something that leaves room for thought, or just asks questions you can answer for yourself. It's rare to see someone from the audience who isn't dead scared of AI, because they know what this is all about. Even though it gets closer and closer to human-like behavior, it's still not there. In order to replicate it, we would have to fully understand our own consciousness. At the moment, I think it's like training pets: no matter how many neural connections there are, all of them can combine and exchange data, but the judgment and reason are not there. Even our domestic animals sometimes surprise us, but it's not because of choice and rationality; it's the same basic patterns and training. So I think it's too soon to talk about ethical problems like humans not being needed anymore because of automation. We might have gotten one step further in productivity. If we get to the point where humans are actually not needed in their jobs, we'll have bigger problems than that.
@ScientificHustle