The biggest problem I've encountered with AI is its tendency to mirror me and tell me what it thinks I want to hear.
I'm not really worried about AI; I'm more worried about the people who treat it as if it were perfect, all-knowing, and incapable of mistakes. I work with people like that...
I’m a developer and I’d say AI hinders me as much as it helps me. It can speed me up by spitting out the tedious, simple code, but when it comes to more complex design patterns, or designing larger pieces of software, it craps the bed. I often find that I’ll go back and forth with it for an hour only to realize my old-fashioned flesh-brain is still superior.
Computers don't kick our ass at math ... they only do that at calculation ... no computer has ever come up with an original mathematical function.
I like your channel and I’m a subscriber, but this seemed confused to me. It started out saying LLMs have hit a wall because of some equation that was never explained, and ended quickly with a sudden suggestion that AI/AGI will be a threat to jobs and humanity.
This was, ironically, like a ChatGPT answer: fluffy and cool on the outside, but it crumbles on the inside under closer inspection.
Beyond the super high production value and Vox-style editing, this was just a mess of a script with no real point.
This is like watching a smart person try his absolute best to understand everything about a complicated topic in a short amount of time, then get burnt out and just sort of wing it by half-assing the rest of the research for the sake of completing the project.
No Veritasium was harmed in the making of this video😂
TLDW: basically, it's a really long video just to say "There isn't enough data on the internet to feed the models."
This is a weird video; it seems well researched in some ways and poorly researched in others. The weirdest bit is that he says AI has hit a wall, then concludes that AI hasn't hit a wall!??
Honestly I feel like I didn't really learn anything from this.
The video reminds me of Isaac Asimov's story "The Last Question" and its refrain: "There is as yet insufficient data for a meaningful answer."
I asked "deepseek" how many Rs the word "strawberry" has (with activated reasoning), after counting the Rs about 6x and doubting itself, it came to the conclusion that it has 3 Rs, but it wasn't sure about that in the reasoning... It thought that the word only has 2, but then counted it 6x and got 3... very weird seeing this thing "think"...
0:11 I am using this equation in my business. Thank you.
I feel like this video was false advertising. It shows an equation implying there's a mathematical reason for LLMs to have a limit, but then, instead of a mathematical limit, it actually shows a logistical one: we ran out of easily processable data. And then come the usual worries about jobs, ethics, and AI overlords, which are not mathematical or engineering considerations at all.
Imagine you put on a blindfold and I spend 30 days describing my house to you. Miraculously, you stand up and begin navigating my house blindfolded. There are a few things you bump into and don't know where to find. So we go back to training and spend 1,000 years describing what my house is like, and this time your ability has improved greatly! It's almost like magic: you navigate my house extremely well. Except... there's something missing, isn't there? I ask you if you can find my kid's drawing that she taped to the fridge. You walk to the fridge, reach up your hand where you expect the drawing to be, and grab... not a drawing, but a utility bill (another likely thing to be taped to a fridge). Did we not train enough? What is the solution to this kind of problem? 10,000 more years of training? 100,000? A mental model can only get you so far. At some point, for you to be able to navigate my house the same way I do, we are going to have to take your blindfold off.
After watching a sufficient portion of this presentation, it became obvious the publisher puts flashy style ahead of content. Lots of wasted frames here.
"We're not a science channel, we won't go deep" proceeds to say a bunch of gobbledegook