@john01112

"i built my own AI"
Looks inside
ChatGPT Wrapper

@Mlogan11

High score for addressing technical points and issues in a 10-minute vid and keeping it flowing and interesting.

@jessoteric

to be fair, "the CEO doesn't know how their product works but other random people do" is hardly a surprising situation

@jeffreysoreff9588

Loved the analogy that without interpretability, we can't tell the difference between the LLM equivalent of a paint chip and a dented wing. Well done!

@bgaimur

"I know how AI works" says the guy who asked Chat GPT how AI works

@darkally1235

For much of history, science and engineering were empirical: practitioners didn't really understand why something worked the way it did, but they determined ways to reproduce a given behaviour. Then they developed models and equations that fit those observations, and through further experimentation and refinement reached the point where they "understood" how it worked (or at least the limits of their models and equations). So this situation isn't unusual, just less common in modern times, and in the computing field in particular.

@TheJango2106

The Golden Gate Bridge thing is amazing, and it immediately made me think of how a company could manipulate an AI model to advertise and mislead people.

@0000000Fritz

Your skits are super funny, but this was so informative! Keep doing both! Your content is great :)

@stevecarter8810

How does a car work?
Engineer: Hmm, it's complicated... (considers electromagnetism, battery chemistry, battery management science and the software needed for that, the physics of suspension in a corner...)
Lay person: What's complicated? You press the go pedal and wiggle the wheel, jeez

@TheTrienco

I think the title says it perfectly. We know "how" it works, because we designed it. "Why" it works is the big question. "Let's create a really complex neural network and train it by adjusting the weights until the training inputs produce the desired training outputs. Then do it on a massive scale and see what happens. Huh... that actually worked way better than expected. Let's see what else we can train this magical black box to do."

It's not so much a black box because we can't look inside (we can), but because looking inside is pointless at that insane scale. Staring at literally billions of weights doesn't help you understand how or why it does what it does, in the same way that looking at a brain doesn't help you understand why someone thinks the way they do, even if you could look closely enough to see every single neuron firing.
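
A minimal toy sketch of that "adjust the weights until the training inputs produce the desired outputs" loop, for anyone curious: a two-layer numpy network learning XOR by gradient descent. This is only an illustration of the idea, not how production LLMs are trained, and every name in it is made up.

import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR, the classic task a single linear layer can't learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights: the "black box" starts out as pure noise.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: inputs -> hidden layer -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight a little in the direction
    # that shrinks the gap between the output and the desired output.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 3))   # close to [0, 1, 1, 0]: the weights now "know" XOR

Scale that same loop up to billions of weights and trillions of training tokens and you get the situation the comment describes: the procedure is simple, but the resulting pile of numbers isn't something you can read meaning out of.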

@Kabluey2011

I like your long-form content. The skits are nice, but I also enjoy hearing the meat and potatoes of software development.

@TheNefastor

Essential rule of engineering: if something works but you don't know how, then you don't know that it works. You just have something that LOOKS like it works. That is what disasters are made of.

@ToyKeeper

I made SLMs (small language models) a quarter century ago, and it's pretty easy to understand how they work.  But LLMs?  Black fkn magic.  Mad scientists threw math at a wall until something stuck.

@Sapeidra

1:39 No matrix multiplication :(

@grex2595

I love the last point you were making about knowing whether an AI is making a mistake or intentionally misleading. It reminds me of an AI ethics video I saw some time back explaining why you can't just put a kill switch on an AI and solve all the problems. Kind of incredible that we're now at a point where the AI is sufficiently intelligent that we need to start treating it as an intelligence, even if it isn't really one.

@American_Language

numbers can do some crazy things y'know?

@fijit4

I think the "we don't understand AI" is similar to the difference between the fields of psychology vs. neuroscience. Psychology tells us how a complete human interacts with the world, like what choices they will make in a given environment or how behaviors are shaped by expeirence. Neuroscience, on the otherhand, tells you what specific circuits result in specific output. We know what regions of the brain are responsible for speech because we see what happens when that area is damaged, but that doesn't actually tell us what neurons are connected to each other in what pattern to allow someone the muscle control to say the word "apple"

@andrey2604

It is very profitable for companies developing LLMs to normalise the mindset of "no one knows how it works, and it's OK not to understand it." Good for stock prices. I'm not saying LLMs are shit, just that understanding the Thing drops that godlike vibe and stops the worshipping, which drops its value to... you know, real value.

@HoodieCatGameDev

The editing on your long-form content is getting better! It used to be a little overwhelming, but you nailed it in this video.

@andythedishwasher1117

What I'm noticing is that the business community has historically refused to appreciate and respond to the way people naturally learn and develop in situations where its bottom line is at stake. It typically tries to reprogram human nature to fit the needs of profit, and it has tended to just throw more humans at the problem when it isn't solved fast enough. Sadly, that's the approach that has been adopted so far in the AI boom as well. I'm just hoping the bottom lines will suffer enough that companies start paying us to do it right soon.