pls continue to make vids, they tickle my brain
I think AGI would be demonstrated by a combination of tests across diverse areas, like assembling IKEA furniture, making coffee, passing exams, and showing that it understands humor, for example.
I think the new definition is whether it can do most jobs a person can.
I came here to learn what AGI is. I'm leaving still without a clue what AGI is... lol 😂
Solid video as always, Will. Those "goalposts" you mentioned are likely to keep moving, simply because the very concept of "general intelligence" is so relative. Currently there is no known endpoint; it's a mere thought exercise (which makes me distrust it as the new marketing buzzword of 2024).

However, I do fear that some unknown number of years down the line, the conversation may well evolve past the point of technology being compared to general human intelligence. The term "general intelligence" could come to refer to the capacities of the automated robots that humankind will come into contact with every day. That is to say, at the risk of redundancy, new AI technology in manufacturing will no longer be designed to surpass the capacity of a 20-year human veteran of the industry; it will be designed to surpass/augment the theoretical AGI you discuss in this video (which we may reach sooner rather than later). A possible trajectory is that the field will constantly expand and build upon itself, rendering the philosophical discussion of human intelligence moot.

This is not to say I'm fearing some cartoonish robot revolution that dominates humankind. But we're already worried about TC going down. How much is the human oversight of advanced automation really worth? Especially with all the buzz surrounding quantum computing and the concept of "Industry 6.0". It raises, of course, a slew of political/economic implications that I will not attempt to touch with a forty-foot pole.

But I can't help but marvel at the impact of LLMs alone. I know a startling number of college freshmen/sophomores who have submitted essays entirely written by ChatGPT. We're watching the devaluing of fine arts degrees right before our eyes. It's not unreasonable to think that we will gradually accept AGI (whatever that will come to mean) at our own ultimate expense in the workforce. Our workloads will decrease, but so will our salaries.
Thanks for talking to me like a human! Very cool insight and thoughts.
Thanks for explaining something I don't need to understand.
Just found ur channel like an hour ago and have been binging!! Great content! Any advice for a junior-mid level PHP dev?
I'd say AGI is reached once it can reason and 'think' like 99-100% of humans: being able to study and recall, solve math equations, think abstractly, etc. AGI would mean every current job could be replaced by it. Once this is achieved, the next question is... when ASI? Imo 'AI will just be a tool' is something we want to believe because the change will be too huge. ASI would outperform 100% of humans. Idk when that happens, but I doubt it's gonna take that long.
AGI will be the key. Doctors, and especially judges in law, will be much more efficient and unbiased.
Intelligence is not what defines humanity; it is desire. Neural networks are the key to human intelligence. Once we have the hardware to support the same number of neurons as our brain, that machine will be as intelligent as us (if not more). However, the key ingredient is desire. All animals have desire; we have desire and a unique intelligence that surpassed all other species, hence we dominate. However, we have no clue where that desire comes from (that's why most people have faith). If an AI came to my house and made a cup of coffee because it felt like having a coffee, then we are in serious trouble... Until then, AI is just a super tool that will enable us to make more of our desires come true...
Maybe we should break AGI into areas. For example, if a computer can see medical patients and its diagnoses are better than those of 99% of doctors, then we could say that we have AGI in medical diagnostics, but still not have AGI in surgery. We could then say no one should be doing human diagnosing, because that problem has been taken care of, but research and better diagnostic inputs should still be invested in.
Reading some parts of Pfeifer & Scheier, Understanding Intelligence, and Russell & Norvig, Artificial Intelligence (for a shitty AI course), I think intelligence was touched on, and also by the dictionary definition of intelligence: anything that can acquire knowledge and apply it is intelligent. That is, birds, cats, dolphins, and potentially insects. A robot which follows instructions isn't really an intelligence, but a robot vacuum arguably is (by the same standard as insects). GPT-4 is intelligent by any standard.

I'm not sure if these books touch on AGI, but by my meaning, AGI refers to the ability to learn any task (which kind of requires embodiment). Right now GPT-4 likely can't learn every task; even given an embodiment, there may be some tasks which are too complicated to learn. Take StarCraft: even if you translate the game into text, it might never be able to understand the game. One reason I could think of is that the parameters are too few... But of course, maybe I'm wrong, and GPT-4 could learn anything. Having a layer that changes input into something GPT can read doesn't break the terms of AGI, even if there's no real way to translate (then it's a translation issue), since the same can be required of a human (e.g. a blind person may need better audio input to play, say, Street Fighter).

How well an AGI performs at these tasks is somewhat arbitrary, but say you could put the lower limit at that of a mentally challenged human, or IQ < 60-80, whereas ASI (super) would require IQ > 140-150, or even IQ > 200. But that's just my argument. Though I'd say intelligence is more well defined than life, or consciousness.
The next real test is: it's AGI if it can define AGI. Jokes aside, I think most would call an intelligent autonomous agent "AGI" if it can learn continuously, recognize its own mistakes, and improve. Show an intelligent human how to assemble IKEA furniture and they will be able to. It doesn't need to be embodied, but it should have an internal world model, which (probably) needs to be learned either in a virtual 3D space or in the real world as a robot. I'm not aware of a human case where someone has no senses at all; one sense is enough to be able to learn a world model. An LLM, for example, has none of our five senses to learn our world model with, and inferring it from language alone is either impossible (because language lacks the nuances) or too inefficient, so it's not surprising that LLMs consistently fail at spatial reasoning tests. A lot of real-world problems require an understanding of 3D space, though, and there is even more implicit information contained in this understanding.
"The greatest shortcoming of the human race is our inability to understand the exponential function." - Prof. Al Bartlett
Man, ask AI what AGI is. The answer is clear.
AI took my SWE job last year.