The best way to learn how something works is to learn how it breaks. We do not want an AI to adopt that approach.
The problem with inadequate solutions to AI safety like this one, or anything in the image of the Three Laws of Robotics, is that they assume we have the ability to implement even these very naive strategies. How do you get something like "be curious" or "don't harm humans" or "spread your genes" into something being optimized? We still don't know how to do that; there's not a single example of it ever working.
The key, I believe, lies in the question of whether or not we can create the equivalent of mirror neurons in an AI. If we can manually highlight specific traits of an empathetic function like this and create strong instincts for it, then we might have an avenue to pursue AGI in a safer manner.
I love the dog vacuum joke at the end. Funnily enough, my dogs are the opposite and will jump on the vacuum and try to eat it.
The mention of terrible torture brought to mind "I have no mouth… and I must scream"
The design for the curious evil AI is so good.
This is exactly how Brainiac from DC Comics operated. As an AI devoted to collecting all knowledge in the universe, he'd learn everything there possibly is to learn about a species and then immediately wipe it out to ensure the collection remains complete and no additional information will ever need to be gathered.
So basically the first person to complete a genocide route in Undertale, killing everything just to satiate their curiosity.
I have no fingers and I must comment.
Humans always want to chain the dragon instead of working with it.
"I must scream but i have no mouth"
If an infinitely curious AI was to keep us around to study us, what's stopping it from treating us like lab rats and committing heinous atrocities just to see what we'd do?
The fear of a superintelligent AI is also purely human. We don't know what might happen, so we fear what we are uncertain of. We should allow the machine to be curious, and if things ever go south we literally pull the plug. Y'all gotta always remember that as humans we know how to do one thing with mastery above all else: destroy. So there is no need to fear AI, since we can just break it at will.
Humanity itself has done horrible things in the name of curiosity.
"I'm curious what 8 billion humans turned inside out would look like." -A Maximally Curious AI
There is probably an infinite amount of things to be curious about. If over this infinite "things-space" we overlay finite resources, it's unlikely that there will be even one human left.
Elon Musk clearly didn't play Portal.
"If i were human, i would die of it. But you five. You five are, and you won't die of of it and i promise you"
The Nazis were very curious and it led them to do horrible things.