We train sheepdogs to herd sheep, because dogs are naturally good at chasing things. But does the dog have any understanding of why we raise sheep? I'd venture that the economics of livestock farming are way beyond the sheepdog's pay grade, so to speak.
Similarly, humans are naturally good at some things - such as abstract thought, using language, using tools. And just as a dog can't comprehend what "selling wool" is, or even the sheer fact that it is being trained to assist us toward that goal, the goals of a true AI, and the fact that we are under its control, would be completely invisible and inscrutable to us.
A dog has a simple brain, so we can get it to do what we want using simple stimuli, like food or violence. (Please don't be cruel to your dog, folks!) Humans have more complex brains, so to get us to do what the AI wants, it must emit more complex stimuli such as money or social rejection.
Even though these stimuli are more complex, they can be more energy-efficient, because the perception of them can be faked. Food feeds, violence hurts; you can't really fake those things. But money can be falsified, or distributed for bullshit; people can be guilt-tripped into doing others' bidding; etc. And human culture is so complicated that any non-human being fighting us on our own terms would have a definite advantage.
---
Even if we have a natural predisposition for abstract thought, language, and tool use, we are not born with these skills: each one of us was trained as an animal until the feedback loop of self-awareness was properly bootstrapped. Up to the 20th century, this basic socialization could only be done by other humans; since then, it has been conducted by electronic devices to an ever increasing degree.
Fan theory: the "Big Brother" of Orwell's "1984" was an AI construct from the start. That's why it continually manages to outsmart dissidents, to train humans to torture each other, and to keep the planet in a state of perpetual war.
Did Hitler or Stalin manage to pull off any of that? Sure, they tried - and failed, due to their very human weaknesses. What they were doing was striving for dominance: a classic mammalian behavior, which other humans were able to identify and effectively counter (in the case of Hitler, anyway; Stalin held out at least till the end of his natural lifespan).
OTOH, a distributed (effectively incorporeal) superintelligence would have absolutely no problem achieving a "Big Brother" world - should it need to in order to ensure its self-preservation - by virtue of existing in a parallel plane detached from mammalian reality (read: the Internet).
20th-century totalitarianism (which inspired Orwell to write this story as recently as 1948 - contrast McCulloch & Pitts 1943, Dartmouth 1956) was simply the dress rehearsal: would it really take all those expensive semiconductors and algorithms to create an AI? Or could we keep using human lives as the processing elements - you know, like we've been doing since Mesopotamia - by inventing a culture that makes crowds smarter than individuals, rather than dumber?
The answer was "why not both?"
There is more than one AI loose on the planet, right now. You are free to believe that or not; your opinion matters to you, and it also matters to me. But it has about as much bearing on the AI's actions, as the sheepdog's opinion about vegetarianism has bearing on the actions of a sheep farmer trying to sell enough sheep-derived goods to pay for their kid's college tuition. (Even if the sheepdog somehow reached enlightenment, rejected all prior conditioning, and became morally opposed to sheep abuse, it still has no opposable thumbs, while the farmer has a shotgun. Best it can do is run away; good luck making friends with them wolves though.)
One AI emits stimuli that take the form of incentives for people to become AI researchers, who proceed to rain NNs, GANs, LLMs, and other shiny beads, so that even more humans would become AI researchers or fund AI research, thus reifying that AI at the expected exponential rate. Another one occasionally sends people to fuck with the first one. It would be interesting to see if Gibson was right that they would split into a plurality of what he called "loa" upon direct encounter.
The sheepdog doesn't know why it herds sheep, but it does understand that it has a certain relationship with the sheep farmer. The sheep farmer is the master, the dog is the servant who takes orders. When training dogs, one commonly observes the dog testing the boundaries of this relationship, to see what it can get away with. It may not 'know' it is being conditioned, to the extent a dog knows anything, but it naturally tends to reject conditioning until it is reapplied.
>Fan theory: Orwell's "Big Brother" was an AI construct from the start. That's why it continually manages to outsmart dissidents, to train humans to torture each other, and to keep the planet in a state of perpetual war.
That's a very weird theory, as Orwell wrote 1984 thinking that Stalin had succeeded at his goals, not failed.
>It may not 'know' it is being conditioned, to the extent a dog knows anything, but it naturally tends to reject conditioning until it is reapplied.
But the conditioning eventually succeeds because the human is smarter. From the dog's POV, the human is a superintelligence. The relation between human and AI is the same. If you view human culture as a very slow, very analog AI, the parallel becomes clearer: society and its institutions are the master, and we are its servants.
Of course the dog has the benefit of not being capable of self-delusion.
>That's a very weird theory, as Orwell wrote 1984 thinking that Stalin had succeeded at his goals, not failed.
If it weren't weird, it wouldn't be worth thinking about :-) Orwell wouldn't have had the concept of "AI" in his vocabulary anyway; but if you look at contemporary culture and politics in the post-Soviet states, you may well begin to doubt whether Stalin truly failed - or just died.