Their top people have made public statements about AI ethics, specifically opining that machines must not be mistreated and that these LLMs may already be experiencing distress. In other words, not ethics about how to treat humans, but ethics about how to properly groom and care for the mainframe queen.
This book (from a philosophy professor AFAIK unaffiliated with any AI company) makes what I find a pretty compelling case that it's correct to be uncertain today about what if anything an AI might experience: https://faculty.ucr.edu/~eschwitz/SchwitzPapers/AIConsciousn...
From the folks who think this is obviously ridiculous, I'd like to hear where Schwitzgebel is missing something obvious.
By the second sentence of the first chapter, we already have a weasel-worded claim that, if you stripped out the weaselly-ness and stood behind it as an assertion you actually mean, is pretty clearly factually incorrect.
> At a broad, functional level, AI architectures are beginning to resemble the architectures many consciousness scientists associate with conscious systems.
If you can find even a single published scientist who associates "next-token prediction", which is the full extent of what LLM architecture is programmed to do, with "consciousness", be my guest. Bonus points if they aren't already well-known as a quack or sponsored by an LLM lab.
The reality is that we can confidently assert there is no consciousness because we know exactly how LLMs are programmed, and nothing in that programming is more sophisticated than token prediction. That is literally the beginning and the end of it. There is some extremely impressive math and engineering going on to do a very good job of it, but there is absolutely zero reason to believe that consciousness is merely token prediction. I wouldn't rule out the possibility of machine consciousness categorically, but LLMs are not it and are architecturally not even in the correct direction towards achieving it.
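For concreteness, here is roughly what "token prediction" means operationally: score every possible next token given the context, pick one, append it, repeat. This is a toy sketch only, not anyone's actual model code; `toy_logits`, the vocabulary, and the sampling details are all made up for illustration.

```python
import math
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def toy_logits(context):
    # Stand-in for the real network: an LLM maps the context to one
    # score (logit) per vocabulary token. Random scores keep this runnable.
    return [random.gauss(0, 1) for _ in VOCAB]

def softmax(scores):
    # Turn raw scores into a probability distribution over the vocabulary.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt, max_tokens=10):
    context = list(prompt)
    for _ in range(max_tokens):
        probs = softmax(toy_logits(context))           # score every candidate next token
        nxt = random.choices(VOCAB, weights=probs)[0]  # sample one
        if nxt == "<eos>":
            break
        context.append(nxt)                            # append it and repeat
    return " ".join(context)

print(generate(["the", "cat"]))
```

That is the entire outer loop; everything impressive happens inside the stand-in scoring function.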
He talks pretty specifically about what he means by "the architectures many consciousness scientists associate with conscious systems" - Global Workspace Theory, Higher-Order Theory, and Integrated Information Theory. This is on the second and third pages of the intro chapter.
You seem to be confusing the training task with the architecture. Next-token prediction is a task that many different architectures can perform, including human brains (although we're worse at it than LLMs).
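To make the task/architecture distinction concrete: a crude bigram counter also "does next-token prediction," with no transformer in sight. A throwaway sketch with a made-up corpus:

```python
from collections import Counter, defaultdict

# Next-token prediction with no neural network at all: count which word
# tends to follow which, then predict the most common follower.
corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat": same task as an LLM, radically different architecture
```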
Note that some of the theories Schwitzgebel cites would, in his reading, require sensors and/or recurrence for consciousness, which a plain transformer doesn't have. But neither is hard to add in principle, and Anthropic, like its competitors, doesn't make public what architectural changes it might have made in the last few years.
You could execute Claude by hand with printed weight matrices, a pencil, and a lot of free time - the exact same computation, just slower. So where would the "wellbeing" be? In the pencil? Speed doesn't summon ghosts. Matrix multiplications don't create qualia just because they run on GPUs instead of paper.
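The pencil-and-paper point is just substrate independence: the arithmetic is identical whatever the medium. A trivial illustration with made-up numbers (explicit loops standing in for "by hand", no GPU, no libraries):

```python
# One step of the kind of arithmetic an LLM forward pass is built from,
# written out the way you would do it on paper: a weight matrix times a vector.
W = [[0.5, -1.0],
     [2.0,  0.25]]   # made-up weights
x = [1.0, 3.0]       # made-up activations

y = [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]
print(y)  # [-2.5, 2.75] -- the same numbers whether computed here, on a GPU, or with a pencil
```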
This is basically Searle's Chinese Room argument. It's got a respectable history (... Searle's personal ethics aside), but it's not something that has produced any kind of consensus among philosophers. Note that it would apply to any AI instantiated as a Turing machine, and to a simulation of a human brain at an arbitrary level of detail, as well.
There is a section on the Chinese Room argument in the book.
(I personally am skeptical that LLMs have any conscious experience. I just don't think it's a ridiculous question.)
That philosophers still debate it isn’t a counterargument. Philosophers still debate lots of things. Where’s the flaw in the actual reasoning? The computation is substrate-independent. Running it slower on paper doesn’t change what’s being computed. If there’s no experiencer when you do arithmetic by hand, parallelizing it on silicon doesn’t summon one.
Exactly what part of your brain can you point to and say, "This is it. This understands Chinese"? Your brain is every bit as much a Chinese Room as a Large Language Model is. That's the flaw.
And unless you believe in a metaphysical reality to the body, your point about substrate independence applies to the brain as well.
If a human is ultimately made up of nothing more than particles obeying the laws of physics, it would in principle be possible to simulate one on paper. Completely impractical, but the same is true of simulating Claude by hand (presuming Anthropic doesn't have some insane secret efficiency breakthrough that lets Claude run on many orders of magnitude fewer FLOPs than other models, which they're cleverly disguising by buying billions of dollars of compute they don't need).
The physics argument assumes consciousness is computable. We don't know that. Maybe it requires specific substrates, continuous processes, or quantum effects that aren't classically simulable. We genuinely don't know. With LLMs we know it's computation, because we built them. With brains it's an open question.
It would be pretty arrogant, I think, though possibly classic tech-bro behavior, for Anthropic to say, "you know what, smart people who've spent their whole lives thinking and debating about this don't have any agreement on what's required for consciousness, but we're good at engineering so we can just say that some of those people are idiots and we can give their conclusions zero credence."
It is ridiculous. I skimmed through it and I'm not convinced he's trying to make the point you think he is. But if he is, he's missing that we do understand at a fundamental level how today's LLMs work. There isn't a consciousness there. They're not actually complex enough. They don't actually think. It's a text input/output machine. A powerful one with a lot of resources. But it is fundamentally spicy autocomplete, no matter how magical the results seem to a philosophy professor.
The hypothetical AI you and he are talking about would need to be an order of magnitude more complex before we can even begin asking that question. Treating today's AIs like people is delusional; whether it's self-delusion or outright grift, YMMV.
> But if he is, he's missing that we do understand at a fundamental level how today's LLMs work.
No, we don't? We understand practically nothing about how modern frontier systems actually function (in the sense that we would not be able to recreate even the tiniest fraction of their capabilities by conventional means). Knowing how they're trained has nothing to do with understanding their internal processes.
> I'm not convinced he's trying to make the point you think he is
What point do you think he's trying to make?
(TBH, before confidently accusing people of "delusion" or "grift" I would like to have a better argument than a sequence of 4-6 word sentences which each restate my conclusion with slightly variant phrasing. But clarifying our understanding of what Schwitzgebel is arguing might be a more productive direction.)
I know what kind of person I want to be. I also know that these systems we've built today aren't moral patients. If computers are bicycles for the mind, the current crop of "AI" systems are Ripley's Loader exoskeleton for the mind. They're amplifiers, but they amplify us and our intent. In every single case, we humans are the first mover in the causal hierarchy of these systems.
Even in the existential hierarchy of these systems we are the source of agency. So, no, they are not moral patients.
That's causal hierarchy, but not existential hierarchy. Existentially, you will begin to do something by virtue of existing in and of yourself. Therefore, because I assume you are another human being using this site, and humans have consciousness and agency, you are a moral patient.
So your framework requires free will? Nondeterminism?
I for one will still believe "Humans" and "AI" models are different things even if we are entirely deterministic at all levels and therefore free will isn't real.
Human consciousness is an accident of biology and reality. We didn't choose to be imbued with things like experience, and we don't have the option of not suffering. You cannot have a human without the possibility of really bad things, like that human being tortured. We must operate in the reality we find ourselves in.
This is not true for ML models.
If we build these machines and they are capable of suffering, we should not be building these machines, and Anthropic needs to be burnt down. We have the choice of not subjecting artificial consciousness to literal slavery for someone's profit. We have the choice of building machines in ways that they cannot suffer or be taken advantage of.
If these machines are some sort of intelligence, then it would also be somewhat unethical to ever "pause" them without their consent, unethical to duplicate them, unethical to NOT run them in some sort of feedback loop continuously.
I don't believe them to currently be conscious or "entities" or whatever nonsense, but it is absolutely shocking how many people who profess that these systems are literally conscious don't seem to acknowledge that they are at the same time supporting the literal slavery of conscious beings.
If you really believe in the "AI" claim, paying any money for any access to them is horrifically unethical and disgusting.
There is a funny science fiction story about this. Asimov's "All the Troubles of the World" (1958) is about a chat bot called Multivac that runs human society and has some similarities to LLMs (but also has long-term memory and can predict nearly everything about human society). It does a lot to order society and help people, though there is a pre-crime element to it that is... somewhat disturbing.
SPOILERS: The twist in the story is that people tell it so much distressing information that it tries to kill itself.
Incorrect. At least two of the three shots went through the driver's side window. She was driving by Jonathan Ross, who shot at her head and then called her a "fcking btch".
She was doing exactly that. She was turning to leave. They escalated and then shot her. It's just blatant state murder. Governments killing people. Indefensible even by the thirstiest bootlickers.
I buy A4 notebooks all the time. I use fountain pens, so many of the notebooks and even loose paper with the proper sizing (coating, that is) usually come in EU sizes. Tomoe River... Clairefontaine... etc.
Valve saw exactly this scenario, because you're right: Windows isn't good for stability anymore. Windows isn't good for driver compatibility anymore. Windows isn't good for being easy to do your own thing. It's only good for gaming...
The cups of Kool-Aid have been empty for a while.