Posted for vandermude:
Terry:
You are absolutely right – the point you raise about meaning gets to the heart of what it is to think, let alone think at an intelligent level. I have known about this for decades and have thought a lot about exactly what you are saying. It’s just that when you bat out a response to an issue, you can’t always cover all of the bases, and sometimes an issue gets overlooked. Thank you for bringing this up. Here is my response.
I have been working in the field of AI since 1973, and I have been dismayed at the lack of progress in creating a true Artificial Intelligence (what is known now as an Artificial General Intelligence – AGI), despite the fact that we have created many “smart” machines. But the metaphor I use is that current AI systems, including ChatGPT and its cousins, are like Vermeer’s Girl with a Pearl Earring: they are very lifelike, but they are not the girl. Despite this, I keep the faith, but I have to admit that sometimes I don’t feel that much different from a fundamentalist who is waiting on the Second Coming of Christ.
I have known since the late 1970s that a true AGI cannot be constructed unless it has a rich emotional life. That is the basis of meaning.
Now, one of the problems I have, which you astutely pointed out, is that I was using the word meaning in one of its two common usages, and the other common usage is actually the more important one. The usage I had in mind is on the level of Quo Vadis: where are you going? What do you want to make of your life? But the more fundamental problem is to make sense and meaning out of the individual sensations, perceptions and feelings that make up both our mental and physical reality instant by instant.
This dual meaning problem is similar to the dual usage of the word consciousness that Stan Klein and I would debate. Quite often, he would consider consciousness as that awareness of the world around us and our inner needs that is shared by all living things, all the way down to the simplest one-celled bacteria. But when I was thinking of consciousness, I was thinking of that self-awareness that I believe only we as humans have, and only about two-thirds of the day, and only starting after about three years of age.
But to get back to meaning. I was referring to meaning in the sense of: what am I to make of my life? Or, if I am a baby AGI just in the process of becoming self-aware, what do I want to be when I grow up? The realization that the universe as a whole is meaningless leads many people, when contemplating this question, into a morass of existential angst. Other people pass on this question and live in a kind of uncomfortable agnostic state about the whole issue. I believe that once an AGI comes to be, this is an issue it will have to address.
But the sense of meaning that you are discussing is more fundamental. It is prior to the creation of an AGI. Here is my take on the issue.
So, to revisit your summary of the issue:
“[Meaning] is, instead, something bestowed on things and events by living things that care about surviving. Meaning depends on valences ascribed to things and circumstances due to the pleasantness or unpleasantness of the physical sensations they provoke. For living things meaning, at its most fundamental level, is relative to the threat to or support for their existence. This meaning is an interpretation of circumstances made by an organism according to inherited instinctive valences or valences acquired through past experiences. The organism then experiences the outcome of the interpretation (supportive, threatening, or neutral) as physically felt sensations triggered by the interpretation.”
My first observation is that this question of meaning is something that Terry Deacon brings up in Incomplete Nature and Jeremy Sherman discusses in Neither Ghost Nor Machine. They get at the notion that once life has arisen from non-life, physical processes acquired meaning in terms of these living things, or as you say “meaning, at its most fundamental level, is relative to the threat to or support for their existence”. They point out that this is where teleology comes from: “natural selection is indeed a thoroughly non-teleological process. Yet the specific organic processes which this account ignores, and on which it depends, are inextricably bound up with teleological concepts, such as adaptation, function, information and so forth.” [Incomplete Nature]
Where you use the somewhat vague term “valence”, I want to make the case for explicitly using the word “emotion”.
https://en.wikipedia.org/wiki/Valence_(psychology)
“Valence, or hedonic tone, is the affects’ property specifying the intrinsic attractiveness/”good[ness]” (positive valence) or averseness/”bad[ness]” (negative valence) of an object, event, or situation.[1][2] The term also categorises emotions.[2]”
So the valence is based on the inherited emotional responses built into the organism, which change and develop through past experience.
But it all comes down to emotion. Note that when I talk about morality, “good and evil”, Hume points out that it is based on the emotions, not logic, and he is probably correct.
To make the case that meaning (in the sense of making sense of our senses) is fundamentally based on the emotions, consider that neural nets and our brains are able to process sight and sound. Both a convolutional neural network and the optic nerve trigger on black versus white and perform the task of edge detection. And neural networks are as capable of frequency detection as the cochlea.
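To make the edge-detection point concrete, here is a little sketch in Python with NumPy (my own toy example, not anything from a real visual system): a single difference kernel, the basic building block of a convolutional layer, responds only where black meets white.

```python
import numpy as np

# A tiny 1-D "retina": dark on the left, bright on the right.
signal = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)

# A simple difference kernel, roughly analogous to a center-surround
# receptive field or one learned filter in a convolutional layer.
kernel = np.array([1, 0, -1], dtype=float)

# The response is zero on uniform regions and nonzero exactly
# where dark meets bright, i.e. at the edge.
response = np.convolve(signal, kernel, mode="valid")
print(response)   # [0. 0. 1. 1. 0. 0.] -- the edge lights up
```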
Now some of the fundamental emotions are things like fear or hunger. Therefore the sight and sound of a lion take their meaning from fear. And the sight and sound of an apple falling from a tree can be meaningful in terms of hunger.
But what about the sight of a rose or the sound of a song? There are other emotions such as beauty or love or just plain old curiosity.
On the other hand, there are many things that are sensed but never processed or understood because they do not trigger any emotional response. I would claim that the focus of attention is emotionally based, and that any part of our sensory field that has no emotional valence is ignored, like the famous selective attention test, where people do not notice a person in a gorilla suit walking among a group of basketball players.
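Here is a toy sketch of that claim (the valences, names and the threshold are all invented for illustration): percepts get an emotional valence from inherited or learned associations, and anything whose valence is too close to zero simply never gets attended to, gorilla suit included.

```python
# Emotionally gated attention, as a toy: percepts with no emotional
# valence never make it past the gate. Numbers are hypothetical.
LEARNED_VALENCES = {
    "lion roar":    -0.9,   # fear (inherited)
    "apple":        +0.4,   # hunger
    "rose":         +0.6,   # beauty / curiosity
    "gorilla suit":  0.0,   # no valence assigned -> invisible to attention
}

ATTENTION_THRESHOLD = 0.2

def attend(percepts):
    """Return only the percepts that carry enough emotional charge."""
    return [p for p in percepts
            if abs(LEARNED_VALENCES.get(p, 0.0)) >= ATTENTION_THRESHOLD]

print(attend(["lion roar", "apple", "gorilla suit", "rose"]))
# -> ['lion roar', 'apple', 'rose']   (the gorilla suit is never noticed)
```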
So here is how emotions are instantiated in a computer program. First of all, the emotions of a computer will be completely different and alien to a human. This was pointed out by Nagel in the article “What Is It Like to Be a Bat?” Each living thing is different.
https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F
A computer, for example, would have no sensation of hunger in the analog sense of a blood sugar level. It might, though, have a great fear of power loss. Its sensations would be mediated through its sensors. This might also mean that different AGIs would have different internal “conscious mental states” (to use Nagel’s term) depending on their input peripherals and their internal processing units. But most computers would probably have in common the sensations of having a memory unit that is empty or filled, or a task queue that has many tasks competing for resources.
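To make that concrete, here is a purely hypothetical sketch (the sensor names, formulas and thresholds are all mine) of how a machine’s “emotions” could be derived from the internal conditions a computer can actually sense, rather than from a blood sugar level:

```python
from dataclasses import dataclass

@dataclass
class MachineSensations:
    battery_fraction: float      # 0.0 = about to lose power, 1.0 = full
    memory_used_fraction: float  # how full the memory unit is
    queued_tasks: int            # tasks competing for resources

def emotional_state(s: MachineSensations) -> dict:
    """Map raw internal sensations to machine-specific 'emotions'.
    The names and formulas are invented for illustration only."""
    return {
        # fear of power loss grows sharply as the battery empties
        "fear_of_power_loss": max(0.0, 1.0 - s.battery_fraction) ** 2,
        # a feeling of fullness or pressure from memory usage
        "memory_pressure": s.memory_used_fraction,
        # stress from many tasks contending for attention
        "task_stress": min(1.0, s.queued_tasks / 100.0),
    }

print(emotional_state(MachineSensations(0.05, 0.8, 250)))
# -> a very frightened, rather stressed machine
```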
Imagining a computer having emotions is not all that far-fetched. Personally, I have considered most computers up till now (including chatbots) to have at most as complex a set of conscious mental states as that of an alligator. They still have a long way to go until they reach the level of mammals, let alone humans. The thing about the current Large Language Models is that they are great chameleons.
One gedanken experiment that I did back in the 1980s was to imagine the mental state of my MS-DOS computer. And one emotion that was so obvious was that it really did not like getting an “out of paper” signal from its printer. It hated that. This would make it react immediately and repeatedly until I pulled that thorn out of its printer peripheral paw.
But, then again, we must always keep in mind the injunction: “Do not anthropomorphise computers. They hate it when you do that.”
Nevertheless, he persisted. I can’t help it.
Note that I ended earlier with a reference to Blum and Blum. It turns out that they, too, consider the part that the emotions play in consciousness. So their model of the focus of control that binds the computational processes of an AGI together involves the processing of emotional states in a big way.
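Just to give a flavor of the idea – this is not the Blums’ actual Conscious Turing Machine (their paper gives the formal definitions), only a toy sketch of my own – here is what it looks like when chunks carrying more emotional weight win the competition for the global focus:

```python
# A toy competition for a single global focus, loosely inspired by the
# idea (not the formal definition) that emotionally weighted chunks win
# access to the shared stage. Processor names and weights are invented.

def compete(chunks):
    """Each chunk is (emotional_weight, source, content).
    The chunk with the highest emotional weight is broadcast."""
    weight, source, content = max(chunks, key=lambda c: c[0])
    return f"FOCUS <- [{source}] {content} (weight {weight})"

chunks = [
    (0.2, "vision",       "a rose in the corner of the room"),
    (0.9, "power sensor",  "battery critically low"),
    (0.4, "task queue",    "250 jobs waiting"),
]
print(compete(chunks))
# -> FOCUS <- [power sensor] battery critically low (weight 0.9)
```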
If the referenced paper is too technical, I would suggest one of the videos of Lenore or Manuel giving a talk on their work.
From 2018:
Manuel Blum: Towards a Conscious AI
Blum mentions an anecdote from Oliver Sacks that illustrates the relationship between emotions and consciousness.
Or more recent (2021 – I haven’t seen these):
Lenore Blum – A Theoretical Computer Science Perspective on Consciousness
Manuel Blum – Insights from the Conscious Turing Machine