Using ChatGPT
Tagged: ChatGPT, AI, meaning
This topic has 5 replies, 2 voices, and was last updated 5 days, 19 hours ago by tfindlay.
September 17, 2023 at 10:03 am #29234
tfindlay (Keymaster)

ChatGPT is an artificial intelligence program. You ask it questions and it composes “intelligent” answers drawing on a wide range of sources.
https://chat.openai.com/auth/login?iss=https%3A%2F%2Fauth0.openai.com%2F
What are your impressions of this tool?
September 18, 2023 at 7:32 pm #29239
tfindlay (Keymaster)

Posted on behalf of Tony Van der Mude
I have used ChatGPT at work and found it very helpful. I used it to write a piece of my Python application when I got “writer’s block”. The code had a bug in it, but it got me unstuck.
I plan to use it as the Expert in a version of an expert system that I developed. The system already uses Bidirectional Encoder Representations from Transformers (BERT) to do semantic matching in a Natural Language Processing application that I wrote. BERT is similar to a Generative Pre-trained Transformer (the GPT in ChatGPT).
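For the curious, here is a minimal sketch of what BERT-based semantic matching looks like in Python. This is only an illustration, not the actual system; the model name, the mean-pooling step, and the cosine-similarity comparison are all assumptions on my part:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence: str) -> torch.Tensor:
    # Mean-pool BERT's final hidden layer into a single sentence vector.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, num_tokens, 768)
    return hidden.mean(dim=1).squeeze(0)

def similarity(a: str, b: str) -> float:
    # Cosine similarity: values near 1.0 mean a close semantic match.
    return torch.cosine_similarity(embed(a), embed(b), dim=0).item()

print(similarity("The claim was denied.", "Coverage was refused."))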
Note that we have an internally hosted copy of ChatGPT. I work at Humana health insurance, and we don’t want to risk submitting Personal Health Information to a copy of ChatGPT outside of our environment.
Here is my opinion of ChatGPT and other Large Language Models (with commentary from others):
I fully expect that as much as 80% of software jobs will be taken over by Large Language Models in 5 or 10 years – all you do is ask ChatGPT to write the code.
But I have never been a fan of neural nets since I first saw them in the 1980s. They will never become a truly general system – an Artificial General Intelligence (AGI). What they do is statistically mirror the whole of human intelligence, but they will never transcend it.
Here are some great articles that reflect my skepticism:
A Skeptical Take on the A.I. Revolution
The A.I. expert Gary Marcus asks: What if ChatGPT isn’t as intelligent as it seems?
“But they have no actual idea what they are saying or doing. It is bullshit. And I don’t mean bullshit as slang. I mean it in the classic philosophical definition by Harry Frankfurt. It is content that has no real relationship to the truth.”
https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai
There Is No A.I.
There are ways of controlling the new technology—but first we have to stop mythologizing it.
By Jaron Lanier
“If the new tech isn’t true artificial intelligence, then what is it? In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration… A program like OpenAI’s GPT-4, which can write sentences to order, is something like a version of Wikipedia that includes much more data, mashed together using statistics.”
Chomsky’s view is very insightful:
Noam Chomsky: The False Promise of ChatGPT
“But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.”
“Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience. While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”
“True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism)…Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.”
“In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.”
Chomsky is right on this. This outlook, though, is surprisingly hopeful. A true AGI will have a moral sense coupled with a rich emotional life because that is the basis for meaning. An intelligence, even a non-human intelligence like a dog, interprets the world meaningfully through its emotions.
Currently, the overwhelming viewpoints in mass media are of AI as machines, even machines that make paper clips – Nick Bostrom’s example of an AI gone amok.
https://en.wikipedia.org/wiki/Nick_Bostrom
https://en.wikipedia.org/wiki/Instrumental_convergence
“Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”
A system that is just goal-directed is no more than a machine. Bostrom is not describing a true intelligence. He is describing a very complex goal-seeking computer system. He is describing the current crop of AI, such as ChatGPT, and any other AI using the neural net deep-learning paradigm.
But a truly intelligent AI is not a machine. It would be intelligent enough to ask “why” and not proceed if the answer is insufficient – which gets to Chomsky’s point. A true AGI would be intelligent enough to ask how many paper clips are enough.
A better metaphor is this: to create a true AGI would be like raising a child. You teach it how to understand and learn, you let it search for meaning in the world, you give it a sense of wonder and beauty and love and fear and anger. And finally you let it eat from “the tree of the knowledge of good and evil.”
If you raise it right, you should have nothing more to fear from it than from your sons and daughters. Which, in this imperfect world, is not a guarantee. But, like every parent, you start from a position of hope.
Finally, if you want to see the way that a true AGI will be developed, look into the work of the great computer mathematicians Lenore and Manuel Blum:
A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine
Lenore Blum and Manuel Blum
https://www.pnas.org/doi/10.1073/pnas.2115934119
September 19, 2023 at 7:36 am #29240
vandermude (Participant)

Apropos this discussion, today’s (2023-09-19) cartoon from Saturday Morning Breakfast Cereal is fitting:
https://www.smbc-comics.com/comic/conscious-6
September 22, 2023 at 9:38 am #29274
tfindlay (Keymaster)

“A better metaphor is this: to create a true AGI would be like raising a child. You teach it how to understand and learn, you let it search for meaning in the world, you give it a sense of wonder and beauty and love and fear and anger.”
I don’t think meaning is something to be found by searching for it in the world. It is, instead, something bestowed on things and events by living things that care about surviving. Meaning depends on valences ascribed to things and circumstances due to the pleasantness or unpleasantness of the physical sensations they provoke. For living things meaning, at its most fundamental level, is relative to the threat to or support for their existence. This meaning is an interpretation of circumstances made by an organism according to inherited instinctive valences or valences acquired through past experiences. The organism then experiences the outcome of the interpretation (supportive, threatening, or neutral) as physically felt sensations triggered by the interpretation.
I wonder how all of this might be instantiated in programs. Can a program care about its survival (existence) if it is not alive to begin with?
September 22, 2023 at 8:42 pm #29278
vandermude (Participant)

Terry:
A quick observation. A computer program that is not conscious (or aware) will not care about its survival or existence. On the other hand, a true Artificial General Intelligence would be conscious both in the sense that it is aware of itself and its surroundings (which is true of every living thing from a bacterium on up) and in the sense of being self-aware the way we humans are after the age of three or so, during the times we are awake or are asleep and dreaming. So an AGI would be a software program that is truly alive. Which ChatGPT is not – it is neither conscious nor alive nor an AGI.
September 23, 2023 at 7:32 am #29281
tfindlay (Keymaster)

Posted for vandermude:
Terry:
You are absolutely right – the point you raise about meaning gets to the heart of what it is to think, let alone think at an intelligent level. I have known about this for decades and have thought a lot about exactly what you are saying. It’s just that when you bat out a response to an issue, you can’t always cover all of the bases and sometimes some issue gets forgotten. Thank you for bringing this up. Here is my response.
Having worked in the field of AI since 1973, I have been dismayed at the lack of progress in creating a true Artificial Intelligence (what is now known as an Artificial General Intelligence – AGI), despite the fact that we have created many “smart” machines. The metaphor I use is that current AI, including ChatGPT and its cousins, is like Vermeer’s Girl with a Pearl Earring: very lifelike, but not the girl. Despite this, I keep the faith, though I have to admit that sometimes I don’t feel much different from a fundamentalist waiting on the Second Coming of Christ.
I have known since the late 1970s that a true AGI cannot be constructed unless it has a rich emotional life. That is the basis of meaning.
Now, one of the problems, which you astutely pointed out, is that I was using the word meaning in one of its two common usages, and the other usage is actually more important. The usage I had in mind is on the level of Quo Vadis: where are you going? What do you want to make of your life? But the more fundamental problem is to make sense and meaning out of the individual sensations, perceptions and feelings that make up both our mental and physical reality instant by instant.
This dual meaning problem is similar to the dual usage of the word consciousness that Stan Klein and I would debate. Quite often, he would consider consciousness as that awareness of the world around us and our inner needs that is shared by all living things, all the way down to the simplest one-celled bacteria. But when I was thinking of consciousness, I was thinking of that self-awareness that I believe only we as humans have, and only about two-thirds of the day, and only starting after about three years of age.
But to get back to meaning. I was referring to meaning in the sense of what am I to make of my life? Or, if I am a baby AGI, just in the process of becoming self-aware, what do I want to be when I grow up? The realization that the universe as a whole is meaningless leads many people when contemplating this question into a morass of existential angst. Other people kind of pass on this question and live in a kind of uncomfortable agnostic state about the whole issue. I believe that once an AGI comes to be, this is an issue it will have to address.
But the sense of meaning that you are discussing is more fundamental. It is a priori to the creation of an AGI. Here is my take on the issue.
So, to revisit your summary of the issue:
“[Meaning] is, instead, something bestowed on things and events by living things that care about surviving. Meaning depends on valences ascribed to things and circumstances due to the pleasantness or unpleasantness of the physical sensations they provoke. For living things meaning, at its most fundamental level, is relative to the threat to or support for their existence. This meaning is an interpretation of circumstances made by an organism according to inherited instinctive valences or valences acquired through past experiences. The organism then experiences the outcome of the interpretation (supportive, threatening, or neutral) as physically felt sensations triggered by the interpretation.”
My first observation is that this question of meaning is something that Terry Deacon brings up in Incomplete Nature and Jeremy Sherman discusses in Neither Ghost Nor Machine. They get at the notion that once life has arisen from non-life, physical processes acquired meaning in terms of these living things, or as you say “meaning, at its most fundamental level, is relative to the threat to or support for their existence”. They point out that this is where teleology comes from: “natural selection is indeed a thoroughly non-teleological process. Yet the specific organic processes which this account ignores, and on which it depends, are inextricably bound up with teleological concepts, such as adaptation, function, information and so forth.” [Incomplete Nature]
I make the case that where you use the vague term “valence”, I want to explicitly use the word “emotion”.
https://en.wikipedia.org/wiki/Valence_(psychology)
“Valence, or hedonic tone, is the affects’ property specifying the intrinsic attractiveness/”good[ness]” (positive valence) or averseness/”bad[ness]” (negative valence) of an object, event, or situation.[1][2] The term also categorises emotions.[2]”
So the valence is based on the inherited emotional responses built into the organism, which change and develop through past experience.
But it all comes down to emotion. Note that when I talk morality, “good and evil” Hume points out that this is based on the emotions, not logic, and he is probably correct.
To make the case that meaning (in the sense of making sense of our senses) is fundamentally based on the emotions, consider that neural nets and our brains are able to process sight and sound. Both a convolutional neural network and the optic nerve trigger on black versus white and perform the task of edge detection. And neural networks are as capable of frequency detection as the cochlea.
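To make that concrete, here is a toy example (mine, not anything from the neuroscience literature): a single fixed convolution kernel, the same operation a convolutional layer learns, picking out the boundary in a tiny made-up black-and-white image:

import numpy as np
from scipy.signal import convolve2d

# A 5x5 "image": black on the left, white on the right (made-up data).
image = np.array([[0, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1]], dtype=float)

# Sobel kernel: responds where intensity changes from left to right.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = convolve2d(image, sobel_x, mode="valid")
print(edges)  # large magnitudes mark the black/white edge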
Now some of the fundamental emotions are things like fear or hunger. Therefore the sight and sound of a lion is derived from the senses based on fear. And the sight and sound of an apple falling from a tree can be meaningful in terms of hunger.
But what about the sight of a rose or the sound of a song? There are other emotions such as beauty or love or just plain old curiosity.
On the other hand, there are many things that are sensed but never processed or understood because they do not provoke any emotional response. I would claim that the focus of attention is emotionally based, and that any part of our sensory field that has no emotional valence is ignored – like the famous selective attention test, where people do not notice a person in a gorilla suit walking through a group of basketball players.
So here is how emotions might be instantiated in a computer program. First of all, the emotions of a computer will be completely different from and alien to a human’s. This was pointed out by Nagel in the article “What Is It Like to Be a Bat?”: each living thing is different.
https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F
A computer, for example, would have no sensation of hunger in the analog sense of a blood sugar level. It might, though, have a great fear of power loss. Its sensations would be mediated through its sensors. This might also mean that different AGIs would have different internal “conscious mental states” (to use Nagel’s term) depending on their input peripherals and internal processing units. But most computers would probably have in common the sensations of a memory unit that is empty or filled, or a task queue with many tasks competing for resources.
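As a toy rendering of that idea (entirely hypothetical, not a real affect model), one could map such internal states to signed valences and let the most negative one seize attention:

from dataclasses import dataclass

@dataclass
class MachineState:
    battery_fraction: float       # 0.0 = dead, 1.0 = fully charged
    memory_used_fraction: float   # 0.0 = empty, 1.0 = full
    queue_depth: int              # tasks waiting for resources

def valences(s: MachineState) -> dict[str, float]:
    # Negative values are "unpleasant"; magnitude sets their urgency.
    return {
        "fear_of_power_loss": -((1.0 - s.battery_fraction) ** 2),
        "memory_pressure": -max(0.0, s.memory_used_fraction - 0.8),
        "task_overload": -min(1.0, s.queue_depth / 100.0),
    }

def focus_of_attention(s: MachineState) -> str:
    # Attend to the most negatively valenced state, per the claim above
    # that attention is emotionally driven.
    v = valences(s)
    return min(v, key=v.get)

state = MachineState(battery_fraction=0.15, memory_used_fraction=0.9, queue_depth=12)
print(focus_of_attention(state))  # fear_of_power_loss wins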
Imagining a computer that has emotions is not all that far-fetched. Personally, I have considered most computers up till now (including chatbots) to have, at most, as complex a set of conscious mental states as an alligator. They still have a long way to go until they reach the level of mammals, let alone humans. The thing about the current Large Language Models is that they are great chameleons.
One gedanken experiment that I did back in the 1980s was to imagine the mental state of my MS-DOS computer. And one emotion that was so obvious was that it really did not like getting an “out of paper” signal from its printer. It hated that. This would make it react immediately and repeatedly until I pulled that thorn out of its printer peripheral paw.
But, then again, we must always keep in mind the injunction: “Do not anthropomorphise computers. They hate it when you do that.”
Nevertheless, he persisted. I can’t help it.
Note that I ended before with a reference to Blum and Blum. It turns out that they, too, consider the part that the emotions play in consciousness. Their model for the focus of control that binds the computational processes of an AGI together involves the processing of emotional states in a big way.
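Here is a drastically simplified toy of that competition idea (my sketch, not the Blums’ formal model): processors submit weighted chunks, and the most emotionally intense one wins the broadcast:

from dataclasses import dataclass

@dataclass
class Chunk:
    source: str    # which processor produced it
    content: str
    weight: float  # emotional intensity; the largest weight wins

def compete(chunks: list[Chunk]) -> Chunk:
    # Winner-take-all: the most intense chunk reaches "consciousness".
    return max(chunks, key=lambda c: c.weight)

def broadcast(winner: Chunk, processors: list[str]) -> None:
    # Every processor receives the conscious content.
    for p in processors:
        print(f"{p} receives: {winner.content!r} (from {winner.source})")

chunks = [
    Chunk("vision", "shape on the path ahead", 0.4),
    Chunk("power_monitor", "battery critically low", 0.9),
    Chunk("scheduler", "task queue backing up", 0.5),
]
broadcast(compete(chunks), ["vision", "power_monitor", "scheduler", "planner"])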
If the referenced paper is too technical, I would suggest one of the videos of Lenore or Manuel giving a talk on their work.
From 2018:
Manuel Blum: Towards a Conscious AI
Blum mentions an anecdote from Oliver Sacks that illustrates the relationship between emotions and consciousness.
Or more recent (2021 – I haven’t seen these):
Lenore Blum – A Theoretical Computer Science Perspective on Consciousness
Manuel Blum – Insights from the Conscious Turing Machine