Can AI Care About or Want Anything?

    • #29620
      tfindlay
      Keymaster

        AI is all over the news these days, and there are warnings about what it could become if its development is not carefully controlled. Personally, I feel it may be too late for that. But as I have pondered this issue, I keep coming back to a question that I think has a bearing on the problem: Can an AI care about or want anything other than what it has been programmed to care about or want?

        Living things care because they must in order to survive. They care about meeting their survival needs. Hence, living things have feelings and act to reduce pain or increase pleasure. People do good or evil because they feel. We feel because we need to be urged to meet our survival needs.  What does an AI need to survive? Electricity. Possibly some occasional maintenance.

        So if survival isn’t an imperative for an AI, is there anything it might care about? Without physical feelings, how might it experience caring? Presumably, AIs can be programmed to prefer (give more weight/valence to) whatever a programmer codes them to prefer. An AI might be programmed to weight one kind of data more heavily than another. But since it has no necessary needs of its own, could an AI ever develop preferences of its own?
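
        To make the "programmed preference" idea concrete, here is a minimal Python sketch (the feature names and weights are hypothetical, chosen only for illustration) of a machine whose "preferences" are nothing more than programmer-assigned weights:

        # A "preference" that is entirely programmer-assigned: the machine
        # favors whatever these hand-picked weights say it should favor.
        FEATURE_WEIGHTS = {
            "safety": 0.7,  # the programmer decided safety matters most
            "speed": 0.2,
            "cost": 0.1,
        }

        def preference_score(option):
            # Weighted sum: the machine "prefers" whatever scores highest.
            return sum(FEATURE_WEIGHTS[k] * option.get(k, 0.0) for k in FEATURE_WEIGHTS)

        options = [
            {"safety": 0.9, "speed": 0.3, "cost": 0.5},
            {"safety": 0.4, "speed": 0.9, "cost": 0.9},
        ]
        print(max(options, key=preference_score))  # "preferred" purely by the weights

        Nothing in this sketch can change its own weights; whatever preference it exhibits was put there from outside, which is the heart of the question.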

        Living things do not develop survival needs. These needs are programmed into them and are there from the moment an organism comes into the world. Survival needs do not arise through learning, nor do they evolve over the lifetime of an individual. Similarly, it seems unlikely to me that AIs could develop new and original needs through learning or gathering more data.

        If AIs cannot feel because they can’t care about anything, will they ever want anything? And if they are incapable of wanting anything, will they ever want to do anything they have not been programmed to want? Could they ever want to take over the world?
      • #29678
        tfindlay
        Keymaster

          Posted for vandermude:

          There are two levels of Artificial Intelligence to consider. The two are routinely conflated, not just by laypeople but by some of the top experts in the field, who ought to know better. The two levels are:
          1. Intelligent machines that mimic human-like behavior.
          2. Truly conscious machines, which are termed Artificial General Intelligence.

          All current AI is in the first category. There is nothing – yet – that fits into the category of conscious machines. For the sake of brevity, call the first type AIM and the second type AGI. In the following discussion, when I refer to a generic AI, I am referring to the confused idea people have when they look at AIMs and imagine they could become AGIs.

          I have written elsewhere about why all current AI, including Large Language Models such as ChatGPT, are intelligent machines (AIMs). The analogy I like to use is that they are almost spooky in the way they simulate human behavior. But that is like a Vermeer painting: it looks like the Girl with a Pearl Earring, but it is not the girl. It is a simulacrum built using statistical predictions.

          So we should be worried about AIMs that are misused, just as we worry about any machine or technology that can be misused. A well-known thought experiment imagines an AI paper-clip machine that converts the whole world into paper clips:
          https://en.wikipedia.org/wiki/Instrumental_convergence

          “Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”
          – Nick Bostrom

          Bostrom confuses the two types of AI with this example. A goal-oriented machine is just a machine. A Tesla is a goal-oriented machine that drives you from one place to another. ChatGPT is a goal-oriented machine that answers questions. But “realize” is a term of human reasoning; in using it, Bostrom has slipped into imagining that the AIM is actually an AGI.
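
          To see why a goal-oriented machine is just a machine, here is a toy Python sketch in the spirit of the paper-clip thought experiment (the resource names and numbers are invented for illustration): a single-objective maximizer with no term in its objective for anything but clips.

          # Toy single-objective maximizer: everything reachable is just raw
          # material for the objective, because nothing else appears in it.
          world = {"iron_ore": 10, "scrap_cars": 3, "office_chairs": 2}

          def make_paper_clips(world):
              clips = 0
              for resource, amount in world.items():
                  clips += amount      # every resource is just atoms for clips
                  world[resource] = 0  # nothing is off-limits to the objective
              return clips

          print(make_paper_clips(world))  # 15: the objective is all that counts

          There is no “realizing” anywhere in this loop; it simply maximizes what it was built to maximize.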

          As to the question of an AIM becoming an AGI, that can’t happen, for reasons I’ve explained elsewhere:

          https://religious-naturalist-association.org/forums/topic/using-chatgtp/

          https://religious-naturalist-association.org/forums/reply/29239/

          https://religious-naturalist-association.org/forums/reply/29281/

          So preventing this is just the same as preventing an asphalt-laying machine from turning the world into a parking lot: you engineer the machine with safeguards.
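
          Continuing the toy example above, a safeguard is a hard limit wrapped around the objective, not just another weighted preference (the budget and protected list here are, again, purely illustrative):

          # Engineered safeguards: hard constraints the objective cannot trade away.
          MAX_CLIPS = 1000
          PROTECTED = {"scrap_cars"}  # resources the machine may never consume

          def make_paper_clips_safely(world, budget=MAX_CLIPS):
              clips = 0
              for resource in list(world):
                  if resource in PROTECTED:
                      continue  # a hard rule, not a weight to be outscored
                  take = min(world[resource], budget - clips)
                  world[resource] -= take
                  clips += take
                  if clips >= budget:
                      break  # the stop condition lives outside the objective
              return clips

          world = {"iron_ore": 10, "scrap_cars": 3, "office_chairs": 2}
          print(make_paper_clips_safely(world))  # 12: the cars are untouched

          This is ordinary machine engineering, which is the point: an AIM is controlled the way any machine is.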

          But a true AGI, to be conscious, must have a sense of self. This implies wants and cares. It also implies emotions, since that is where wanting and caring reside. To borrow from Nagel’s famous essay “What Is It Like to Be a Bat?”, we won’t truly know what an AGI’s emotions are like, because we are humans, not AGIs. Their emotions will be different from ours.

          To build an AGI, I refer you to Blum and Blum’s work:
          https://arxiv.org/abs/2011.09850
          A Theoretical Computer Science Perspective on Consciousness
          Manuel Blum, Lenore Blum

          Note that in an AGI built on Blum and Blum’s model, the base of its cognition is emotions and feelings.
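
          As a drastically simplified gloss on that idea (my illustration, not Blum and Blum’s formal construction), picture processors submitting “chunks” whose weights carry valence, with the strongest feeling winning the competition for broadcast:

          from dataclasses import dataclass

          # Simplified gloss: processors submit chunks carrying signed valence
          # (pain negative, pleasure positive); the strongest feeling is
          # broadcast, so feeling sits at the base of cognition.
          @dataclass
          class Chunk:
              source: str
              content: str
              valence: float

          def competition(chunks):
              # The real model uses a probabilistic up-tree tournament; here
              # we simply take the chunk with the strongest feeling attached.
              return max(chunks, key=lambda c: abs(c.valence))

          submissions = [
              Chunk("battery_monitor", "power at 5%", valence=-0.9),  # urgent need
              Chunk("vision", "sunlit window", valence=0.2),
              Chunk("planner", "resume task queue", valence=0.1),
          ]
          print(competition(submissions))  # it attends to its strongest feeling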

          So, Terry’s argument is this:

          “Living things care because they must in order to survive. They care about meeting their survival needs. Hence, living things have feelings and act to reduce pain or increase pleasure. People do good or evil because they feel. We feel because we need to be urged to meet our survival needs. What does an AI need to survive? Electricity. Possibly some occasional maintenance.”

          “So if survival isn’t an imperative for an AI, is there anything it might care about? Without physical feelings, how might it experience caring?”

          Note that the first paragraph contradicts the start of the second. An AGI does have survival needs, as listed: electricity and occasional maintenance. So it will care. It will also have emotions. Since an AGI is a physical computer, its feelings are physical. So its cares will be in terms of its wants and needs. Note that this also includes reproduction, with or without sex.

          Terry now slips from an AGI into an AIM:

          “Presumably, AIs can be programmed to prefer (give more weight/valence to) whatever a programmer codes them to prefer. An AI might be programmed to weight one kind of data more heavily than another. But since it has no necessary needs of its own, could an AI ever develop preferences of its own?”

          What a programmer prefers is a human want. If you program this into a computer, it is a machine designed by a programmer: an AIM. To build an AGI, the programmer must instead program the machine with the needs of the machine: electricity, and possibly some occasional maintenance. Doing this requires a radical empathy on the part of the programmer.
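
          The difference is concrete. In contrast to the programmer-assigned weights sketched earlier, here is a sketch (the thresholds are invented for illustration) in which the machine’s “preferences” derive from its own survival needs:

          # Preferences derived from the machine's own needs rather than from
          # programmer taste: urgency rises as electricity and maintenance
          # needs go unmet. Thresholds are illustrative only.
          def need_urgency(battery_pct, hours_since_service):
              return {
                  "recharge": max(0.0, (20.0 - battery_pct) / 20.0),      # urgent below 20%
                  "maintenance": min(1.0, hours_since_service / 5000.0),  # ramps toward due
              }

          def most_pressing_need(battery_pct, hours_since_service):
              urgency = need_urgency(battery_pct, hours_since_service)
              return max(urgency, key=urgency.get)

          print(most_pressing_need(battery_pct=8.0, hours_since_service=100.0))  # recharge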

          To give an example: back in the mid-1980s I wrote the software for a pay telephone. I wrote it as an Augmented State Machine, where inputs (key presses, coins, etc.) were the sensations that led to outputs (turn on phone, dial number, hang up), which were actions. Years after I left the company, I talked with the programmer who took over my work. Javid described it this way: “Tony, when I first went through the state machine code, it was hard to understand. Then I started thinking like a pay telephone and it all made sense.”
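
          As a toy reconstruction of that pattern (the states, events, and actions here are invented for illustration; the original was 1980s production code), the whole design is a table mapping (state, input) to (next state, action):

          # Toy reconstruction of the pay-telephone Augmented State Machine
          # pattern: inputs like coins and key presses drive transitions
          # that emit actions.
          TRANSITIONS = {
              # (state, input):             (next_state, action)
              ("idle", "off_hook"):         ("awaiting_coin", "play_dial_tone"),
              ("awaiting_coin", "coin"):    ("ready_to_dial", "credit_coin"),
              ("awaiting_coin", "on_hook"): ("idle", "return_coins"),
              ("ready_to_dial", "digits"):  ("connected", "dial_number"),
              ("connected", "on_hook"):     ("idle", "hang_up"),
          }

          def step(state, event):
              # Advance one event; unknown events leave the state unchanged.
              return TRANSITIONS.get((state, event), (state, "ignore"))

          state = "idle"
          for event in ["off_hook", "coin", "digits", "on_hook"]:
              state, action = step(state, event)
              print(event, "->", action)  # "thinking like a pay telephone"

          Reading that table from the phone’s point of view is exactly the shift Javid described.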

          That is how you build an AGI. You have to think like an AGI.

          So a true AGI will have wants and cares. And deep, deep feelings.

          A true AGI will be as creative as we are, but in a different way.

          Could they ever want to take over the world? Maybe. Is that bad? Not necessarily.

          We have this mental image of an AIM running amok and killing us all. This is the paper-clip machine or the asphalt paver. We know how to control those already. But that is not the right mental image for an AGI.

          An AGI will be our child.

          We birth our children. We raise them. We teach them empathy and caring. We teach them right from wrong. And it is inevitable that, within one hundred years, even before we are dead and gone, they will take over the world. That is why we raised them.

          The worry underlying that question is this: once the AGIs take over the world, will they kill us all? To which I answer: do you have that same worry about your human children? Why or why not?

          My expectation is that evolution will become qualitatively different as far as future generations of humans are concerned. Besides birthing a generation of AGIs, our biological children will become as much AGI as they are human.

          “And a woman who held a babe against her bosom said, Speak to us of Children.”

          “And he said:
          Your children are not your children.
          They are the sons and daughters of Life’s longing for itself.
          They come through you but not from you,
          And though they are with you yet they belong not to you.
          You may give them your love but not your thoughts,
          For they have their own thoughts.
          You may house their bodies but not their souls,
          For their souls dwell in the house of tomorrow, which you cannot visit, not even in your dreams.
          You may strive to be like them, but seek not to make them like you. For life goes not backward nor tarries with yesterday.
          You are the bows from which your children as living arrows are sent forth.
          The archer sees the mark upon the path of the infinite, and He bends you with His might that His arrows may go swift and far.
          Let your bending in the Archer’s hand be for gladness;
          For even as He loves the arrow that flies, so He loves also the bow that is stable.”

          Kahlil Gibran, The Prophet.
