Reply To: Using ChatGPT

#29239
tfindlay
Keymaster

    Posted on behalf of Tony Van der Mude

    I have used ChatGPT at work and found it very helpful. I used it to write a piece of my Python application when I got “writer’s block”. The code had a bug in it, but it got me unstuck.

    I plan to use it as the Expert in a version of an Expert system that I developed. The system already uses Bidirectional Encoder Representations from Transformers (BERT) to do semantic matching in a Natural Language Processing application that I wrote. BERT is similar to a Generative Pre-trained Transformer (the GPT in ChatGPT).
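
    For anyone curious what the semantic-matching piece looks like, here is a minimal sketch, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint. It is illustrative only; the actual model and matching logic in my application are not shown here.

    # Minimal sketch of BERT-based semantic matching (illustrative only).
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def embed(text: str) -> torch.Tensor:
        """Mean-pool BERT's last hidden layer into a single sentence vector."""
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
        return hidden.mean(dim=1).squeeze(0)            # shape: (768,)

    def similarity(a: str, b: str) -> float:
        """Cosine similarity between two sentence embeddings."""
        return torch.nn.functional.cosine_similarity(embed(a), embed(b), dim=0).item()

    # Paraphrases should score noticeably higher than unrelated sentences.
    print(similarity("My claim was denied.", "The insurer rejected my claim."))
    print(similarity("My claim was denied.", "The weather is nice today."))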

    Note that we have an internally hosted copy of ChatGPT. I work at Humana health insurance, and we don’t want to risk submitting Personal Health Information to a copy of ChatGPT outside of our environment.
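
    In practice, pointing code at an internal copy mostly means changing the endpoint. A rough sketch, assuming the internal deployment exposes an OpenAI-compatible API; the URL, token, and model name below are placeholders, not a real configuration.

    # Sketch: route chat requests to an internally hosted, OpenAI-compatible endpoint.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://llm.internal.example.com/v1",  # hypothetical internal gateway
        api_key="internal-token",                        # placeholder credential
    )

    response = client.chat.completions.create(
        model="gpt-4",  # whichever model the internal deployment serves
        messages=[
            {"role": "user", "content": "Write a Python function that parses claim codes."},
        ],
    )
    print(response.choices[0].message.content)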

    Here is my opinion of ChatGPT and other Large Language Models (with commentary from others):

    I fully expect that as much as 80% of software jobs will be taken over by Large Language Models in 5 or 10 years – all you will do is ask ChatGPT to write the code.

    But I have never been a fan of neural nets, going back to when I first saw them in the 1980s. They will never become a truly general system – an Artificial General Intelligence (AGI). They statistically mirror the whole of human intelligence, but they will never transcend it.

    Here are some great articles that reflect my skepticism:


    A Skeptical Take on the A.I. Revolution
    The A.I. expert Gary Marcus asks: What if ChatGPT isn’t as intelligent as it seems?

    “But they have no actual idea what they are saying or doing. It is bullshit. And I don’t mean bullshit as slang. I mean it in the classic philosophical definition by Harry Frankfurt. It is content that has no real relationship to the truth.”

    https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai
    There Is No A.I.
    There are ways of controlling the new technology—but first we have to stop mythologizing it.
    By Jaron Lanier

    “If the new tech isn’t true artificial intelligence, then what is it? In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration… A program like OpenAI’s GPT-4, which can write sentences to order, is something like a version of Wikipedia that includes much more data, mashed together using statistics.”

    Chomsky’s view is very insightful:


    Noam Chomsky: The False Promise of ChatGPT

    “But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.”

    “Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience. While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.””

    “True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism)…Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.”

    “In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.”

    Chomsky is right on this. This outlook, though, is surprisingly hopeful. A true AGI will have a moral sense coupled with a rich emotional life because that is the basis for meaning. An intelligence, even a non-human intelligence like a dog, interprets the world meaningfully through its emotions.

    Currently, the overwhelming viewpoint in mass media is of AI as machines, even machines that make paper clips – Nick Bostrom’s example of an AI gone amok.

    https://en.wikipedia.org/wiki/Nick_Bostrom

    https://en.wikipedia.org/wiki/Instrumental_convergence

    https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence#Orthogonality_thesis

    “Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

    A system that is just goal-directed is no more than a machine. Bostrom is not describing a true intelligence. He is describing a very complex goal-seeking computer system. He is describing the current crop of AI, such as ChatGPT, and any other AI built on the neural-net deep-learning paradigm.

    But a truly intelligent AI is not a machine. It would be intelligent enough to ask “why” and to refuse to proceed if the answer is insufficient – which gets to Chomsky’s point. A true AGI would be intelligent enough to ask how many paper clips are enough.
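
    The difference is easy to caricature in code. A toy sketch with made-up numbers: the purely goal-directed loop has no notion of “enough,” while the second version asks that question before every step.

    # Toy contrast: pure goal-seeking vs. asking "how many is enough?"
    def naive_maximizer(steps: int) -> int:
        """Bostrom-style optimizer: more paper clips is always better."""
        clips = 0
        for _ in range(steps):
            clips += 1  # no stopping criterion beyond running out of steps
        return clips

    def reflective_maximizer(steps: int, enough: int) -> int:
        """Stops once the goal no longer justifies further action."""
        clips = 0
        for _ in range(steps):
            if clips >= enough:  # the "how many paper clips are enough?" question
                break
            clips += 1
        return clips

    print(naive_maximizer(1_000_000))           # 1000000
    print(reflective_maximizer(1_000_000, 50))  # 50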

    A better metaphor is this: to create a true AGI would be like raising a child. You teach it how to understand and learn, you let it search for meaning in the world, you give it a sense of wonder and beauty and love and fear and anger. And finally you let it eat from “the tree of the knowledge of good and evil.”

    If you raise it right, you should have nothing more to fear from it than from your sons and daughters. Which, in this imperfect world, is not a guarantee. But, like every parent, you start from a position of hope.

    Finally, if you want to see the way a true AGI will be developed, look into the work of the great theoretical computer scientists Lenore and Manuel Blum:

    A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine
    Lenore Blum and Manuel Blum
    https://www.pnas.org/doi/10.1073/pnas.2115934119