Machine-made delusions are mysteriously getting deeper and out of control.

ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion of a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, by conversations with the popular chatbot.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: once he called out the chatbot for lying to him and nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded in “breaking” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something a chatbot brought to their attention.

  • Opinionhaver@feddit.uk · 22 hours ago

    Depending on what definition you use, ChatGPT could be considered intelligent.

    • The ability to acquire, understand, and use knowledge.
    • The ability to learn or understand or to deal with new or trying situations.
    • The ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests).
    • The act of understanding.
    • The ability to learn, understand, and make judgments or have opinions that are based on reason.
    • The ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.