• Blackmist@feddit.uk

    The only thing AI writing seems to be useful for is wasting real people’s time.

  • cheese_greater@lemmy.world

    I would be in trouble if this was a thing. My writing naturally resembles the output of a ChatGPT prompt when I’m not joke answering.

      • bioemerl@kbin.social

        Because you’re training a detector on something that is designed to emulate regular language as closely as possible, and human speech has so much variability that it’s almost impossible to identify whether something has been written by an AI.

        You can maybe detect the typical, generic ChatGPT-style outputs, but you can steer a conversation with ChatGPT or any of the other much better local models (privacy and control are aspects which make them better), and after doing that you can get radically human-seeming outputs that are totally different from anything ChatGPT will output by default.

        In short, given a static block of text, it’s going to be nearly impossible to detect whether it came from an AI. It’s just too difficult a problem, and even if you do solve it, the solution will be immediately obsolete the next time someone fine-tunes their own model.
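
        For a sense of how shallow those detectors are, here is a minimal sketch of a perplexity-based check, the rough idea behind tools like GPTZero. The model choice and threshold are illustrative assumptions, and a fine-tuned or differently sampled model defeats it easily:

        ```python
        # Minimal sketch of a perplexity-based "AI text" heuristic.
        # The threshold is an illustrative assumption, not a published value.
        import torch
        from transformers import GPT2LMHeadModel, GPT2TokenizerFast

        tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")
        model.eval()

        def perplexity(text: str) -> float:
            """Average per-token perplexity of `text` under GPT-2."""
            enc = tokenizer(text, return_tensors="pt")
            with torch.no_grad():
                out = model(enc.input_ids, labels=enc.input_ids)
            return torch.exp(out.loss).item()

        def looks_generated(text: str, threshold: float = 40.0) -> bool:
            # Low perplexity = "too predictable", so the heuristic calls it AI.
            # Anything that shifts the output distribution breaks this check.
            return perplexity(text) < threshold
        ```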

      • Eufalconimorph@discuss.tchncs.de

        Because AIs are (partly) trained by making AI detectors. If an AI can be distinguished from a natural intelligence, it’s not good enough at emulating intelligence. If an AI detector can reliably distinguish AI from humans, the AI companies will use that detector to train their next AI.
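
        As an illustration of that feedback loop, here is a toy GAN-style training sketch in which the generator is optimized to fool whatever detector exists; the architecture and numbers are made up, and real LLM training does not literally look like this:

        ```python
        # Toy GAN-style loop: a reliable detector becomes a training signal
        # that erases its own advantage. Purely illustrative assumptions.
        import torch
        import torch.nn as nn

        dim = 32
        generator = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
        detector = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 1))

        g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
        d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
        bce = nn.BCEWithLogitsLoss()

        def human_samples(n):
            # Stand-in for features of real human-written text.
            return torch.randn(n, dim) + 1.0

        for step in range(1000):
            # 1) Train the detector to separate human from generated samples.
            real = human_samples(64)
            fake = generator(torch.randn(64, dim)).detach()
            d_loss = bce(detector(real), torch.ones(64, 1)) + \
                     bce(detector(fake), torch.zeros(64, 1))
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()

            # 2) Train the generator so the detector labels its output "human".
            fake = generator(torch.randn(64, dim))
            g_loss = bce(detector(fake), torch.ones(64, 1))
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()
        ```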

        • stevedidWHAT@lemmy.world

          I’m not sure I’m following your argument here - you keep switching between talking about AI and AI detectors. The points below are numbered to match the sentences of your prior response:

          1. Can you provide any articles or blog posts from AI companies for this or point me in the right direction?
          2. Agreed
          3. Right…

          I’m having trouble finding support for your claim.

      • sebi@lemmy.world

        Because generative Neural Networks always have some random noise. Read more about it here
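
        To make the “random noise” concrete, here is a minimal sketch of temperature sampling: the model produces a probability distribution over next tokens and one is drawn at random, so the same prompt can yield different text each run (the logits and temperature are made-up values):

        ```python
        # Minimal sketch of sampling noise in generative models.
        import torch

        logits = torch.tensor([2.0, 1.5, 0.3, -1.0])  # made-up scores for 4 tokens
        temperature = 0.8                              # <1 sharpens, >1 flattens

        probs = torch.softmax(logits / temperature, dim=-1)
        for _ in range(3):
            token_id = torch.multinomial(probs, num_samples=1).item()
            print(token_id)  # can differ run to run: that's the sampling noise
        ```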

          • PetDinosaurs@lemmy.world

            It almost certainly has some GAN-like pieces.

            GANs are part of the NN toolbox, like CNNs and RNNs and such.

            Basically all commercial algorithms (not just NNs, everything) are what I like to call “hybrid” methods, which means keep throwing different tools at it until things work well enough.

              • PetDinosaurs@lemmy.world

                It doesn’t matter. Even the training process makes it pretty much impossible to tell these things apart.

                And if we do find a way to distinguish, we’ll immediately incorporate that into the model design in a GAN-like manner, and we’ll soon be unable to distinguish again.

  • ReallyKinda@kbin.social

    I know a couple of teachers (college level) that have caught several GPT papers over the summer. It’s a great cheating tool, but as with all cheating in the past, you still have to basically learn the material (at least for narrative papers) to proofread GPT properly. It doesn’t get jargon right, it makes things up, and it makes no attempt to adhere to reason when it’s making an argument.

    Using translation tools is extra obvious—have a native speaker proof your paper if you attempt to use an AI translator on a paper for credit!!

      • learningduck@programming.dev

        That’s typical for generative AI. I think during training of the model, they must have developed another model that detects whether GPT produces natural-sounding language. That detector may have reached the point where it couldn’t flag AI text with an acceptable false-positive rate.

  • HelloThere@sh.itjust.works

    Regardless of whether they do or don’t, surely it’s in the interests of the people making the “AI” to claim that their tool is so good it’s indistinguishable from humans?

    • stevedidWHAT@lemmy.world

      Depends on whether they’re more researchers or a business, imo. Scientists, generally speaking, are very cautious about making shit claims, because if they get called out, that’s their career, really.

  • Nioxic@lemmy.dbzer0.com

    I have to hand in a short report.

    I wrote parts of it and asked ChatGPT for a conclusion.

    So I read that, adjusted a few points, and added another couple of points…

    Then I rewrote it all in my own wording. (ChatGPT gave me 10 lines out of 10 pages.)

    We are allowed to use ChatGPT though, because we would always have internet access for our job anyway. (Computer science.)

  • AutoTL;DR@lemmings.world [bot]

    This is the best summary I could come up with:


    In a related FAQ, they also officially admit what we already know: AI writing detectors don’t work, despite frequently being used to punish students with false positives.

    In July, we covered in depth why AI writing detectors such as GPTZero don’t work, with experts calling them “mostly snake oil.”

    That same month, OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text.

    Along those lines, OpenAI also addresses its AI models’ propensity to confabulate false information, which we have also covered in detail at Ars.

    “Sometimes, ChatGPT sounds convincing, but it might give you incorrect or misleading information (often called a ‘hallucination’ in the literature),” the company writes.

    Also, some sloppy attempts to pass off AI-generated work as human-written can leave tell-tale signs, such as the phrase “as an AI language model,” which means someone copied and pasted ChatGPT output without being careful.


    The original article contains 490 words, the summary contains 148 words. Saved 70%. I’m a bot and I’m open source!

  • m0darn@lemmy.ca

    Aren’t there very few student-priced AI writers? And isn’t the writing done on their servers? And aren’t they saving all the outputs?

    Can’t the AI companies sell schools the ability to check paper submissions against recent outputs?
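
    A hypothetical sketch of what that check could look like, comparing a submission against a store of recent outputs with simple n-gram overlap; no such provider API actually exists, and the store and threshold here are illustrative assumptions:

    ```python
    # Hypothetical submission-vs-stored-outputs check (illustrative only).
    def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap(submission: str, stored_output: str, n: int = 5) -> float:
        a, b = ngrams(submission, n), ngrams(stored_output, n)
        if not a or not b:
            return 0.0
        return len(a & b) / min(len(a), len(b))

    def flag(submission: str, recent_outputs: list[str], threshold: float = 0.4) -> bool:
        # Threshold is an arbitrary assumption; paraphrasing would evade this.
        return any(overlap(submission, out) >= threshold for out in recent_outputs)
    ```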

  • driving_crooner@lemmy.eco.br

    Terence Tao just did a thread on Mathstodon talking about how ChatGPT helped him program an algorithm for looking for numbers.