• FaceDeer@fedia.io · 24 hours ago

That being said, it still reeks of “CEO speak,” and of trying to find a place to shove AI in.

    I don’t see how this is “shoved in.” Wales identified a situation where Wikipedia’s existing non-AI process doesn’t work well and then realized that adding AI assistance could improve it.

    • brucethemoose@lemmy.world · 23 hours ago (edited)

      Neither did Wales. Hence, the next part of the article:

      For example, the response suggested the article cite a source that isn’t included in the draft article, and rely on Harvard Business School press releases for other citations, despite Wikipedia policies explicitly defining press releases as non-independent sources that cannot help prove notability, a basic requirement for Wikipedia articles.

      Editors also found that the ChatGPT-generated response Wales shared “has no idea what the difference between” some of these basic Wikipedia policies is, like notability (WP:N), verifiability (WP:V), and properly representing minority and more widely held views on subjects in an article (WP:WEIGHT).

      “Something to take into consideration is how newcomers will interpret those answers. If they believe the LLM advice accurately reflects our policies, and it is wrong/inaccurate even 5% of the time, they will learn a skewed version of our policies and might reproduce the unhelpful advice on other pages,” one editor said.

      It doesn’t mean the original process isn’t problematic, or that it can’t be helpfully augmented with some kind of LLM-generated supplement. But this is a poster child for a troublesome AI implementation: a general-purpose LLM needs context it isn’t given (but which the reader assumes it has), hallucinations have knock-on effects, and even Wikipedia’s founder seemingly missed such errors.

      Don’t mistake me for being blanket anti-AI; clearly it’s a tool Wikipedia can use. But the scope has to be narrow, and the problem specific.
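      To make the “narrow scope, specific problem” point concrete: the two failures quoted above (a suggested source that isn’t in the draft, and press releases offered as proof of notability) are checkable without any LLM at all. A minimal sketch of such a deterministic pre-filter, with entirely hypothetical names and domain markers (nothing Wikipedia actually runs):

```python
# Hypothetical pre-filter: before surfacing LLM citation advice to a
# newcomer, check each suggested source against the draft's actual
# source list and flag likely press releases, which Wikipedia policy
# treats as non-independent and unable to prove notability (WP:N).

# Illustrative domain markers only; a real list would be much longer.
PRESS_RELEASE_MARKERS = ("prnewswire.com", "businesswire.com", "/press-release")

def vet_suggestions(suggested, draft_sources):
    """Split suggested citation URLs into (usable, flagged-with-reason)."""
    draft = {s.lower() for s in draft_sources}
    ok, flagged = [], []
    for url in suggested:
        u = url.lower()
        if u not in draft:
            # The LLM "cited" something the draft never references.
            flagged.append((url, "not cited in the draft (possible hallucination)"))
        elif any(marker in u for marker in PRESS_RELEASE_MARKERS):
            flagged.append((url, "press release: non-independent source"))
        else:
            ok.append(url)
    return ok, flagged
```

      A check like this catches exactly the class of error the editors found, and does so every time rather than 95% of the time; the LLM, if used at all, would only phrase the explanation.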