  • Along the same lines, another problem with GenAI is the unpredictable "drifting". For a recent example: Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. The original article is paywalled, but it is also available here.
    – dxiv
    Commented Jul 20, 2023 at 18:29
  • @dxiv I would suspect GPT was guessing in both cases, and the guess just happened to shift over time, so it's not really an indication of anything. I don't see how a GPT model could determine whether an arbitrary number is prime other than by memorizing it, or by memorizing common surface patterns (e.g., it ends in an odd digit other than 5). Commented Jul 25, 2023 at 5:51
  • @user253751 Who knows? There was this back in March: ChatGPT Gets Its “Wolfram Superpowers”! But that's precisely the point: we don't know, and we shouldn't blindly trust what we don't know. Someday an AI may be able to give meaningful answers in general, and math answers in particular, while also explaining its step-by-step train of thought. But we are not there yet, not by a very long shot.
    – dxiv
    Commented Jul 25, 2023 at 6:20
  • @dxiv An addition: GPT can often recite algorithms from memory and then execute the steps it just recited, which is pretty cool, but there's no way for it to execute an algorithm without reciting it. If they manage to get that Wolfram connection to work well, it will greatly expand GPT's capabilities, although it will still be a bullshit generator at its core. Commented Jul 25, 2023 at 6:22
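The contrast drawn in these comments, between pattern-matching on a number's surface form and actually executing a primality-testing algorithm, can be made concrete. A minimal sketch (my own illustration, not from any of the commenters): the digit heuristic mentioned above next to plain trial division, showing where the heuristic fails.

```python
def passes_digit_heuristic(n: int) -> bool:
    """The surface pattern mentioned above: primes greater than 5
    must end in 1, 3, 7, or 9. Necessary, but far from sufficient."""
    return n % 10 in (1, 3, 7, 9)

def is_prime(n: int) -> bool:
    """Trial division: actually executing an algorithm step by step,
    rather than recalling a memorized pattern."""
    if n < 2:
        return False
    if n < 4:          # 2 and 3 are prime
        return True
    if n % 2 == 0:
        return False
    i = 3
    while i * i <= n:  # only need divisors up to sqrt(n)
        if n % i == 0:
            return False
        i += 2
    return True
```

For example, 91 ends in 1, so it passes the digit heuristic, yet trial division finds 91 = 7 × 13, so it is composite. A model relying only on memorized surface patterns would have no way to catch such cases.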