
I recently asked this meta question about recommending ChatGPT for fixing typos. The consensus was a very clear "recommending ChatGPT for typos is a bad idea."

However, in this main question, someone posted the following comment:

What about using ChatGPT to get started? With some experience with asking questions and with requesting improvements of code if the by LLMs provided code does not work you will be then able to let them write the code for you ...

With the policy as written, I would probably interpret this as allowed, because OP didn't use GenAI to write the comment. My meta question isn't exactly applicable either; this question wasn't asking for simple debugging but for a brand-new program from scratch.1

But in my eyes, leaving that comment is still a bad idea for all the reasons in both the official policy and the community answer to the meta question. Not only will GenAI likely give a bad answer, it could encourage both OP and people who see the question to make GenAI a central part of their workflow, which will just multiply the problems.

Put differently, it seems weird to me that SO won't let people offer pure AI solutions, but they will let them tell askers to go create their own pure AI solutions, which would probably be just as bad or worse.

That leads me to:

  1. Is this comment in violation of the AI policy?
  2. Should the AI policy more explicitly ban comments and answers whose sole function is to direct people towards GenAI?
  3. Should #2 become general SO guidance, even if it is not official?

1 Which is obviously a problem, but one unrelated to this post.

  •
    You can just flag these comments as "No longer needed", they're somewhat in the same vein as "Have you tried Google?". Perhaps the connotation is less rude, but only marginally so.
    – Erik A
    Commented Jul 3 at 7:46
  •
    note: if someone posts this as an answer and not as a comment, see also meta.stackoverflow.com/q/425071/11107541
    – starball
    Commented Jul 3 at 8:50
  • I don't even understand what the comment is saying, maybe someone can explain it. Commented Jul 3 at 15:31

2 Answers


It is in violation of comment usage. This is "sage" advice from person to person, not commentary on a post.

Some banter in comments is condoned (as in, you don't get the rule book thrown at you) because it's better in the comments than in a question or answer. This leniency exists because comments can be unmade almost as easily as they are made, by flagging them.

The AI policy need not even apply, the comment was already out of scope just by its very nature. Flag as "no longer needed", and on with the rest of the day.

  • "It is in violation of comment usage." -- citation needed. I can only find a list of things "not recommended", not expressly forbidden (which would seem a prerequisite for a violation to happen).
    – Dan Mašek
    Commented Jul 3 at 11:03
  •
@dan It's a variation of "what have you tried?". We don't allow comments like this. Other examples are "have you tried doing it yourself?" and "did you search on Google?".
    – Dharman Mod
    Commented Jul 3 at 11:17

Recommending GenAI does not violate GenAI policy, neither in the letter nor the spirit.

The GenAI rule is clear in that content from GenAI is banned. Writing content about GenAI, be it positive or negative, is not banned.

The spirit and origin of the GenAI ban is to keep unreliable GenAI content out of the knowledge repository because it is not feasible for curators to tell bad GenAI content from good content at scale. Suggesting to personally use GenAI avoids both the problem of putting GenAI content into the knowledge repository and of identifying GenAI content.

More generally, GenAI is here and it is here to stay. Even with all its rough edges and pitfalls, it is a tool that many people likely have to use sooner or later. It is a tool that many people do use successfully and productively.
It is not up to SO to enshrine what tools are deemed proper and safe for personal use.

  •
"it is a tool that many people likely have to use sooner or later." I'd argue that anything GenAI can do, humans can do better. If you have to use AI, something is wrong higher on the food chain. "It is a tool that many people do use successfully and productively." If the output is recognizable as AI, I wouldn't call it "successful" or "productive"...
    – Cerbrus
    Commented Jul 3 at 7:41
  • "it is not feasible for curators to tell bad GenAI content from good content at scale" I agree that this style of comments obviates the problem of identifying GenAI, but new programmers in particular might not be able to identify the bad GenAI content.
    – Anerdw
    Commented Jul 3 at 12:30
  •
    @AndrewYim New programmers will generally not be able to identify bad content. You do not have to add "GenAI" to that statement. They won't be able to identify bad books, or blogs, or docs, or search results, or examples, or whatever, either. That there are downsides to general usage of GenAI does not affect whether the ban does or should apply, and it's not specific to GenAI either. Commented Jul 3 at 13:40
  •
    That’s fair. I like @Gimby’s answer; it strikes a good balance between “don’t let unconstructive comments fly” and “don’t act like GenAI is the world’s greatest villain.”
    – Anerdw
    Commented Jul 3 at 13:49
