The role of AI-generated posts in TidBITS Talk

I’m sure that you meant well, but this is just an AI slop paste bomb, in my opinion. :roll_eyes:

8 Likes

I shared my attempt to gain insight into what is happening at Apple. Do you disagree with my sharing it, or do you disagree with the content itself? For me, it made things clearer.

It’s an explanation, but not based on expert opinion or insider knowledge or anything else. It’s no more valuable than finding an analysis by a random stranger with no credentials.

You might agree with it, but there’s no way to judge if it is in any way accurate.

2 Likes

While I like to play around with generative AIs, I haven’t experimented much with ChatGPT’s products. So based on the excerpt above in the other thread, I’m kind of shocked at the tone ChatGPT uses in its responses. I recently read a Bloomberg feature about generative AI users who have lost touch with reality in ways typical of cult followers and the deeply religious. Now I see how this can happen.


Here’s the Bloomberg story (it does require a free registration to read):
https://www.bloomberg.com/features/2025-openai-chatgpt-chatbot-delusions/

This is a short interview with the journalist who wrote the article (free to view on YouTube):

1 Like

I’ve also noticed that these chatbots seem “eager to please”. The original question had a lot of charged and biased language. I suspect the response keyed off of that.

I suspect you’d get a very different answer if you asked the same question with the opposite bias (e.g., someone who absolutely loves the new icons).

3 Likes

It would have been more valuable had it been your insight, rather than that of ChatGPT.

6 Likes

A few thoughts about @Planetary_Paul’s post, which I’ve left in the previous topic:

  • It’s too long. Almost no one would have written anywhere near that much, and it’s way out of proportion for a discussion forum. I’m going to edit it to use Discourse’s “Hide Details” feature (in the composer’s + menu) to hide it so that topic can continue unimpeded.
  • I have no problem with using AI to get help expressing your opinions or insight, but there is no value in sharing random chatbot responses here. I say “random” because there’s no way of knowing what personalization or memory informed it, and although we did see Paul’s prompt in this case, when the exact prompt isn’t shared, there’s even less sense of what might have led to the response.
  • Everyone must take responsibility for what they post. There’s a chain of responsibility here too—if I post a link to something @mjtsai writes, I have to take responsibility for having done so, but he also implicitly takes responsibility for what he wrote. Even if someone takes responsibility for posting “ChatGPT says…” here, there’s no one to take responsibility for what ChatGPT wrote.

More thoughts will probably occur to me as this discussion continues, but for the moment, let’s avoid copy-paste posts from chatbots.

17 Likes

Thanks Adam, duly noted.

1 Like

sigh. Three posts today with AI-generated responses. I don’t approve.

Use your favorite chatbot for hints and tips if you like, but just dumping the response here is lazy and may be flat-out wrong.

Before sharing anything like that, do at least a bit of homework to verify the response. Either try it yourself or look up trusted sources (e.g., Apple documentation) to corroborate it. (And I’d argue that once you’ve done that, take a few more minutes to write your own text and share it as your own research - forget the chatbot altogether.)

It’s effectively the same thing school teachers used to say (at least in the ’70s and ’80s) with respect to using encyclopedias for research reports: use them to get hints and tips if you must, but then do your own research and present the result as your own work. And don’t plagiarize your sources.

9 Likes

I agree with @Shamino. Even though sharing AI results is well intentioned, it doesn’t add value unless it comes with some editorial remarks, like “I have verified that the advice in this quote is an exact solution to the question under discussion.” Even then, a simple link to the answer or editing down to the essentials would be better than sharing a lengthy AI quote.

For some reason, I am reminded of the old “Let Me Google That For You” website.

4 Likes

My current view on using generative AI on message boards is the same as my view on using news stories, videos, Wikipedia articles, and other information sources: if a poster adds something—such as their opinion, personal experience, informed commentary, or fact checking—I don’t have any objections. But I don’t regard content that is simply cut-and-pasted into a post as being too different from what content farms do.

I’d say the broader issue here is balancing people’s desire to help with keeping TBT interesting and insightful.

3 Likes

I think this is important. Chatbot responses can be incredibly helpful (just like human-generated posts on the Internet), but they can also be completely wrong (just like human-generated posts on the Internet).

It gets back to what I was saying about taking responsibility for what you post. There is a significant difference between posting a chatbot response and Let Me Google That For You. Even though Google results are often somewhat personalized, the variability between uses will be relatively small, whereas the variability between chatbot prompts could be very large.

So yes, I don’t want to make a blanket ban on posting chatbot responses, but if you’re going to do that, you should have personal knowledge that the information you’re sharing is correct, at least in your situation and to the best of your knowledge.

7 Likes

Adam, I think allowing AI responses in any form is a cancer that will undermine this fine board. I come here for the smart, thoughtful, and connected people. If one of them uses an AI to inform their human-written opinion, great, but a paste bomb from a chatbot provides no personal touch or credibility, and that is the opposite of what you’ve fostered.

5 Likes

I take your point, but how does Adam police such a ban? There’s no foolproof way of telling and I’d rather not have accusations flying around the board. That would definitely hurt the community spirit.

3 Likes

One could consider @ace a benevolent dictator… he would have the final say about offending AI-generated content if he decided action was necessary. (Much like what Linus does as the ultimate arbiter of what goes into the Linux kernel.)

I have seen the “slop” that happens on other forums I participate in (and several where I moderate) where people have posted unattributed AI-generated content. It’s obvious. A cursory examination shows “advice” that’s either irrelevant to the topic or downright inaccurate. Perhaps it’s done to gain participation “points”?

Thankfully, this forum has enough experienced participants who know what the “right thing to do” is. That’s what makes participation here enjoyable.

2 Likes

The fuzziness of it all is why I don’t want to establish a blanket ban. Even with pasting in text, there’s a difference between a “paste bomb” (which isn’t a term I’ve heard much before, but describes the result perfectly) and pasting a short set of instructions or the like instead of retyping them. Plus, I think the people who are doing it are doing so with positive intent—they really want to help—so I don’t want to come down too hard on anyone.

So yes, I’ll be keeping an eye out and nudging people who cross whatever line I see.

13 Likes

Another thing to consider is that TidBITS Talk seems to be an increasingly popular source for AI-generated responses. Allowing such responses here would accelerate the “snake eating its tail” cycle that I think will be a serious issue for AI in the future, perhaps even its downfall. I haven’t thought about this deeply, so maybe there are good counter-arguments, but as AI-generated content becomes more and more of the input on which AI-generated content is based, it may be headed for something akin to the heat death of the universe.

2 Likes

I put @Shamino’s suspicion to the test and took one for the TidBITS community. Posing a prompt crafted as the opposite of what the OP queried got me exactly the response I “wanted.”

I have noticed how clean and crisp Apple icons look on my various devices, especially in the current xOS26 software edition. They seem uniform and easy to read, even if they are occasionally a bit abstract. It is clear that there is a group at Apple determined to pull design aesthetics out of the skeuomorphic swamp that was advocated by the late Steve Jobs. What do you see as the next steps in this design evolution?

I won’t post the ChatGPT response, of course. Suffice to say that it was just as clearly outlined and enthusiastic about the future of Apple icon design (which is “post-post-skeuomorphic” in the chatbot’s view) as the original chat response was scornful and even scathing.

I think the OP meant well, but given the current state of AI chatbots, I’d steer clear of asking for their opinions. They are good at summarizing a document, with checks, and they are fantastic at telling you what you already believe.

(EDIT: I followed up by telling ChatGPT I disagreed with its analysis and found the abstract icons led to many mistakes on my part. It completely changed course to conform to my position.)

9 Likes

It’s a known phenomenon called “model collapse.” And there is already so, so much AI slop on the Internet (which is being fed back into AI models) that the minuscule amount here won’t make a difference.

3 Likes

And I’ll note once again that if you have any inkling that the answer is biased one way or another, follow up with “Confirm with a search” to get it to look for actual Web pages.

3 Likes