
Re: Suggestion: AI posts should be quarantined from the rest of the forum

Posted: Sat Nov 01, 2025 3:31 pm
by pilgrimofdark
These are things I've run into over the past few months of trying out a few AI chatbots.
  • Ask for a list of books/journal articles on a topic. One-third of its recommendations are hallucinated.
  • Ask for the date a photo was first published. It gives me a date 12 years later than the one I find with a little more research on my own.
  • Ask it for links to a resource. All 5 links it gives me return 404 errors, pointing to pages the Wayback Machine has never archived.
It hallucinates citations. It hallucinates quotes. It paraphrases works as saying the exact opposite of what they actually say. It hallucinates links.

In terms of historical research, I've decided that AI is basically an industrial document-forging tool, though there's no malicious intent behind it.

The quote in this article regarding AI usage seems appropriate:
It commonly asks me questions, adopts my own wording, and gives it back to me. This makes it seem more agreeable and complimentary. It’s excellent for augmented intelligence. As it adapts to your patterns, it is more able to anticipate your needs. But it makes NPCs feel smart. Not because they are. Because it’s a mirror on every level.
Because of this mirroring effect, AI is a machine for confirmation bias, and it "learns" how to confirm your biases with more and more fakery.

If someone posts AI output without sharing the input and the full series of prompts, we're probably just reading the most bias-confirming output. Then anyone motivated has to go check all the quotes and citations for forgery.

AI can be a useful tool, but arguing second-hand with a random output is a waste of time.

Re: Suggestion: AI posts should be quarantined from the rest of the forum

Posted: Sat Nov 01, 2025 6:08 pm
by HansHill
My layman's understanding of how an LLM works is that it generates text by mathematically "predicting" what its next word should be, based on its training algorithm and the data it was allowed to analyze.

This is why they are called Large Language Models and not, for example, Large Intelligence Models. In a language like English, where sentence structures can be readily modeled, it rarely gets grammar or punctuation wrong, whereas it will absolutely get dates, figures, arguments, or details wrong. It is not reasoning in any meaningful way.
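
To make that concrete, here is a toy sketch (my own illustration in Python, nothing like the scale or architecture of a real chatbot) of next-word prediction at its crudest: count which word follows which in some training text, then generate by sampling a likely continuation.

import random
from collections import defaultdict, Counter

# Toy "language model": count which word follows which in a tiny corpus,
# then generate text by repeatedly sampling a likely next word.
# Real LLMs use neural networks trained on billions of documents, but the
# core move -- predict the next token from statistical patterns -- is the same.

corpus = (
    "the photo was first published in 1912 . "
    "the photo was first published in 1924 . "
    "the book was first published in 1912 ."
).split()

# Table mapping each word to a count of the words that have followed it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Sample the next word, weighted by how often it followed before.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Prints e.g. "the photo was first published in 1924 ." -- grammatical and
# confidently stated, but the model has no notion of which year is true,
# only which continuation was frequent in its training text.

The output is fluent because word order is easy to model, but whether 1912 or 1924 comes out depends on frequency, not truth, which is exactly the wrong-date problem described earlier in the thread.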

There is also a non-technical aspect to this: the business aspect. We know that for commercial, economic, political or societal reasons, an LLM can be manipulated or throttled at a whim. Relatively recently, ChatGPT was upgraded to a newer model that had its "yas queen, you go girl" mannerisms throttled, and Reddit went into a tailspin because certain people were using ChatGPT as a friend simulator.