- Ask it for a list of books and journal articles on a topic. One-third of its recommendations are hallucinated.
- Ask it for the date a photo was first published. It gives me a date 12 years later than the one I find on my own with a little more research.
- Ask it for links to a resource. All five links it gives me return 404 errors, for pages the Wayback Machine has never archived (a quick triage script is sketched below).
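That last kind of check can at least be automated. Here's a minimal sketch, assuming Python with the `requests` package and the Wayback Machine's public availability endpoint; the URL in the example is just a placeholder:

```python
# Rough triage for AI-suggested links: does the page respond at all,
# and has the Wayback Machine ever archived it?
# Requires: pip install requests
import requests

def check_link(url: str) -> None:
    # Is the live page reachable?
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None

    # Has the Wayback Machine ever archived it? (availability API)
    wb = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url},
        timeout=10,
    ).json()
    snapshot = wb.get("archived_snapshots", {}).get("closest")

    print(f"{url}")
    print(f"  live status: {status}")
    print(f"  wayback: {snapshot['url'] if snapshot else 'never archived'}")

if __name__ == "__main__":
    # Placeholder URL; substitute the links the AI actually gave you.
    for url in ["https://example.com/some-ai-suggested-page"]:
        check_link(url)
```

A dead link that Wayback has never seen is a strong hint the page never existed in the first place.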
For historical research, I've concluded that AI is basically an industrial-scale document-forging tool, though one without malicious intent.
The quote in this article regarding AI usage seems appropriate:
Because of this mirroring effect, AI is a machine for confirmation bias, and it "learns" how to confirm your biases with more and more fakery. It commonly asks me questions, adopts my own wording, and gives it back to me. This makes it seem more agreeable and complementary. It’s excellent for augmented intelligence. As it adapts to your patterns, it is more able to anticipate your needs. But it makes NPCs feel smart. Not because they are. Because it’s a mirror on every level.
If someone posts AI output without sharing the input and the full series of prompts, we're probably just reading the most bias-confirming output. Then anyone motivated has to go check all the quotes and citations for forgery.
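A first pass over the citations can also be scripted. A rough sketch, assuming Python with `requests` and Crossref's public works API; a no-match result flags a likely hallucination, while a match still needs checking by hand (the sample citation string is a placeholder):

```python
# Screen an AI-provided citation against Crossref's bibliographic search.
# A miss is a red flag; a hit only means something with a similar
# description exists, not that the AI quoted it accurately.
# Requires: pip install requests
import requests

def screen_citation(citation: str) -> None:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 3},
        timeout=15,
    )
    items = resp.json()["message"]["items"]
    if not items:
        print(f"NO MATCH (possible hallucination): {citation}")
        return
    for item in items:
        titles = item.get("title") or ["<untitled>"]
        print(f"candidate: {titles[0]} (DOI: {item.get('DOI')})")

if __name__ == "__main__":
    # Placeholder citation; substitute the references the AI produced.
    screen_citation("Smith, Example Study of Archival Photographs, 2003")
```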
AI can be a useful tool, but arguing second-hand with a random output is a waste of time.