It is scientifically and evidentially wrong to discuss the evidence from chemical testing in isolation, ignoring or cherry-picking the other evidence.
The poster above is off-topic as always, but he needs to be called out: LLMs do not "learn" in any meaningful way. Rather, they are fed and trained on carefully curated datasets with strict guardrails, and they generate output using predictive text models built on patterns that emerge from that data.
We just had a six-month masterclass demonstration of these exact failings from a different poster, right here on this forum. To call this "learning" in any meaningful way is foolish, if not downright dishonest.
AI understands that the so-called revisionist argument, that there is not enough evidence of wood being taken to and stored at the AR camps and used on the pyres for them to have worked and been capable of cremating hundreds of thousands of corpses, is not evidence that there were no such mass pyres.
A: No, large language models (LLMs) do not understand their outputs in the way humans do; they generate responses based on statistical patterns and probabilities without true comprehension. Their "knowledge" is derived from the data they were trained on, not from an understanding of the content.
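To make "statistical patterns and probabilities" concrete, here is a deliberately tiny sketch of predictive text generation, not how any production LLM is actually implemented, just a toy bigram model with a made-up corpus. It "learns" nothing but co-occurrence counts, then produces text by sampling whichever word most often followed the current one; there is no comprehension anywhere in it.

```python
# Toy sketch of "predictive text": a bigram model that only counts which word
# follows which in a tiny made-up corpus, then generates output by sampling
# the statistically likely next word. Illustration only, not a real LLM.
import random
from collections import defaultdict

corpus = "the model predicts the next word from the previous word".split()

# "Training": record how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length=6):
    """Generate text by repeatedly sampling the next word in proportion to
    how often it followed the current word in the training corpus."""
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        words, freqs = zip(*followers.items())
        word = random.choices(words, weights=freqs)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the next word from the previous word"
```

The output can look fluent, but it is produced purely by replaying frequencies from the training data, which is the point being made about LLMs above, only at vastly larger scale.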
If AI cannot work out how the pyres worked, based on the limited information available about wood requirements, that is not evidence that there were no pyres.