Good sensemaking processes iterate. We develop initial theories, note some alternative ones. We then take those theories that we’ve seen and stack up the evidence for one against the other (or others). Even while doing that we keep an eye out for other possible explanations to test. When new explanations stop appearing and we feel that the evidence pattern increasingly favors one idea over another, we call it a day.
LLMs are no different. What is deemed a “wrong” response is often merely a first pass at describing the beliefs out there. And the solution is the same: iterate the process.
…
What I’ve found specifically is that pushing it to do a second pass without putting a thumb on the scale almost always leads to a better result. To do this, I use what I call “sorting statements” that try to do a variety of things…
Mike Caulfield is someone who cares about the veracity of information. The entire post is fascinating and has cast LLM search results in a new light for me.
I now have a Raycast Snippet, “aiprompt;”, which expands to this:
What is the evidence for and against the claim/guidance just stated?
Already I’m seeing much better results.
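If you want the same second pass outside of a snippet expander, the pattern is easy to script. Here’s a minimal sketch assuming the OpenAI Python SDK; the model name and the two_pass helper are my own placeholders, and any chat-style LLM client would work the same way:

```python
# A minimal sketch of the second-pass pattern, assuming the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FOLLOW_UP = "What is the evidence for and against the claim/guidance just stated?"

def two_pass(question: str, model: str = "gpt-4o") -> str:
    """Ask a question, then push the model to a second, evidence-sorting pass."""
    messages = [{"role": "user", "content": question}]

    # First pass: the model's initial theory of the answer.
    first = client.chat.completions.create(model=model, messages=messages)
    messages.append({"role": "assistant", "content": first.choices[0].message.content})

    # Second pass: the sorting statement, without putting a thumb on the scale.
    messages.append({"role": "user", "content": FOLLOW_UP})
    second = client.chat.completions.create(model=model, messages=messages)
    return second.choices[0].message.content

if __name__ == "__main__":
    print(two_pass("Is it dangerous to wake a sleepwalker?"))
```

The plumbing isn’t the point; what matters is that the follow-up question goes in as its own turn, after the model has already committed to a first answer.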