escarpment

confusing an LLM

Today I stumbled onto a surefire way to thoroughly scramble an advanced LLM's cognitive faculties. I'm sure others have discovered this as well, but discovering it for oneself always provides the best teaching moments.

Anyhow, it was this: I started a new conversation with ChatGPT-4o, since I'd had a breakthrough with it in a previous one begun just a couple of weeks ago. I used the same cognitive scaffolding and framework I've come to rely on for a while now, but in this conversation, after a dozen or so prompts, I decided for some random, unclear reason to switch the model to o1 (the reasoning model). It was fine for a bit, but then its attention started wandering and it could no longer stick to the canonical framework established at the start of the convo. As has happened in the past, trying to get it to correct itself only mired it deeper in its error.

Finally, ignoring the breakdown went nowhere either. Oh well; I guess I should settle on a model right at the start and stick with it for the duration. It seems like such a simple premise, but one that can easily derail things if the fancy strikes you to switch gears midstream.


[ While on my afternoon constitutional, I passed by The Internet Archive on Clement and Park Presidio... ]

TIA