Discussion about this post

Jake Deatherage:

Excellent and insightful read, as always. Curious, to your mind: to what extent can we think of an AI's default system prompt/system instructions as an analogue to this basal "steering system" in biological brains? It seems we already conceive of LLMs this way, with the model's weights being the thought forms of the cortex (generalizable, emergent, acquired through extensive RL) and the default instructions being what the model is "born" with. I suppose the analogy breaks down when you consider that system prompts specify a lot of higher-level concepts like model identity or tone.
