Blog Short Summary (from the article by Phil Schmid):
We’ve been experimenting with context in prompt design for a while. In his latest blog post, Phil Schmid takes it further, arguing that context engineering is becoming as important as model architecture or fine-tuning for getting quality output from LLMs.
- Context is the new prompt engineering – Instead of tweaking prompts endlessly, context engineering builds structured, reusable inputs that improve consistency and performance.
- Your context is a system, not a sentence – Schmid outlines a layered system: system messages, user instructions, scratchpad memory, external data, and past interactions all become components of a dynamic input stack.
- It’s composable and programmable – By treating context as code, engineers can optimise, modularise, and experiment faster, akin to software development for AI input.
- Tools are emerging fast – Libraries like LangChain and vLLM are enabling more powerful orchestration, memory, and context stacking.
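To make the "context as a system" idea concrete, here is a minimal sketch of a layered context stack in Python. All names (`ContextStack`, `build`, the field names) are hypothetical, not from Schmid's post or any library: the point is simply that each layer is a separate, swappable component that gets assembled into a final message list.

```python
from dataclasses import dataclass, field

@dataclass
class ContextStack:
    """Hypothetical sketch: context treated as composable layers, not one prompt string."""
    system: str = ""                                   # base system message
    instructions: str = ""                             # user/task instructions
    scratchpad: list[str] = field(default_factory=list)    # working memory notes
    external_data: list[str] = field(default_factory=list) # retrieved documents, facts
    history: list[dict] = field(default_factory=list)      # past interactions

    def build(self) -> list[dict]:
        """Flatten the layers into a chat-style message list."""
        parts = [self.system, self.instructions]
        if self.external_data:
            parts.append("Reference data:\n" + "\n".join(self.external_data))
        if self.scratchpad:
            parts.append("Working notes:\n" + "\n".join(self.scratchpad))
        messages = [{"role": "system",
                     "content": "\n\n".join(p for p in parts if p)}]
        messages.extend(self.history)  # past interactions appended last
        return messages

# Usage: swap or tune any single layer without rewriting the whole prompt.
stack = ContextStack(
    system="You are a support assistant.",
    instructions="Answer in two sentences or fewer.",
    external_data=["Refund window: 30 days."],
    history=[{"role": "user", "content": "Can I return my order?"}],
)
messages = stack.build()
```

Because each layer is just data, you can version it, A/B test it, or modularise it per team, which is the "software development for AI input" analogy in practice.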
If you want better outputs from LLMs, stop thinking like a prompt engineer.
Start thinking like a context architect.
What system of inputs would best support your team?