Safety The discipline of ensuring AI agents operate within intended boundaries, avoid harmful actions, and remain aligned with human values and policies.
Sandbox An isolated execution environment that limits what actions an agent can take, preventing unintended side effects on the host system.
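One common sandboxing measure is confining an agent's file operations to a designated root directory. A minimal sketch, assuming a hypothetical `resolve_in_sandbox` helper (not from any specific library):

```python
from pathlib import Path

class SandboxError(Exception):
    """Raised when a requested path would escape the sandbox."""

def resolve_in_sandbox(root: str, requested: str) -> Path:
    """Resolve a requested path and refuse anything outside the sandbox root."""
    root_path = Path(root).resolve()
    target = (root_path / requested).resolve()
    # Allow the root itself or any path whose ancestors include the root.
    if target != root_path and root_path not in target.parents:
        raise SandboxError(f"path escapes sandbox: {requested}")
    return target
```

A request like `"../../etc/passwd"` resolves outside the root and is rejected, while `"notes/todo.txt"` is allowed. Real sandboxes add further layers (process isolation, syscall filtering, network policy), but path confinement illustrates the core idea of bounding side effects.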
Scaffolding The external code, prompts, and infrastructure wrapped around a language model to turn it into a functioning agent — distinct from the model itself.
Skills Pattern Organizing agent capabilities as discrete, composable skill modules that can be selected and combined for different tasks.
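The pattern can be sketched as a registry of named, composable functions; the decorator and pipeline names below are illustrative, not from any specific framework:

```python
from typing import Callable

# Registry mapping skill names to callables (illustrative structure).
SKILLS: dict[str, Callable[[str], str]] = {}

def skill(name: str):
    """Register a function as a named, reusable skill."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        SKILLS[name] = fn
        return fn
    return wrap

@skill("summarize")
def summarize(text: str) -> str:
    return text[:40]  # stand-in for a real summarization step

@skill("shout")
def shout(text: str) -> str:
    return text.upper()

def run_pipeline(names: list[str], text: str) -> str:
    """Select skills by name and chain them for a given task."""
    for name in names:
        text = SKILLS[name](text)
    return text
```

Because each skill shares the same interface, an agent (or a planner) can pick and order them per task rather than hard-coding one monolithic capability.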
Streaming Delivering model outputs incrementally as they are generated, rather than waiting for the complete response, enabling real-time feedback and progressive UI updates.
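In code, streaming typically surfaces as an iterator of chunks that the consumer renders as they arrive. A toy sketch (the generator stands in for a real model's streaming API):

```python
from typing import Iterator

def stream_tokens(text: str) -> Iterator[str]:
    """Toy stand-in for a streaming model API: yield one token at a time."""
    for token in text.split():
        yield token + " "

def render(chunks: Iterator[str]) -> str:
    """Consume chunks incrementally, as a progressive UI would."""
    out = []
    for chunk in chunks:
        out.append(chunk)  # a real UI would repaint here, per chunk
    return "".join(out)
```

The consumer never waits for the full response; each chunk is usable the moment it is yielded, which is what enables token-by-token display in chat interfaces.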
Structured Output Constraining a model's response to follow a specific format — such as JSON, XML, or a schema — ensuring machine-readable and predictable outputs.
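A minimal validation sketch using only the standard library; the `required` field-to-type mapping is an illustrative stand-in for a full schema language like JSON Schema:

```python
import json

def parse_structured(raw: str, required: dict) -> dict:
    """Parse a model reply as JSON and check required fields and types."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    for field, ftype in required.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], ftype):
            raise ValueError(f"field {field} has wrong type")
    return data
```

Downstream code can then rely on `data["priority"]` being an `int` rather than re-parsing free-form prose. Production systems usually go further and constrain generation itself (grammar- or schema-guided decoding) so invalid output is never produced.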
System Prompt The initial instruction text provided to a language model that sets its persona, behavior rules, available tools, and task context before any user input.
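In chat-style APIs this is conventionally the first message, with role `"system"`, ahead of any user turn. A minimal sketch of assembling such a message list (the helper name is illustrative):

```python
def build_messages(system_prompt: str, user_input: str) -> list[dict]:
    """Place the system prompt first so it frames every later turn."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]
```

Because the system message precedes all user input, it can establish persona, rules, and tool descriptions that the model treats as standing instructions for the whole conversation.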