# CompactionConfig

Configuration for context compaction.

## Quick Example
```python
from mamba_agents import CompactionConfig

config = CompactionConfig(
    strategy="hybrid",
    trigger_threshold_tokens=100000,
    target_tokens=80000,
    preserve_recent_turns=10,
    preserve_system_prompt=True,
)
```
## Configuration Options

| Option | Type | Default | Description |
|---|---|---|---|
| `strategy` | `str` | `"sliding_window"` | Compaction strategy to use |
| `trigger_threshold_tokens` | `int` | `100000` | Token count that triggers compaction |
| `target_tokens` | `int` | `80000` | Target token count after compaction |
| `preserve_recent_turns` | `int` | `10` | Number of recent turns to keep |
| `preserve_system_prompt` | `bool` | `True` | Always keep the system prompt |
| `summarization_model` | `str` | `"same"` | Model used for summaries |
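To make the relationship between `trigger_threshold_tokens` and `target_tokens` concrete, here is a minimal sketch of the trigger logic; the helper names (`should_compact`, `tokens_to_remove`) are illustrative only and not part of `mamba_agents`:

```python
# Hypothetical sketch: compaction fires above the trigger threshold,
# then aims to shrink the context down to the target size.
TRIGGER_THRESHOLD_TOKENS = 100_000
TARGET_TOKENS = 80_000

def should_compact(current_tokens: int) -> bool:
    # Compaction is triggered once the context grows past the threshold.
    return current_tokens > TRIGGER_THRESHOLD_TOKENS

def tokens_to_remove(current_tokens: int) -> int:
    # Once triggered, compaction trims the context back to target_tokens.
    if not should_compact(current_tokens):
        return 0
    return current_tokens - TARGET_TOKENS

print(tokens_to_remove(95_000))   # 0 (below trigger, no compaction)
print(tokens_to_remove(120_000))  # 40000 (trim back down to the target)
```

Keeping `target_tokens` well below the trigger threshold gives the context room to grow again before the next compaction cycle.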
## Available Strategies

- `sliding_window`: remove the oldest messages
- `summarize_older`: summarize older messages with an LLM
- `selective_pruning`: remove tool call/result pairs
- `importance_scoring`: score message importance with an LLM
- `hybrid`: combine the strategies above
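The default `sliding_window` strategy can be sketched as follows. This is an illustration of the idea only, assuming per-message token counts; the message shape and the `sliding_window_compact` helper are assumptions, not `mamba_agents` APIs:

```python
# Illustrative sliding-window compaction: drop the oldest messages until
# the estimated total fits target_tokens, never touching the system
# prompt or the most recent turns.
def sliding_window_compact(messages, target_tokens,
                           preserve_recent_turns=10,
                           preserve_system_prompt=True):
    def total(msgs):
        return sum(m["tokens"] for m in msgs)

    system = [m for m in messages
              if preserve_system_prompt and m["role"] == "system"]
    rest = [m for m in messages
            if not (preserve_system_prompt and m["role"] == "system")]
    protected = rest[-preserve_recent_turns:] if preserve_recent_turns else []
    droppable = rest[:len(rest) - len(protected)]

    # Remove the oldest droppable message first, one at a time.
    while droppable and total(system + droppable + protected) > target_tokens:
        droppable.pop(0)
    return system + droppable + protected

msgs = [{"role": "system", "tokens": 10}] + \
       [{"role": "user", "tokens": 100} for _ in range(10)]
out = sliding_window_compact(msgs, target_tokens=500, preserve_recent_turns=3)
print(sum(m["tokens"] for m in out))  # 410 (system + 1 older + 3 recent)
```

Note that if the protected messages alone exceed `target_tokens`, a pure sliding window cannot reach the target, which is one motivation for combining strategies via `hybrid`.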
## API Reference

### `CompactionConfig`

Bases: `BaseModel`

Configuration for context window compaction.
| Attribute | Type | Description |
|---|---|---|
| `strategy` | `str` | Compaction strategy to use. |
| `trigger_threshold_tokens` | `int` | Token count that triggers compaction. |
| `target_tokens` | `int` | Target token count after compaction. |
| `preserve_recent_turns` | `int` | Number of recent turns to always preserve. |
| `preserve_system_prompt` | `bool` | Always preserve the system prompt. |
| `summarization_model` | `str` | Model to use for summarization (or `"same"`). |