
Messages

Message querying, filtering, analytics, and export for conversation histories.

Quick Example

from mamba_agents import Agent

agent = Agent("gpt-4o")
agent.run_sync("Hello!")
agent.run_sync("What tools do you have?")

# Access the query interface
query = agent.messages

# Filter messages
user_msgs = query.filter(role="user")
tool_msgs = query.filter(tool_name="read_file")

# Get analytics
stats = query.stats()
print(f"Total: {stats.total_messages} messages, {stats.total_tokens} tokens")

# View timeline
for turn in query.timeline():
    print(f"Turn {turn.index}: {turn.user_content}")

# Export
json_str = query.export(format="json")

Classes

Class Description
MessageQuery Stateless query interface for filtering and analyzing messages
MessageStats Token and message count statistics
ToolCallInfo Summary of a tool's usage across a conversation
Turn A logical conversation turn grouping related messages

Imports

from mamba_agents import MessageQuery, MessageStats, ToolCallInfo, Turn
from mamba_agents.agent.messages import MessageQuery, MessageStats, ToolCallInfo, Turn

API Reference

MessageQuery

MessageQuery

MessageQuery(
    messages: list[dict[str, Any]],
    token_counter: TokenCounter | None = None,
)

Stateless query interface for filtering and slicing message histories.

MessageQuery operates on a provided list of message dicts (OpenAI-compatible format) without copying or caching between calls. All filter methods return list[dict[str, Any]].

PARAMETER DESCRIPTION
messages

List of message dicts to query.

TYPE: list[dict[str, Any]]

token_counter

Optional TokenCounter instance for token-aware analytics (used by analytics methods in later phases).

TYPE: TokenCounter | None DEFAULT: None

Example::

    query = MessageQuery(messages)
    tool_msgs = query.filter(role="tool")
    recent = query.last(n=5)

Source code in src/mamba_agents/agent/messages.py
def __init__(
    self,
    messages: list[dict[str, Any]],
    token_counter: TokenCounter | None = None,
) -> None:
    self._messages = messages
    self._token_counter = token_counter

filter

filter(
    *,
    role: str | None = None,
    tool_name: str | None = None,
    content: str | None = None,
    regex: bool = False,
) -> list[dict[str, Any]]

Filter messages by role, tool name, and/or content.

Multiple keyword arguments combine with AND logic. Calling with no arguments returns all messages.

PARAMETER DESCRIPTION
role

Filter by message role (user, assistant, tool, system).

TYPE: str | None DEFAULT: None

tool_name

Filter for messages related to a specific tool. Checks tool_calls[].function.name on assistant messages and the name field on tool result messages.

TYPE: str | None DEFAULT: None

content

Search message content. Case-insensitive plain text match by default; interpreted as a regex pattern when regex is True.

TYPE: str | None DEFAULT: None

regex

When True, treat content as a regular expression pattern.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
list[dict[str, Any]]

List of matching message dicts. Empty list if no matches.

RAISES DESCRIPTION
re.error

If regex is True and content is not a valid regex.

Source code in src/mamba_agents/agent/messages.py
def filter(
    self,
    *,
    role: str | None = None,
    tool_name: str | None = None,
    content: str | None = None,
    regex: bool = False,
) -> list[dict[str, Any]]:
    """Filter messages by role, tool name, and/or content.

    Multiple keyword arguments combine with AND logic. Calling with
    no arguments returns all messages.

    Args:
        role: Filter by message role (user, assistant, tool, system).
        tool_name: Filter for messages related to a specific tool. Checks
            ``tool_calls[].function.name`` on assistant messages **and**
            the ``name`` field on tool result messages.
        content: Search message content. Case-insensitive plain text match
            by default; interpreted as a regex pattern when *regex* is True.
        regex: When True, treat *content* as a regular expression pattern.

    Returns:
        List of matching message dicts. Empty list if no matches.

    Raises:
        re.error: If *regex* is True and *content* is not a valid regex.
    """
    results = list(self._messages)

    if role is not None:
        results = [msg for msg in results if msg.get("role") == role]

    if tool_name is not None:
        results = [msg for msg in results if self._matches_tool_name(msg, tool_name)]

    if content is not None:
        if regex:
            try:
                pattern = re.compile(content)
            except re.error as exc:
                raise re.error(
                    f"Invalid regex pattern: {content!r} - {exc.msg}",
                    pattern=content,
                ) from exc
            results = [
                msg
                for msg in results
                if "content" in msg
                and msg["content"] is not None
                and pattern.search(msg["content"])
            ]
        else:
            lower_content = content.lower()
            results = [
                msg
                for msg in results
                if "content" in msg
                and msg["content"] is not None
                and lower_content in msg["content"].lower()
            ]

    return results

slice

slice(
    start: int = 0, end: int | None = None
) -> list[dict[str, Any]]

Return messages at indices start through end-1.

Uses standard Python slice semantics so out-of-range indices are handled gracefully.

PARAMETER DESCRIPTION
start

Start index (inclusive). Defaults to 0.

TYPE: int DEFAULT: 0

end

End index (exclusive). Defaults to None (end of list).

TYPE: int | None DEFAULT: None

RETURNS DESCRIPTION
list[dict[str, Any]]

Sliced list of message dicts.

Source code in src/mamba_agents/agent/messages.py
def slice(self, start: int = 0, end: int | None = None) -> list[dict[str, Any]]:
    """Return messages at indices *start* through *end*-1.

    Uses standard Python slice semantics so out-of-range indices
    are handled gracefully.

    Args:
        start: Start index (inclusive). Defaults to 0.
        end: End index (exclusive). Defaults to None (end of list).

    Returns:
        Sliced list of message dicts.
    """
    return self._messages[start:end]
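Because slice delegates to a plain Python slice, out-of-range indices clamp instead of raising; a quick standalone illustration:

```python
messages = [{"role": "user"}, {"role": "assistant"}, {"role": "user"}]

# Standard slice semantics: an end past the list clamps to the list
# length, and a start past the end yields [] rather than an IndexError.
print(messages[1:100])   # clamps to the last two messages
print(messages[10:20])   # entirely out of range: []
print(messages[0:None])  # end=None means "through the end of the list"
```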

first

first(n: int = 1) -> list[dict[str, Any]]

Return the first n messages.

PARAMETER DESCRIPTION
n

Number of messages to return. Defaults to 1.

TYPE: int DEFAULT: 1

RETURNS DESCRIPTION
list[dict[str, Any]]

List of the first n message dicts (or all if fewer exist).

Source code in src/mamba_agents/agent/messages.py
def first(self, n: int = 1) -> list[dict[str, Any]]:
    """Return the first *n* messages.

    Args:
        n: Number of messages to return. Defaults to 1.

    Returns:
        List of the first *n* message dicts (or all if fewer exist).
    """
    return self._messages[:n]

last

last(n: int = 1) -> list[dict[str, Any]]

Return the last n messages.

PARAMETER DESCRIPTION
n

Number of messages to return. Defaults to 1.

TYPE: int DEFAULT: 1

RETURNS DESCRIPTION
list[dict[str, Any]]

List of the last n message dicts (or all if fewer exist).

Source code in src/mamba_agents/agent/messages.py
def last(self, n: int = 1) -> list[dict[str, Any]]:
    """Return the last *n* messages.

    Args:
        n: Number of messages to return. Defaults to 1.

    Returns:
        List of the last *n* message dicts (or all if fewer exist).
    """
    if n <= 0:
        return []
    return self._messages[-n:]

all

all() -> list[dict[str, Any]]

Return all messages.

Equivalent to get_messages() on the Agent.

RETURNS DESCRIPTION
list[dict[str, Any]]

Complete list of message dicts.

Source code in src/mamba_agents/agent/messages.py
def all(self) -> list[dict[str, Any]]:
    """Return all messages.

    Equivalent to ``get_messages()`` on the Agent.

    Returns:
        Complete list of message dicts.
    """
    return list(self._messages)

stats

stats() -> MessageStats

Compute token and message count statistics.

Counts messages by role and computes token totals using the Agent's configured TokenCounter. Token counts are computed on demand and cached within this single call to avoid redundant computation. When no TokenCounter is available, all token fields default to zero.

RETURNS DESCRIPTION
MessageStats

A MessageStats instance with counts and token statistics.

Source code in src/mamba_agents/agent/messages.py
def stats(self) -> MessageStats:
    """Compute token and message count statistics.

    Counts messages by role and computes token totals using the
    Agent's configured ``TokenCounter``. Token counts are computed
    on demand and cached within this single call to avoid redundant
    computation. When no ``TokenCounter`` is available, all token
    fields default to zero.

    Returns:
        A ``MessageStats`` instance with counts and token statistics.
    """
    if not self._messages:
        return MessageStats()

    # Count messages by role.
    messages_by_role: dict[str, int] = {}
    for msg in self._messages:
        role = msg.get("role", "unknown")
        messages_by_role[role] = messages_by_role.get(role, 0) + 1

    # Compute token counts, caching per-message values within this call.
    tokens_by_role: dict[str, int] = {}
    total_tokens = 0

    if self._token_counter is not None:
        for msg in self._messages:
            role = msg.get("role", "unknown")
            try:
                count = self._token_counter.count_messages([msg])
            except Exception:
                logger.debug(
                    "TokenCounter error for message role=%s, defaulting to 0",
                    role,
                )
                count = 0
            tokens_by_role[role] = tokens_by_role.get(role, 0) + count
            total_tokens += count

    return MessageStats(
        total_messages=len(self._messages),
        messages_by_role=messages_by_role,
        total_tokens=total_tokens,
        tokens_by_role=tokens_by_role,
    )
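The per-role aggregation can be reproduced on plain dicts. In this sketch a naive whitespace split stands in for the Agent's TokenCounter, so the token numbers are rough approximations; `compute_stats` is an illustrative name:

```python
from typing import Any

def compute_stats(messages: list[dict[str, Any]]) -> dict[str, Any]:
    """Count messages and (whitespace-approximated) tokens by role."""
    messages_by_role: dict[str, int] = {}
    tokens_by_role: dict[str, int] = {}
    total_tokens = 0
    for msg in messages:
        role = msg.get("role", "unknown")
        messages_by_role[role] = messages_by_role.get(role, 0) + 1
        # Stand-in tokenizer; the real class delegates to TokenCounter.
        count = len((msg.get("content") or "").split())
        tokens_by_role[role] = tokens_by_role.get(role, 0) + count
        total_tokens += count
    return {
        "total_messages": len(messages),
        "messages_by_role": messages_by_role,
        "total_tokens": total_tokens,
        "tokens_by_role": tokens_by_role,
    }

sample = [
    {"role": "user", "content": "Hello there"},
    {"role": "assistant", "content": "Hi"},
    {"role": "user", "content": "What tools do you have"},
]
stats = compute_stats(sample)
print(stats["messages_by_role"])  # {'user': 2, 'assistant': 1}
```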

tool_summary

tool_summary() -> list[ToolCallInfo]

Compute tool call analytics grouped by tool name.

Scans all messages for tool calls (from assistant messages with tool_calls arrays) and tool results (from tool role messages), groups them by tool name, and links calls to their results via tool_call_id.

RETURNS DESCRIPTION
list[ToolCallInfo]

A list of ToolCallInfo instances, one per unique tool name. Returns an empty list if no tool calls are found.

Source code in src/mamba_agents/agent/messages.py
def tool_summary(self) -> list[ToolCallInfo]:
    """Compute tool call analytics grouped by tool name.

    Scans all messages for tool calls (from assistant messages with
    ``tool_calls`` arrays) and tool results (from tool role messages),
    groups them by tool name, and links calls to their results via
    ``tool_call_id``.

    Returns:
        A list of ``ToolCallInfo`` instances, one per unique tool name.
        Returns an empty list if no tool calls are found.
    """
    if not self._messages:
        return []

    # Build a lookup of tool results by tool_call_id.
    result_by_call_id: dict[str, str] = {}
    matched_call_ids: set[str] = set()

    for msg in self._messages:
        if msg.get("role") == "tool":
            call_id = msg.get("tool_call_id")
            if call_id is not None:
                result_by_call_id[call_id] = msg.get("content", "")

    # Collect tool calls from assistant messages, grouped by tool name.
    # Preserves insertion order so output is deterministic.
    tools: dict[str, ToolCallInfo] = {}

    for msg in self._messages:
        if msg.get("role") != "assistant":
            continue

        raw_tool_calls = msg.get("tool_calls")
        if not isinstance(raw_tool_calls, list):
            continue

        for tc in raw_tool_calls:
            if not isinstance(tc, dict):
                continue

            func = tc.get("function")
            if not isinstance(func, dict):
                continue

            name = func.get("name")
            if not name:
                continue

            call_id = tc.get("id", "")

            # Parse arguments JSON; fall back to raw string on failure.
            raw_args = func.get("arguments", "")
            try:
                parsed_args = json.loads(raw_args) if raw_args else {}
            except (json.JSONDecodeError, TypeError):
                parsed_args = raw_args

            if name not in tools:
                tools[name] = ToolCallInfo(tool_name=name)

            info = tools[name]
            info.call_count += 1
            info.tool_call_ids.append(call_id)
            info.arguments.append(parsed_args if isinstance(parsed_args, dict) else {})

            # Link to the matching tool result if available.
            if call_id and call_id in result_by_call_id:
                info.results.append(result_by_call_id[call_id])
                matched_call_ids.add(call_id)

    # Handle orphaned tool results (results without matching calls).
    for msg in self._messages:
        if msg.get("role") != "tool":
            continue

        call_id = msg.get("tool_call_id", "")
        if call_id in matched_call_ids:
            continue

        name = msg.get("name", "unknown")
        if name not in tools:
            tools[name] = ToolCallInfo(tool_name=name)

        info = tools[name]
        info.call_count += 1
        info.tool_call_ids.append(call_id)
        info.results.append(msg.get("content", ""))

    return list(tools.values())
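The call-to-result linking works by first indexing tool messages by tool_call_id, then walking assistant tool_calls. A standalone sketch of that pairing on plain dicts (`pair_tool_calls` is an illustrative name):

```python
from typing import Any

def pair_tool_calls(messages: list[dict[str, Any]]) -> dict[str, list[tuple[str, str]]]:
    """Group (call_id, result) pairs by tool name, as tool_summary does."""
    # First pass: index tool results by their tool_call_id.
    result_by_call_id = {
        m["tool_call_id"]: m.get("content", "")
        for m in messages
        if m.get("role") == "tool" and m.get("tool_call_id")
    }
    # Second pass: walk assistant tool_calls and attach matching results.
    grouped: dict[str, list[tuple[str, str]]] = {}
    for m in messages:
        if m.get("role") != "assistant":
            continue
        for tc in m.get("tool_calls") or []:
            name = tc.get("function", {}).get("name")
            call_id = tc.get("id", "")
            if name:
                grouped.setdefault(name, []).append(
                    (call_id, result_by_call_id.get(call_id, ""))
                )
    return grouped

sample = [
    {"role": "assistant", "content": None,
     "tool_calls": [{"id": "c1", "function": {"name": "read_file", "arguments": "{}"}}]},
    {"role": "tool", "tool_call_id": "c1", "name": "read_file", "content": "file body"},
]
print(pair_tool_calls(sample))  # {'read_file': [('c1', 'file body')]}
```

A call whose id has no matching tool message simply gets an empty result string, mirroring how the real method tolerates unmatched calls.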

timeline

timeline() -> list[Turn]

Parse the message list into a structured conversation timeline.

Groups messages into logical turns. Each turn contains a user prompt, the assistant's response, and any tool call/result pairs that occurred during the exchange. System prompts at the start of the conversation are attached as context on the first turn rather than appearing as separate turns.

Turn grouping logic:

  1. Start a new turn on each user message.
  2. Associate the following assistant message with that turn.
  3. If the assistant message has tool_calls, group subsequent tool result messages into the turn's tool_interactions.
  4. If the next message after tool results is another assistant message, it is part of the same turn (tool loop continuation).
  5. Consecutive assistant messages without a preceding user message each get their own turn.
  6. System messages at the start attach to the first turn as context.
RETURNS DESCRIPTION
list[Turn]

A list of Turn objects in conversation order. Returns an empty list if there are no messages.

Source code in src/mamba_agents/agent/messages.py
def timeline(self) -> list[Turn]:
    """Parse the message list into a structured conversation timeline.

    Groups messages into logical turns. Each turn contains a user
    prompt, the assistant's response, and any tool call/result pairs
    that occurred during the exchange. System prompts at the start
    of the conversation are attached as context on the first turn
    rather than appearing as separate turns.

    **Turn grouping logic:**

    1. Start a new turn on each user message.
    2. Associate the following assistant message with that turn.
    3. If the assistant message has ``tool_calls``, group subsequent
       tool result messages into the turn's ``tool_interactions``.
    4. If the next message after tool results is another assistant
       message, it is part of the same turn (tool loop continuation).
    5. Consecutive assistant messages without a preceding user message
       each get their own turn.
    6. System messages at the start attach to the first turn as context.

    Returns:
        A list of ``Turn`` objects in conversation order. Returns an
        empty list if there are no messages.
    """
    if not self._messages:
        return []

    turns: list[Turn] = []
    current_turn: Turn | None = None
    system_context: str | None = None
    turn_index = 0
    # Tracks whether the current turn is in a tool loop (assistant
    # called tools and we expect either more tool results or a
    # follow-up assistant message that continues the same turn).
    in_tool_loop = False

    # Build a lookup of tool results by tool_call_id for pairing.
    result_by_call_id: dict[str, dict[str, Any]] = {}
    for msg in self._messages:
        if msg.get("role") == "tool":
            call_id = msg.get("tool_call_id")
            if call_id is not None:
                result_by_call_id[call_id] = msg

    i = 0
    while i < len(self._messages):
        msg = self._messages[i]
        role = msg.get("role", "")

        if role == "system":
            # Collect system context; attach to first turn later.
            content = msg.get("content", "")
            if system_context is None:
                system_context = content
            else:
                system_context += "\n" + (content or "")
            i += 1
            continue

        if role == "user":
            # Start a new turn.
            in_tool_loop = False
            current_turn = Turn(
                index=turn_index,
                user_content=msg.get("content"),
            )
            # Attach accumulated system context to the first turn.
            if system_context is not None and turn_index == 0:
                current_turn.system_context = system_context
                system_context = None
            turn_index += 1
            turns.append(current_turn)
            i += 1
            continue

        if role == "assistant":
            # Decide whether to continue the current turn or start a
            # new one. Continue if: (a) the turn has no assistant
            # content yet, or (b) we are in a tool loop (assistant
            # called tools, tool results came back, next assistant
            # continues the same exchange).
            needs_new_turn = current_turn is None or (
                current_turn.assistant_content is not None and not in_tool_loop
            )
            if needs_new_turn:
                current_turn = Turn(index=turn_index)
                # Attach system context if it hasn't been used yet.
                if system_context is not None and turn_index == 0:
                    current_turn.system_context = system_context
                    system_context = None
                turn_index += 1
                turns.append(current_turn)

            # Reset tool loop flag; it will be set again below if
            # this assistant message also has tool_calls.
            in_tool_loop = False

            content = msg.get("content")
            if content is not None:
                if current_turn.assistant_content is None:
                    current_turn.assistant_content = content
                else:
                    current_turn.assistant_content += "\n" + content

            # Process tool calls if present.
            raw_tool_calls = msg.get("tool_calls")
            has_tool_calls = isinstance(raw_tool_calls, list) and len(raw_tool_calls) > 0
            if has_tool_calls:
                for tc in raw_tool_calls:
                    if not isinstance(tc, dict):
                        continue
                    func = tc.get("function")
                    if not isinstance(func, dict):
                        continue
                    name = func.get("name", "unknown")
                    call_id = tc.get("id", "")
                    raw_args = func.get("arguments", "")
                    try:
                        parsed_args = json.loads(raw_args) if raw_args else {}
                    except (json.JSONDecodeError, TypeError):
                        parsed_args = {}
                    # Find the matching tool result.
                    result_content = ""
                    if call_id and call_id in result_by_call_id:
                        result_content = result_by_call_id[call_id].get("content", "")
                    current_turn.tool_interactions.append(
                        {
                            "tool_name": name,
                            "tool_call_id": call_id,
                            "arguments": parsed_args if isinstance(parsed_args, dict) else {},
                            "result": result_content,
                        }
                    )

            i += 1

            # After processing tool calls, consume any following tool
            # result messages (they were already paired via lookup).
            if has_tool_calls:
                while i < len(self._messages) and self._messages[i].get("role") == "tool":
                    i += 1
                # Mark that we are in a tool loop so the next
                # assistant message continues this turn.
                in_tool_loop = True

            continue

        if role == "tool":
            # Orphaned tool message not preceded by an assistant.
            # Skip it; it was already indexed in the lookup.
            i += 1
            continue

        # Unknown role; skip gracefully.
        i += 1

    # If system context was never attached (e.g., only system messages),
    # create a turn for it.
    if system_context is not None:
        turn = Turn(index=turn_index, system_context=system_context)
        turns.append(turn)

    return turns
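A trimmed-down version of the grouping rules can clarify the core loop. This sketch implements only rules 1, 2, and 5 (user opens a turn, the next assistant message completes it, stray assistant messages get their own turn); tool loops and system context are omitted for brevity, and `simple_timeline` is an illustrative name:

```python
from typing import Any

def simple_timeline(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Minimal turn grouping: each user message opens a turn and the
    following assistant message completes it."""
    turns: list[dict[str, Any]] = []
    for msg in messages:
        role = msg.get("role")
        if role == "user":
            turns.append({"index": len(turns), "user": msg.get("content"), "assistant": None})
        elif role == "assistant":
            if turns and turns[-1]["assistant"] is None:
                turns[-1]["assistant"] = msg.get("content")
            else:
                # An assistant message with no open turn gets its own.
                turns.append({"index": len(turns), "user": None, "assistant": msg.get("content")})
    return turns

sample = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi!"},
    {"role": "user", "content": "What tools do you have?"},
    {"role": "assistant", "content": "read_file and a few others."},
]
for turn in simple_timeline(sample):
    print(turn["index"], turn["user"])
```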

export

export(
    format: str = "json",
    messages: list[dict[str, Any]] | None = None,
    **kwargs: Any,
) -> str | list[dict[str, Any]]

Export messages in the specified format.

PARAMETER DESCRIPTION
format

Export format. One of "json", "markdown", "csv", or "dict".

TYPE: str DEFAULT: 'json'

messages

Optional subset of messages to export. When None, all messages held by this query instance are exported.

TYPE: list[dict[str, Any]] | None DEFAULT: None

**kwargs

Format-specific options forwarded to the underlying exporter.

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
str | list[dict[str, Any]]

A JSON, Markdown, or CSV string for string-based formats, or list[dict] for the "dict" format.

RAISES DESCRIPTION
ValueError

If format is not one of the supported formats.

Source code in src/mamba_agents/agent/messages.py
def export(
    self,
    format: str = "json",
    messages: list[dict[str, Any]] | None = None,
    **kwargs: Any,
) -> str | list[dict[str, Any]]:
    """Export messages in the specified format.

    Args:
        format: Export format. One of ``"json"``, ``"markdown"``,
            ``"csv"``, or ``"dict"``.
        messages: Optional subset of messages to export. When *None*,
            all messages held by this query instance are exported.
        **kwargs: Format-specific options forwarded to the
            underlying exporter.

    Returns:
        A JSON, Markdown, or CSV string for string-based formats,
        or ``list[dict]`` for the ``"dict"`` format.

    Raises:
        ValueError: If *format* is not one of the supported formats.
    """
    if format not in self._VALID_FORMATS:
        raise ValueError(
            f"Invalid export format: {format!r}. "
            f"Valid formats: {', '.join(self._VALID_FORMATS)}"
        )

    target_messages = messages if messages is not None else self._messages

    dispatch: dict[str, Any] = {
        "json": self._export_json,
        "markdown": self._export_markdown,
        "csv": self._export_csv,
        "dict": self._export_dict,
    }

    exporter = dispatch.get(format)
    if exporter is None:
        raise NotImplementedError(f"Export format {format!r} is not yet implemented.")
    return exporter(target_messages, **kwargs)
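The dispatch-table pattern above maps a format name to an exporter function. A standalone sketch with two formats (JSON via the stdlib, plus a hypothetical minimal Markdown renderer that is not the library's actual output format):

```python
import json
from typing import Any

def export_messages(messages: list[dict[str, Any]], format: str = "json") -> str:
    """Dispatch-table export, mirroring the shape of MessageQuery.export."""
    def to_json(msgs: list[dict[str, Any]]) -> str:
        return json.dumps(msgs, indent=2)

    def to_markdown(msgs: list[dict[str, Any]]) -> str:
        # Illustrative rendering only: one bold role heading per message.
        return "\n\n".join(f"**{m.get('role', '?')}**: {m.get('content', '')}" for m in msgs)

    dispatch = {"json": to_json, "markdown": to_markdown}
    exporter = dispatch.get(format)
    if exporter is None:
        # Unknown formats raise ValueError, as the real method does.
        raise ValueError(
            f"Invalid export format: {format!r}. Valid formats: {', '.join(dispatch)}"
        )
    return exporter(messages)

sample = [{"role": "user", "content": "Hello!"}]
print(export_messages(sample, format="markdown"))  # **user**: Hello!
```

Keeping the validation separate from the dispatch lookup lets the error message enumerate the supported formats in one place.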

print_stats

print_stats(
    *,
    preset: str = "detailed",
    format: str = "rich",
    console: Console | None = None,
    **options: Any,
) -> str

Render message statistics as a formatted table.

Computes statistics via stats() and delegates to the standalone mamba_agents.agent.display.print_stats function for rendering. All parameters are forwarded directly.

PARAMETER DESCRIPTION
preset

Named preset ("compact", "detailed", or "verbose").

TYPE: str DEFAULT: 'detailed'

format

Output format ("rich", "plain", or "html").

TYPE: str DEFAULT: 'rich'

console

Optional Rich Console instance. Only used when format is "rich".

TYPE: Console | None DEFAULT: None

**options

Keyword overrides applied to the resolved preset (e.g., show_tokens=False).

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
str

The rendered string.

RAISES DESCRIPTION
ValueError

If preset or format is not recognised.

Example::

    agent.messages.print_stats()  # Rich table to terminal
    agent.messages.print_stats(format="plain")  # ASCII table
    agent.messages.print_stats(preset="compact", show_tokens=True)

Source code in src/mamba_agents/agent/messages.py
def print_stats(
    self,
    *,
    preset: str = "detailed",
    format: str = "rich",
    console: Console | None = None,
    **options: Any,
) -> str:
    """Render message statistics as a formatted table.

    Computes statistics via :meth:`stats` and delegates to the
    standalone :func:`~mamba_agents.agent.display.print_stats` function
    for rendering. All parameters are forwarded directly.

    Args:
        preset: Named preset (``"compact"``, ``"detailed"``, or
            ``"verbose"``).
        format: Output format (``"rich"``, ``"plain"``, or ``"html"``).
        console: Optional Rich ``Console`` instance. Only used when
            *format* is ``"rich"``.
        **options: Keyword overrides applied to the resolved preset
            (e.g., ``show_tokens=False``).

    Returns:
        The rendered string.

    Raises:
        ValueError: If *preset* or *format* is not recognised.

    Example::

        agent.messages.print_stats()  # Rich table to terminal
        agent.messages.print_stats(format="plain")  # ASCII table
        agent.messages.print_stats(preset="compact", show_tokens=True)
    """
    from mamba_agents.agent.display.functions import (
        print_stats as _print_stats,
    )

    stats = self.stats()
    return _print_stats(stats, preset=preset, format=format, console=console, **options)

print_timeline

print_timeline(
    *,
    preset: str = "detailed",
    format: str = "rich",
    console: Console | None = None,
    **options: Any,
) -> str

Render the conversation timeline as a formatted display.

Parses messages into turns via timeline() and delegates to the standalone mamba_agents.agent.display.print_timeline function for rendering. All parameters are forwarded directly.

PARAMETER DESCRIPTION
preset

Named preset ("compact", "detailed", or "verbose").

TYPE: str DEFAULT: 'detailed'

format

Output format ("rich", "plain", or "html").

TYPE: str DEFAULT: 'rich'

console

Optional Rich Console instance. Only used when format is "rich".

TYPE: Console | None DEFAULT: None

**options

Keyword overrides applied to the resolved preset (e.g., limit=10).

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
str

The rendered string.

RAISES DESCRIPTION
ValueError

If preset or format is not recognised.

Example::

    agent.messages.print_timeline()  # Rich panels to terminal
    agent.messages.print_timeline(format="plain")  # ASCII timeline
    agent.messages.print_timeline(preset="compact", limit=5)

Source code in src/mamba_agents/agent/messages.py
def print_timeline(
    self,
    *,
    preset: str = "detailed",
    format: str = "rich",
    console: Console | None = None,
    **options: Any,
) -> str:
    """Render the conversation timeline as a formatted display.

    Parses messages into turns via :meth:`timeline` and delegates to the
    standalone :func:`~mamba_agents.agent.display.print_timeline` function
    for rendering. All parameters are forwarded directly.

    Args:
        preset: Named preset (``"compact"``, ``"detailed"``, or
            ``"verbose"``).
        format: Output format (``"rich"``, ``"plain"``, or ``"html"``).
        console: Optional Rich ``Console`` instance. Only used when
            *format* is ``"rich"``.
        **options: Keyword overrides applied to the resolved preset
            (e.g., ``limit=10``).

    Returns:
        The rendered string.

    Raises:
        ValueError: If *preset* or *format* is not recognised.

    Example::

        agent.messages.print_timeline()  # Rich panels to terminal
        agent.messages.print_timeline(format="plain")  # ASCII timeline
        agent.messages.print_timeline(preset="compact", limit=5)
    """
    from mamba_agents.agent.display.functions import (
        print_timeline as _print_timeline,
    )

    turns = self.timeline()
    return _print_timeline(turns, preset=preset, format=format, console=console, **options)

print_tools

print_tools(
    *,
    preset: str = "detailed",
    format: str = "rich",
    console: Console | None = None,
    **options: Any,
) -> str

Render a tool usage summary as a formatted table.

Computes tool call summaries via tool_summary() and delegates to the standalone mamba_agents.agent.display.print_tools function for rendering. All parameters are forwarded directly.

PARAMETER DESCRIPTION
preset

Named preset ("compact", "detailed", or "verbose").

TYPE: str DEFAULT: 'detailed'

format

Output format ("rich", "plain", or "html").

TYPE: str DEFAULT: 'rich'

console

Optional Rich Console instance. Only used when format is "rich".

TYPE: Console | None DEFAULT: None

**options

Keyword overrides applied to the resolved preset (e.g., show_tool_details=True).

TYPE: Any DEFAULT: {}

RETURNS DESCRIPTION
str

The rendered string.

RAISES DESCRIPTION
ValueError

If preset or format is not recognised.

Example::

    agent.messages.print_tools()  # Rich table to terminal
    agent.messages.print_tools(format="plain")  # ASCII table
    agent.messages.print_tools(preset="verbose", show_tool_details=True)

Source code in src/mamba_agents/agent/messages.py
def print_tools(
    self,
    *,
    preset: str = "detailed",
    format: str = "rich",
    console: Console | None = None,
    **options: Any,
) -> str:
    """Render a tool usage summary as a formatted table.

    Computes tool call summaries via :meth:`tool_summary` and delegates to
    the standalone :func:`~mamba_agents.agent.display.print_tools` function
    for rendering. All parameters are forwarded directly.

    Args:
        preset: Named preset (``"compact"``, ``"detailed"``, or
            ``"verbose"``).
        format: Output format (``"rich"``, ``"plain"``, or ``"html"``).
        console: Optional Rich ``Console`` instance. Only used when
            *format* is ``"rich"``.
        **options: Keyword overrides applied to the resolved preset
            (e.g., ``show_tool_details=True``).

    Returns:
        The rendered string.

    Raises:
        ValueError: If *preset* or *format* is not recognised.

    Example::

        agent.messages.print_tools()  # Rich table to terminal
        agent.messages.print_tools(format="plain")  # ASCII table
        agent.messages.print_tools(preset="verbose", show_tool_details=True)
    """
    from mamba_agents.agent.display.functions import (
        print_tools as _print_tools,
    )

    tools = self.tool_summary()
    return _print_tools(tools, preset=preset, format=format, console=console, **options)

MessageStats

MessageStats dataclass

MessageStats(
    total_messages: int = 0,
    messages_by_role: dict[str, int] = dict(),
    total_tokens: int = 0,
    tokens_by_role: dict[str, int] = dict(),
)

Token and message count statistics for a conversation.

ATTRIBUTE DESCRIPTION
total_messages

Total number of messages in the conversation.

TYPE: int

messages_by_role

Count of messages grouped by role (user, assistant, tool, system).

TYPE: dict[str, int]

total_tokens

Total estimated token count across all messages.

TYPE: int

tokens_by_role

Token counts grouped by role.

TYPE: dict[str, int]

total_messages class-attribute instance-attribute

total_messages: int = 0

messages_by_role class-attribute instance-attribute

messages_by_role: dict[str, int] = field(
    default_factory=dict
)

total_tokens class-attribute instance-attribute

total_tokens: int = 0

tokens_by_role class-attribute instance-attribute

tokens_by_role: dict[str, int] = field(default_factory=dict)

avg_tokens_per_message property

avg_tokens_per_message: float

Average tokens per message.

RETURNS DESCRIPTION
float

The average, or 0.0 if there are no messages.

ToolCallInfo

ToolCallInfo dataclass

ToolCallInfo(
    tool_name: str,
    call_count: int = 0,
    arguments: list[dict[str, Any]] = list(),
    results: list[str] = list(),
    tool_call_ids: list[str] = list(),
)

Summary of a single tool's usage across a conversation.

ATTRIBUTE DESCRIPTION
tool_name

Name of the tool.

TYPE: str

call_count

Number of times the tool was called.

TYPE: int

arguments

List of argument dicts passed to each invocation.

TYPE: list[dict[str, Any]]

results

List of result summary strings from each invocation.

TYPE: list[str]

tool_call_ids

List of tool_call_id strings linking calls to results.

TYPE: list[str]

tool_name instance-attribute

tool_name: str

call_count class-attribute instance-attribute

call_count: int = 0

arguments class-attribute instance-attribute

arguments: list[dict[str, Any]] = field(
    default_factory=list
)

results class-attribute instance-attribute

results: list[str] = field(default_factory=list)

tool_call_ids class-attribute instance-attribute

tool_call_ids: list[str] = field(default_factory=list)

Turn

Turn dataclass

Turn(
    index: int = 0,
    user_content: str | None = None,
    assistant_content: str | None = None,
    tool_interactions: list[dict[str, Any]] = list(),
    system_context: str | None = None,
)

A logical conversation turn grouping related messages.

A turn represents one exchange cycle: a user prompt, the assistant's response, and any tool call/result pairs that occurred.

ATTRIBUTE DESCRIPTION
index

Zero-based position of this turn in the conversation.

TYPE: int

user_content

The user's message content, or None if absent.

TYPE: str | None

assistant_content

The assistant's text response, or None if absent.

TYPE: str | None

tool_interactions

List of dicts, each containing tool call and result pairs.

TYPE: list[dict[str, Any]]

system_context

System prompt content attached to this turn, or None.

TYPE: str | None

index class-attribute instance-attribute

index: int = 0

user_content class-attribute instance-attribute

user_content: str | None = None

assistant_content class-attribute instance-attribute

assistant_content: str | None = None

tool_interactions class-attribute instance-attribute

tool_interactions: list[dict[str, Any]] = field(
    default_factory=list
)

system_context class-attribute instance-attribute

system_context: str | None = None