context¶
Full name: tenets.models.context
Context models for prompt processing and result handling.
This module defines the data structures for managing context throughout the distillation and instillation process.
Classes¶
PromptContext dataclass¶
PromptContext(text: str, original: Optional[str] = None, keywords: list[str] = list(), task_type: str = 'general', intent: str = 'understand', entities: list[dict[str, Any]] = list(), file_patterns: list[str] = list(), focus_areas: list[str] = list(), temporal_context: Optional[dict[str, Any]] = None, scope: dict[str, Any] = dict(), external_context: Optional[dict[str, Any]] = None, metadata: dict[str, Any] = dict(), confidence_scores: dict[str, float] = dict(), session_id: Optional[str] = None, timestamp: datetime = datetime.now(), include_tests: bool = False)
Context extracted from user prompt.
Contains all information parsed from the prompt to guide file selection and ranking. This is the primary data structure that flows through the system after prompt parsing.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| `text` | The processed prompt text (cleaned and normalized) |
| `original` | Original input (may be a URL or raw text) |
| `keywords` | Extracted keywords for searching |
| `task_type` | Type of task detected |
| `intent` | User intent classification |
| `entities` | Named entities found (classes, functions, modules) |
| `file_patterns` | File patterns to match (`.py`, `test_`, etc.) |
| `focus_areas` | Areas to focus on (auth, api, database, etc.) |
| `temporal_context` | Time-related context (recent, yesterday, etc.) |
| `scope` | Scope indicators (modules, directories, exclusions) |
| `external_context` | Context from external sources (GitHub, JIRA) |
| `metadata` | Additional metadata for processing |
| `confidence_scores` | Confidence scores for various extractions |
| `session_id` | Associated session, if any |
| `timestamp` | When the context was created |
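The `list()` and `dict()` defaults in the signature above are how documentation renderers typically display `dataclasses.field(default_factory=...)`, which avoids Python's shared-mutable-default pitfall. A minimal standalone sketch of that pattern (the class and fields here are an illustrative subset, not the real `PromptContext`):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Optional


@dataclass
class MiniPromptContext:
    # Illustrative subset of the fields shown above; not the library's class.
    text: str
    keywords: list[str] = field(default_factory=list)
    metadata: dict[str, Any] = field(default_factory=dict)
    session_id: Optional[str] = None
    timestamp: datetime = field(default_factory=datetime.now)


# Each instance gets its own fresh containers, so mutating one
# instance's keywords never leaks into another instance.
a = MiniPromptContext(text="fix auth bug")
b = MiniPromptContext(text="add api docs")
a.keywords.append("auth")
```

With a plain `keywords: list[str] = []` default, `a` and `b` would share one list; `default_factory` gives each instance its own.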
Functions¶
add_keyword¶
Add a keyword with confidence score.
Source code in tenets/models/context.py
add_entity¶
add_focus_area¶
merge_with¶
Merge this context with another.
Source code in tenets/models/context.py
```python
def merge_with(self, other: "PromptContext") -> "PromptContext":
    """Merge this context with another."""
    # Merge keywords
    for kw in other.keywords:
        self.add_keyword(kw)
    # Merge entities
    self.entities.extend(other.entities)
    # Merge file patterns
    self.file_patterns.extend(
        [fp for fp in other.file_patterns if fp not in self.file_patterns]
    )
    # Merge focus areas
    for area in other.focus_areas:
        self.add_focus_area(area)
    # Merge metadata
    self.metadata.update(other.metadata)
    return self
```
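The file-pattern merge above appends only patterns not already present, preserving order. The same idiom can be exercised on plain lists (`merge_unique` is a hypothetical helper written for this sketch, not a library function):

```python
def merge_unique(existing: list[str], incoming: list[str]) -> list[str]:
    """Append items from incoming that existing lacks, preserving order
    (the same list-comprehension idiom merge_with uses for file_patterns)."""
    existing.extend([fp for fp in incoming if fp not in existing])
    return existing


patterns = ["*.py", "test_*"]
merge_unique(patterns, ["test_*", "*.md"])
```

Duplicates from the incoming list are skipped while new entries keep their relative order.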
to_dict¶
Convert to dictionary representation.
Source code in tenets/models/context.py
def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary representation."""
return {
"text": self.text,
"original": self.original,
"keywords": self.keywords,
"task_type": self.task_type,
"intent": self.intent,
"entities": self.entities,
"file_patterns": self.file_patterns,
"focus_areas": self.focus_areas,
"temporal_context": self.temporal_context,
"scope": self.scope,
"external_context": self.external_context,
"metadata": self.metadata,
"confidence_scores": self.confidence_scores,
"session_id": self.session_id,
"timestamp": self.timestamp.isoformat(),
}
from_dict classmethod¶
Create PromptContext from dictionary.
Source code in tenets/models/context.py
get_hash¶
Compute a deterministic cache key for this prompt context.
The hash incorporates the normalized prompt text, task type, and the ordered list of unique keywords. MD5 is chosen (with usedforsecurity=False) for speed; collision risk is acceptable for internal memoization.

| RETURNS | DESCRIPTION |
| --- | --- |
| `str` | Hex digest suitable for use as an internal cache key. |
Source code in tenets/models/context.py
```python
def get_hash(self) -> str:
    """Compute a deterministic cache key for this prompt context.

    The hash incorporates the normalized prompt text, task type, and the
    ordered list of unique keywords. MD5 is chosen (with
    ``usedforsecurity=False``) for speed; collision risk is acceptable for
    internal memoization.

    Returns:
        str: Hex digest suitable for use as an internal cache key.
    """
    key_data = f"{self.text}_{self.task_type}_{sorted(self.keywords)}"
    # nosec B324 - MD5 used only for non-security cache key generation
    return hashlib.md5(key_data.encode(), usedforsecurity=False).hexdigest()  # nosec
```
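A standalone sketch of the same cache-key recipe shows why sorting the keywords matters: extraction order no longer affects the key. (This uses plain `hashlib.md5` for portability; the real method additionally passes `usedforsecurity=False`, and `cache_key` here is a hypothetical free function, not the method itself.)

```python
import hashlib


def cache_key(text: str, task_type: str, keywords: list[str]) -> str:
    # Sorting makes the key independent of keyword extraction order,
    # so logically identical contexts hash to the same digest.
    key_data = f"{text}_{task_type}_{sorted(keywords)}"
    return hashlib.md5(key_data.encode()).hexdigest()  # nosec - cache key only


k1 = cache_key("fix login bug", "debug", ["login", "auth"])
k2 = cache_key("fix login bug", "debug", ["auth", "login"])
```

Both calls produce the same 32-character hex digest, which is what makes the key usable for memoization.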
ContextResult dataclass¶
ContextResult(content: Optional[str] = None, context: Optional[str] = None, format: str = 'markdown', token_count: int = 0, files: list[str] = list(), files_included: list[str] = list(), files_summarized: list[str] = list(), metadata: dict[str, Any] = dict(), session_id: Optional[str] = None, timestamp: datetime = datetime.now(), statistics: dict[str, Any] = dict(), prompt_context: Optional[PromptContext] = None, cost_estimate: Optional[dict[str, float]] = None, warnings: list[str] = list(), errors: list[str] = list())
Result of context generation.
Contains the generated context ready for consumption by LLMs or other tools. This is the final output of the distillation process.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| `content` | The generated context content (preferred alias) |
| `context` | Backward-compatible alias for `content` |
| `format` | Output format (markdown, xml, json) |
| `token_count` | Number of tokens in the context |
| `files` | List of included file paths (preferred alias) |
| `files_included` | Backward-compatible alias for `files` |
| `files_summarized` | List of summarized file paths |
| `metadata` | Additional metadata about generation. When timing is enabled, includes a `timing` dict with `duration` (float seconds), `formatted_duration` (human-readable string, e.g. `"2.34s"`), `start_datetime`, and `end_datetime` (ISO-format times) |
| `session_id` | Session this result belongs to |
| `timestamp` | When the context was generated |
| `statistics` | Generation statistics |
| `prompt_context` | Original prompt context |
| `cost_estimate` | Estimated cost for LLM usage |
| `warnings` | Any warnings during generation |
| `errors` | Any errors during generation |
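Assuming the timing shape described for `metadata` above (key names taken from that description; how the library populates them internally is not shown here), building such an entry might look like:

```python
import time
from datetime import datetime, timezone

start = datetime.now(timezone.utc)
t0 = time.perf_counter()
# ... context generation would happen here ...
duration = time.perf_counter() - t0

timing = {
    "duration": duration,                        # float seconds
    "formatted_duration": f"{duration:.2f}s",    # human-readable, e.g. "2.34s"
    "start_datetime": start.isoformat(),         # ISO format start time
    "end_datetime": datetime.now(timezone.utc).isoformat(),
}
metadata = {"timing": timing}
```

Keeping both the raw float and a formatted string lets callers compute with one and display the other without re-parsing.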
Functions¶
add_warning¶
add_error¶
update_statistics¶
to_dict¶
Convert to dictionary representation.
Source code in tenets/models/context.py
```python
def to_dict(self) -> dict[str, Any]:
    """Convert to dictionary representation."""
    data = {
        # Prefer normalized keys expected by tests
        "content": self.content,
        "format": self.format,
        "token_count": self.token_count,
        "files": list(self.files),
        # Include legacy keys for backward compatibility
        "context": self.context,
        "files_included": list(self.files_included),
        "files_summarized": list(self.files_summarized),
        "metadata": self.metadata,
        "session_id": self.session_id,
        "timestamp": self.timestamp.isoformat(),
        "statistics": self.statistics,
        "cost_estimate": self.cost_estimate,
        "warnings": self.warnings,
        "errors": self.errors,
    }
    if self.prompt_context:
        data["prompt_context"] = self.prompt_context.to_dict()
    return data
```
from_dict classmethod¶
Create from dictionary.
Source code in tenets/models/context.py
```python
@classmethod
def from_dict(cls, data: dict[str, Any]) -> "ContextResult":
    """Create from dictionary."""
    if "timestamp" in data and isinstance(data["timestamp"], str):
        data["timestamp"] = datetime.fromisoformat(data["timestamp"])
    if "prompt_context" in data and isinstance(data["prompt_context"], dict):
        data["prompt_context"] = PromptContext.from_dict(data["prompt_context"])
    # Normalize alias keys on load
    if "context" in data and "content" not in data:
        data["content"] = data["context"]
    if "files_included" in data and "files" not in data:
        data["files"] = data["files_included"]
    return cls(**data)
```
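The alias normalization in `from_dict` can be checked on a plain dict before any dataclass is involved (`normalize_aliases` is a hypothetical helper extracted for this sketch):

```python
from typing import Any


def normalize_aliases(data: dict[str, Any]) -> dict[str, Any]:
    # Legacy payloads may carry "context"/"files_included" instead of
    # "content"/"files"; fill the preferred keys without clobbering them.
    if "context" in data and "content" not in data:
        data["content"] = data["context"]
    if "files_included" in data and "files" not in data:
        data["files"] = data["files_included"]
    return data


legacy = {"context": "# Notes", "files_included": ["a.py"]}
normalize_aliases(legacy)
```

After normalization the dict satisfies both old and new consumers, which is why `to_dict` emits both key sets symmetrically.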
save_to_file¶
Save context result to file.
Source code in tenets/models/context.py
get_summary¶
Get a summary of the context result.
Source code in tenets/models/context.py
```python
def get_summary(self) -> str:
    """Get a summary of the context result."""
    lines = [
        "Context Result Summary:",
        f"  Format: {self.format}",
        f"  Token Count: {self.token_count:,}",
        f"  Files Included: {len(self.files_included)}",
        f"  Files Summarized: {len(self.files_summarized)}",
    ]
    if self.cost_estimate:
        lines.append(f"  Estimated Cost: ${self.cost_estimate.get('total_cost', 0):.4f}")
    if self.warnings:
        lines.append(f"  Warnings: {len(self.warnings)}")
    if self.errors:
        lines.append(f"  Errors: {len(self.errors)}")
    return "\n".join(lines)
```
SessionContext dataclass¶
SessionContext(session_id: str, name: str = '', project_root: Optional[Path] = None, shown_files: set[str] = set(), ignored_files: set[str] = set(), context_history: list[ContextResult] = list(), current_focus: list[str] = list(), tenets_applied: list[str] = list(), created_at: datetime = datetime.now(), updated_at: datetime = datetime.now(), metadata: dict[str, Any] = dict(), ai_requests: list[dict[str, Any]] = list(), branch: Optional[str] = None, pinned_files: set[str] = set())
Context for a session.
Maintains state across multiple prompts in a session for incremental context building and state management.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| `session_id` | Unique session identifier |
| `name` | Human-readable session name |
| `project_root` | Root path of the project |
| `shown_files` | Files explicitly shown |
| `ignored_files` | Files to ignore |
| `context_history` | History of generated contexts |
| `current_focus` | Current focus areas |
| `tenets_applied` | Tenets applied in the session |
| `created_at` | When the session was created |
| `updated_at` | Last update time |
| `metadata` | Session metadata |
| `ai_requests` | History of AI requests |
| `branch` | Git branch, if applicable |
| `pinned_files` | Files pinned so they are always considered for future distill operations |
Functions¶
add_shown_file¶
add_ignored_file¶
add_context¶
add_ai_request¶
Record an AI request.
Source code in tenets/models/context.py
add_pinned_file¶
Pin a file so it is always considered for future distill operations.
| PARAMETER | DESCRIPTION |
| --- | --- |
| `file_path` | Absolute or project-relative path to the file. |
Source code in tenets/models/context.py
list_pinned_files¶
get_latest_context¶
should_show_file¶
Check if file should be shown based on session state.
to_dict¶
Convert to dictionary representation.
Source code in tenets/models/context.py
```python
def to_dict(self) -> dict[str, Any]:
    """Convert to dictionary representation."""
    return {
        "session_id": self.session_id,
        "name": self.name,
        "project_root": str(self.project_root) if self.project_root else None,
        "shown_files": list(self.shown_files),
        "ignored_files": list(self.ignored_files),
        "context_history": [c.to_dict() for c in self.context_history],
        "current_focus": self.current_focus,
        "tenets_applied": self.tenets_applied,
        "created_at": self.created_at.isoformat(),
        "updated_at": self.updated_at.isoformat(),
        "metadata": self.metadata,
        "ai_requests": self.ai_requests,
        "branch": self.branch,
    }
```
from_dict classmethod¶
Create from dictionary.
Source code in tenets/models/context.py
```python
@classmethod
def from_dict(cls, data: dict[str, Any]) -> "SessionContext":
    """Create from dictionary."""
    if "created_at" in data and isinstance(data["created_at"], str):
        data["created_at"] = datetime.fromisoformat(data["created_at"])
    if "updated_at" in data and isinstance(data["updated_at"], str):
        data["updated_at"] = datetime.fromisoformat(data["updated_at"])
    if "shown_files" in data:
        data["shown_files"] = set(data["shown_files"])
    if "ignored_files" in data:
        data["ignored_files"] = set(data["ignored_files"])
    if "context_history" in data:
        data["context_history"] = [
            ContextResult.from_dict(c) if isinstance(c, dict) else c
            for c in data["context_history"]
        ]
    if data.get("project_root"):
        data["project_root"] = Path(data["project_root"])
    return cls(**data)
```
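The `to_dict`/`from_dict` round-trip rests on two lossless conversions: `datetime` to ISO strings and `set` to `list`. A self-contained check of those conversions (standard library only; the dict keys mirror the fields above but no library code is involved):

```python
from datetime import datetime

original = {
    "created_at": datetime(2024, 1, 2, 3, 4, 5),
    "shown_files": {"a.py", "b.py"},
}

# to_dict direction: only JSON-friendly types on the wire.
wire = {
    "created_at": original["created_at"].isoformat(),
    "shown_files": sorted(original["shown_files"]),
}

# from_dict direction: restore the rich types from strings and lists.
restored = {
    "created_at": datetime.fromisoformat(wire["created_at"]),
    "shown_files": set(wire["shown_files"]),
}
```

Because `fromisoformat` inverts `isoformat` and `set` inverts `list` (order aside), the restored dict equals the original, which is what makes session persistence safe.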