tenets
Package¶
Main package for Tenets - Context that feeds your prompts.
Tenets is a code intelligence platform that analyzes codebases locally to surface relevant files, track development velocity, and build optimal context for both human understanding and AI pair programming - all without making any LLM API calls.
This package provides the top-level API documented below: configuration, code analysis, context distillation, and the tenet system.
Example
Basic usage for context extraction:
```python
from tenets import Tenets

ten = Tenets()
result = ten.distill("implement OAuth2 authentication")
print(result.context)
```
With tenet system:
ten.add_tenet("Always use type hints in Python", priority="high") ten.instill_tenets() result = ten.distill("add user model") # Context now includes tenets
Classes¶
TenetsConfig dataclass¶
TenetsConfig(config_file: Optional[Path] = None, project_root: Optional[Path] = None, max_tokens: int = 100000, version: str = '0.1.0', debug: bool = False, quiet: bool = False, scanner: ScannerConfig = ScannerConfig(), ranking: RankingConfig = RankingConfig(), summarizer: SummarizerConfig = SummarizerConfig(), tenet: TenetConfig = TenetConfig(), cache: CacheConfig = CacheConfig(), output: OutputConfig = OutputConfig(), git: GitConfig = GitConfig(), llm: LLMConfig = LLMConfig(), nlp: NLPConfig = NLPConfig(), custom: Dict[str, Any] = dict())
Main configuration for the Tenets system with LLM and NLP support.
This is the root configuration object that contains all subsystem configs and global settings. It handles loading from files, environment variables, and provides sensible defaults.
ATTRIBUTE | DESCRIPTION |
---|---|
config_file | Path to configuration file (if any) |
project_root | Root directory of the project |
max_tokens | Default maximum tokens for context |
version | Tenets version (for compatibility checking) |
debug | Enable debug mode |
quiet | Suppress non-essential output |
scanner | Scanner subsystem configuration |
ranking | Ranking subsystem configuration |
summarizer | Summarizer subsystem configuration |
tenet | Tenet subsystem configuration |
cache | Cache subsystem configuration |
output | Output formatting configuration |
git | Git integration configuration |
llm | LLM integration configuration |
nlp | NLP system configuration |
custom | Custom user configuration |
Attributes¶
Class attributes (also instance attributes): config_file, project_root, max_tokens, version, debug, quiet, scanner, ranking, summarizer, tenet, cache, output, git, llm, nlp, custom.

Properties (writable unless noted otherwise):

- exclude_minified - Get exclude_minified setting from scanner config.
- minified_patterns - Get minified patterns from scanner config.
- build_directory_patterns - Get build directory patterns from scanner config.
- respect_gitignore - Whether to respect .gitignore files.
- additional_ignore_patterns - Get additional ignore patterns.
- auto_instill_tenets - Whether to automatically instill tenets.
- max_tenets_per_context - Maximum tenets to inject per context.
- tenet_injection_config (read-only) - Get tenet injection configuration.
- nlp_embeddings_enabled - Whether NLP embeddings are enabled.
Functions¶
to_dict¶
save¶
Save configuration to file.
PARAMETER | DESCRIPTION |
---|---|
path | Path to save to (uses config_file if not specified) |
RAISES | DESCRIPTION |
---|---|
ValueError | If no path specified and config_file not set |
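A minimal sketch of constructing and saving a configuration; attribute names follow the signature above, while the output filename is only illustrative:

```python
from pathlib import Path

from tenets.config import TenetsConfig

config = TenetsConfig(max_tokens=50000, debug=True)
config.respect_gitignore = False  # writable property backed by the scanner config
config.save(Path("tenets-config.yml"))  # without a path, save() falls back to config_file or raises ValueError
```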
get_llm_api_key¶
get_llm_model¶
CodeAnalyzer¶
Main code analysis orchestrator.
Coordinates language-specific analyzers and provides a unified interface for analyzing source code files. Handles caching, parallel processing, analyzer selection, and fallback strategies.
ATTRIBUTE | DESCRIPTION |
---|---|
config | TenetsConfig instance for configuration |
logger | Logger instance for logging |
cache | AnalysisCache for caching analysis results |
analyzers | Dictionary mapping file extensions to analyzer instances |
stats | Analysis statistics and metrics |
Initialize the code analyzer.
PARAMETER | DESCRIPTION |
---|---|
config | Tenets configuration object |
Attributes¶
Instance attributes: config, logger, cache, analyzers, stats.
Functions¶
analyze_file¶
analyze_file(file_path: Path, deep: bool = False, extract_keywords: bool = True, use_cache: bool = True, progress_callback: Optional[Callable] = None) -> FileAnalysis
Analyze a single file.
Performs language-specific analysis on a file, extracting imports, structure, complexity metrics, and other relevant information.
PARAMETER | DESCRIPTION |
---|---|
file_path | Path to the file to analyze |
deep | Whether to perform deep analysis (AST parsing, etc.) |
extract_keywords | Whether to extract keywords from content |
use_cache | Whether to use cached results if available |
progress_callback | Optional callback for progress updates |
RETURNS | DESCRIPTION |
---|---|
FileAnalysis | FileAnalysis object with complete analysis results |
RAISES | DESCRIPTION |
---|---|
FileNotFoundError | If file doesn't exist |
PermissionError | If file cannot be read |
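A short usage sketch for analyze_file(), assuming CodeAnalyzer can be imported from the top-level package as this page lists it; the analyzed path is hypothetical:

```python
from pathlib import Path

from tenets import CodeAnalyzer
from tenets.config import TenetsConfig

analyzer = CodeAnalyzer(TenetsConfig())
try:
    analysis = analyzer.analyze_file(Path("src/app.py"), deep=True)  # hypothetical file
except FileNotFoundError:
    analysis = None  # documented to raise when the file does not exist
```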
analyze_files¶
analyze_files(file_paths: list[Path], deep: bool = False, parallel: bool = True, progress_callback: Optional[Callable] = None) -> list[FileAnalysis]
Analyze multiple files.
PARAMETER | DESCRIPTION |
---|---|
file_paths | List of file paths to analyze |
deep | Whether to perform deep analysis |
parallel | Whether to analyze files in parallel |
progress_callback | Optional callback for progress updates |
RETURNS | DESCRIPTION |
---|---|
list[FileAnalysis] | List of FileAnalysis objects |
analyze_project¶
analyze_project(project_path: Path, patterns: Optional[list[str]] = None, exclude_patterns: Optional[list[str]] = None, deep: bool = True, parallel: bool = True, progress_callback: Optional[Callable] = None) -> ProjectAnalysis
Analyze an entire project.
PARAMETER | DESCRIPTION |
---|---|
project_path | Path to the project root |
patterns | File patterns to include (e.g., ['*.py', '*.js']) |
exclude_patterns | File patterns to exclude |
deep | Whether to perform deep analysis |
parallel | Whether to analyze files in parallel |
progress_callback | Optional callback for progress updates |
RETURNS | DESCRIPTION |
---|---|
ProjectAnalysis | ProjectAnalysis object with complete project analysis |
generate_report¶
generate_report(analysis: Union[FileAnalysis, ProjectAnalysis, list[FileAnalysis]], format: str = 'json', output_path: Optional[Path] = None) -> AnalysisReport
Generate an analysis report.
PARAMETER | DESCRIPTION |
---|---|
analysis | Analysis results to report on |
format | Report format ('json', 'html', 'markdown', 'csv') |
output_path | Optional path to save the report |
RETURNS | DESCRIPTION |
---|---|
AnalysisReport | AnalysisReport object |
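A combined sketch of analyze_project() and generate_report() under the same assumptions as above; the glob patterns and output path are illustrative:

```python
from pathlib import Path

from tenets import CodeAnalyzer
from tenets.config import TenetsConfig

analyzer = CodeAnalyzer(TenetsConfig())
project = analyzer.analyze_project(
    Path("."),
    patterns=["*.py"],            # include only Python files (illustrative)
    exclude_patterns=["test_*"],  # skip test modules (illustrative)
    deep=True,
    parallel=True,
)
report = analyzer.generate_report(project, format="html", output_path=Path("analysis.html"))
```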
Distiller¶
Orchestrates context extraction from codebases.
The Distiller is the main engine that powers the 'distill' command. It coordinates all the components to extract the most relevant context based on a user's prompt.
Initialize the distiller with configuration.
PARAMETER | DESCRIPTION |
---|---|
config | Tenets configuration |
Attributes¶
Instance attributes: config, logger, scanner, analyzer, ranker, parser, git, aggregator, optimizer, formatter.
Functions¶
distill¶
distill(prompt: str, paths: Optional[Union[str, Path, List[Path]]] = None, *, format: str = 'markdown', model: Optional[str] = None, max_tokens: Optional[int] = None, mode: str = 'balanced', include_git: bool = True, session_name: Optional[str] = None, include_patterns: Optional[List[str]] = None, exclude_patterns: Optional[List[str]] = None, full: bool = False, condense: bool = False, remove_comments: bool = False, pinned_files: Optional[List[Path]] = None, include_tests: Optional[bool] = None, docstring_weight: Optional[float] = None, summarize_imports: bool = True) -> ContextResult
Distill relevant context from codebase based on prompt.
This is the main method that extracts, ranks, and aggregates the most relevant files and information for a given prompt.
PARAMETER | DESCRIPTION |
---|---|
prompt | The user's query or task description |
paths | Paths to analyze (default: current directory) |
format | Output format (markdown, xml, json) |
model | Target LLM model for token counting |
max_tokens | Maximum tokens for context |
mode | Analysis mode (fast, balanced, thorough) |
include_git | Whether to include git context |
session_name | Session name for stateful context |
include_patterns | File patterns to include |
exclude_patterns | File patterns to exclude |
RETURNS | DESCRIPTION |
---|---|
ContextResult | ContextResult with the distilled context |
Example
```python
distiller = Distiller(config)
result = distiller.distill(
    "implement OAuth2 authentication",
    paths="./src",
    mode="thorough",
    max_tokens=50000,
)
print(result.context)
```
Instiller¶
Main orchestrator for tenet instillation with smart injection.
The Instiller manages the entire process of injecting tenets into context, including:

- Tracking injection history per session
- Analyzing context complexity
- Determining optimal injection frequency
- Selecting appropriate tenets
- Applying injection strategies
- Recording metrics and effectiveness

It supports multiple injection modes:

- Always: inject into every context
- Periodic: inject every Nth distillation
- Adaptive: smart injection based on complexity and session
- Manual: only inject when explicitly requested
Initialize the Instiller.
PARAMETER | DESCRIPTION |
---|---|
config | Configuration object |
Attributes¶
Instance attributes: config, logger, manager, injector, complexity_analyzer, metrics_tracker, session_histories, system_instruction_injected.
Functions¶
inject_system_instruction¶
inject_system_instruction(content: str, format: str = 'markdown', session: Optional[str] = None) -> Tuple[str, Dict[str, Any]]
Inject system instruction (system prompt) according to config.
Behavior:

- If the system instruction is disabled or empty, return the content unchanged.
- If a session is provided and once-per-session is enabled, inject only on the first distill.
- If no session is provided, inject on every distill.
- Placement is controlled by system_instruction_position.
- Formatting is controlled by system_instruction_format.
Returns modified content and metadata about injection.
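A hedged sketch of calling inject_system_instruction(); the content string and session name are placeholders, and the Instiller import assumes the package-level export shown on this page:

```python
from tenets import Instiller
from tenets.config import TenetsConfig

instiller = Instiller(TenetsConfig())
content, meta = instiller.inject_system_instruction(
    "# Distilled context\n...",  # previously generated context (placeholder)
    format="markdown",
    session="oauth-feature",     # with once-per-session enabled, only the first distill is modified
)
```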
instill¶
instill(context: Union[str, ContextResult], session: Optional[str] = None, force: bool = False, strategy: Optional[str] = None, max_tenets: Optional[int] = None, check_frequency: bool = True, inject_system_instruction: Optional[bool] = None) -> Union[str, ContextResult]
Instill tenets into context with smart injection.
PARAMETER | DESCRIPTION |
---|---|
context | Context to inject tenets into |
session | Session identifier for tracking |
force | Force injection regardless of frequency settings |
strategy | Override injection strategy |
max_tenets | Override maximum tenets |
check_frequency | Whether to check injection frequency |
RETURNS | DESCRIPTION |
---|---|
Union[str, ContextResult] | Modified context with tenets injected (if applicable) |
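A minimal sketch of instill(), passing a plain string since the parameter accepts either a string or a ContextResult; the session name is illustrative:

```python
from tenets import Instiller
from tenets.config import TenetsConfig

instiller = Instiller(TenetsConfig())
updated = instiller.instill(
    "# Distilled context\n...",  # or a ContextResult from the Distiller
    session="oauth-feature",
    force=True,                  # bypass frequency checks for this call
)
```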
get_session_stats¶
get_all_session_stats¶
analyze_effectiveness¶
export_instillation_history¶
export_instillation_history(output_path: Path, format: str = 'json', session: Optional[str] = None) -> None
Export instillation history to file.
PARAMETER | DESCRIPTION |
---|---|
output_path | Path to output file |
format | Export format (json or csv) |
session | Optional session filter |
RAISES | DESCRIPTION |
---|---|
ValueError | If format is not supported |
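Reusing the instiller from the sketches above, exporting one session's history to JSON might look like this (file name is illustrative):

```python
from pathlib import Path

instiller.export_instillation_history(
    Path("instill-history.json"),
    format="json",              # "csv" is also supported; anything else raises ValueError
    session="oauth-feature",
)
```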
reset_session_history¶
TenetManager¶
Manages tenets throughout their lifecycle.
Initialize the tenet manager.
PARAMETER | DESCRIPTION |
---|---|
config | Tenets configuration |
Attributes¶
Instance attributes: config, logger, storage_path, db_path.
Functions¶
add_tenet¶
add_tenet(content: Union[str, Tenet], priority: Union[str, Priority] = 'medium', category: Optional[Union[str, TenetCategory]] = None, session: Optional[str] = None, author: Optional[str] = None) -> Tenet
Add a new tenet.
PARAMETER | DESCRIPTION |
---|---|
content | The guiding principle text or a Tenet object |
priority | Priority level (low, medium, high, critical) |
category | Category for organization |
session | Bind to specific session |
author | Who created the tenet |
RETURNS | DESCRIPTION |
---|---|
Tenet | The created Tenet |
get_tenet¶
list_tenets¶
list_tenets(pending_only: bool = False, instilled_only: bool = False, session: Optional[str] = None, category: Optional[Union[str, TenetCategory]] = None) -> List[Dict[str, Any]]
List tenets with filtering.
PARAMETER | DESCRIPTION |
---|---|
pending_only | Only show pending tenets |
instilled_only | Only show instilled tenets |
session | Filter by session binding |
category | Filter by category |
RETURNS | DESCRIPTION |
---|---|
List[Dict[str, Any]] | List of tenet dictionaries |
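A sketch combining add_tenet() and list_tenets() on a TenetManager; the import assumes the package-level export listed on this page, and the tenet text is illustrative:

```python
from tenets import TenetManager
from tenets.config import TenetsConfig

manager = TenetManager(TenetsConfig())
manager.add_tenet(
    "Prefer composition over inheritance",  # illustrative principle
    priority="high",
    category="architecture",
)
for entry in manager.list_tenets(pending_only=True):
    print(entry)  # plain dictionaries, per the return type above
```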
get_pending_tenets¶
remove_tenet¶
instill_tenets¶
get_tenets_for_injection¶
export_tenets¶
import_tenets¶
import_tenets(file_path: Union[str, Path], session: Optional[str] = None, override_priority: Optional[Priority] = None) -> int
Import tenets from file.
PARAMETER | DESCRIPTION |
---|---|
file_path | Path to import file |
session | Bind imported tenets to session |
override_priority | Override priority for all imported tenets |
RETURNS | DESCRIPTION |
---|---|
int | Number of tenets imported |
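Reusing the manager from the sketch above, importing a previously exported tenet file might look like this (file name is illustrative):

```python
count = manager.import_tenets(
    "team-tenets.yaml",          # hypothetical export file
    session="oauth-feature",
    override_priority=None,      # keep the priorities stored in the file
)
print(f"Imported {count} tenets")
```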
create_collection¶
ContextResult dataclass¶
ContextResult(content: Optional[str] = None, context: Optional[str] = None, format: str = 'markdown', token_count: int = 0, files: list[str] = list(), files_included: list[str] = list(), files_summarized: list[str] = list(), metadata: dict[str, Any] = dict(), session_id: Optional[str] = None, timestamp: datetime = datetime.now(), statistics: dict[str, Any] = dict(), prompt_context: Optional[PromptContext] = None, cost_estimate: Optional[dict[str, float]] = None, warnings: list[str] = list(), errors: list[str] = list())
Result of context generation.
Contains the generated context ready for consumption by LLMs or other tools. This is the final output of the distillation process.
ATTRIBUTE | DESCRIPTION |
---|---|
content | The generated context content (preferred alias) |
context | Backward-compatible alias for content |
format | Output format (markdown, xml, json) |
token_count | Number of tokens in context |
files | List of included file paths (preferred alias) |
files_included | Backward-compatible alias for files |
files_summarized | List of summarized file paths |
metadata | Additional metadata about generation, including timing info when enabled: duration (float seconds), formatted_duration (human-readable, e.g. "2.34s"), start_datetime and end_datetime (ISO format) |
session_id | Session this belongs to |
timestamp | When context was generated |
statistics | Generation statistics |
prompt_context | Original prompt context |
cost_estimate | Estimated cost for LLM usage |
warnings | Any warnings during generation |
errors | Any errors during generation |
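Since content/context and files/files_included are aliases, reading a result might look like this minimal sketch (the prompt is illustrative):

```python
from tenets import Tenets

ten = Tenets()
result = ten.distill("implement OAuth2 authentication")
print(result.token_count)   # tokens in the generated context
print(result.files[:5])     # first few included file paths
if result.warnings:
    print(result.warnings)  # any warnings raised during generation
```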
Attributes¶
Class attributes (also instance attributes): content, context, format, token_count, files, files_included, files_summarized, metadata, session_id, timestamp, statistics, prompt_context, cost_estimate, warnings, errors.
Functions¶
Priority¶
Tenet dataclass¶
Tenet(id: str = (lambda: str(uuid.uuid4()))(), content: str = '', priority: Priority = Priority.MEDIUM, category: Optional[TenetCategory] = None, status: TenetStatus = TenetStatus.PENDING, created_at: datetime = datetime.now(), instilled_at: Optional[datetime] = None, updated_at: datetime = datetime.now(), session_bindings: list[str] = list(), author: Optional[str] = None, metrics: TenetMetrics = TenetMetrics(), injection_strategy: InjectionStrategy = InjectionStrategy(), metadata: dict[str, Any] = dict())
A guiding principle for code development.
Tenets are persistent instructions that guide AI interactions to maintain consistency across multiple prompts and sessions.
ATTRIBUTE | DESCRIPTION |
---|---|
id | Unique identifier |
content | The principle text |
priority | Importance level |
category | Classification category |
status | Current status (pending, instilled, archived) |
created_at | When the tenet was created |
instilled_at | When first instilled into context |
updated_at | Last modification time |
session_bindings | Sessions this tenet applies to |
author | Who created the tenet |
metrics | Usage and effectiveness metrics |
injection_strategy | How this tenet should be injected |
metadata | Additional custom data |
Example
```python
tenet = Tenet(
    content="Always use type hints in Python code",
    priority=Priority.HIGH,
    category=TenetCategory.STYLE,
)
```
Attributes¶
Class attributes (also instance attributes): id, content, priority, category, status, created_at, instilled_at, updated_at, session_bindings, author, metrics, injection_strategy, metadata.
Functions¶
applies_to_session¶
Check if tenet applies to a session.
should_inject¶
Determine if this tenet should be injected.
format_for_injection¶
Format tenet content for injection into context.
TenetCategory¶
Bases: Enum
Common tenet categories.
Attributes¶
Enum members: ARCHITECTURE, SECURITY, STYLE, PERFORMANCE, TESTING, DOCUMENTATION, API_DESIGN, ERROR_HANDLING, QUALITY, CUSTOM.
Tenets¶
Main API interface for the Tenets system.
This is the primary class that users interact with to access all Tenets functionality. It coordinates between the various subsystems (distiller, instiller, analyzer, etc.) to provide a unified interface.
The Tenets class can be used both programmatically through Python and via the CLI. It maintains configuration, manages sessions, and orchestrates the various analysis and context generation operations.
ATTRIBUTE | DESCRIPTION |
---|---|
config | TenetsConfig instance containing all configuration |
distiller | Distiller instance for context extraction |
instiller | Instiller instance for tenet management |
tenet_manager | Direct access to TenetManager for advanced operations |
logger | Logger instance for this class |
_session | Current session name if any |
_cache | Internal cache for results |
Example
```python
from tenets import Tenets
from pathlib import Path

# Initialize with default config
ten = Tenets()

# Or with custom config
from tenets.config import TenetsConfig
config = TenetsConfig(max_tokens=150000, ranking_algorithm="thorough")
ten = Tenets(config=config)

# Extract context (uses default session automatically)
result = ten.distill("implement user authentication")
print(f"Generated {result.token_count} tokens of context")

# Generate HTML report
result = ten.distill("review API endpoints", format="html")
Path("api-review.html").write_text(result.context)

# Add and apply tenets
ten.add_tenet("Use dependency injection", priority="high")
ten.add_tenet("Follow RESTful conventions", category="architecture")
ten.instill_tenets()

# Pin critical files for priority inclusion
ten.pin_file("src/core/auth.py")
ten.pin_folder("src/api/endpoints")

# Work with named sessions
result = ten.distill(
    "implement OAuth2",
    session_name="oauth-feature",
    mode="thorough",
)
```
Initialize Tenets with configuration.
PARAMETER | DESCRIPTION |
---|---|
config | Can be a TenetsConfig instance, a dictionary of configuration values, a path to a configuration file, or None (uses default configuration) |
RAISES | DESCRIPTION |
---|---|
ValueError | If config format is invalid |
FileNotFoundError | If config file path doesn't exist |
Attributes¶
Instance attributes: config, logger.
Functions¶
distill¶
distill(prompt: str, files: Optional[Union[str, Path, list[Path]]] = None, *, format: str = 'markdown', model: Optional[str] = None, max_tokens: Optional[int] = None, mode: str = 'balanced', include_git: bool = True, session_name: Optional[str] = None, include_patterns: Optional[list[str]] = None, exclude_patterns: Optional[list[str]] = None, apply_tenets: Optional[bool] = None, full: bool = False, condense: bool = False, remove_comments: bool = False, include_tests: Optional[bool] = None, docstring_weight: Optional[float] = None, summarize_imports: bool = True) -> ContextResult
Distill relevant context from codebase based on prompt.
This is the main method for extracting context. It analyzes your codebase, finds relevant files, ranks them by importance, and aggregates them into an optimized context that fits within token limits.
PARAMETER | DESCRIPTION |
---|---|
prompt | Your query or task description. Can be plain text or a URL to a GitHub issue, JIRA ticket, etc. |
files | Paths to analyze. Can be a single path, list of paths, or None to use current directory |
format | Output format - 'markdown', 'xml' (Claude), 'json', or 'html' (interactive report) |
model | Target LLM model for token counting (e.g., 'gpt-4o', 'claude-3-opus') |
max_tokens | Maximum tokens for context (overrides model default) |
mode | Analysis mode - 'fast', 'balanced', or 'thorough' |
include_git | Whether to include git context (commits, contributors, etc.) |
session_name | Session name for stateful context building |
include_patterns | File patterns to include (e.g., ['*.py', '*.js']) |
exclude_patterns | File patterns to exclude (e.g., ['test_*', '*.backup']) |
apply_tenets | Whether to apply tenets (None = use config default) |
RETURNS | DESCRIPTION |
---|---|
ContextResult | ContextResult containing the generated context, metadata, and statistics. When available, metadata['timing'] includes duration (seconds, e.g. 2.34), formatted_duration (e.g. '2.34s'), and start_datetime/end_datetime (ISO format, e.g. '2024-01-15T10:30:45') |
RAISES | DESCRIPTION |
---|---|
ValueError | If prompt is empty or invalid |
FileNotFoundError | If specified files don't exist |
Example
```python
# Basic usage (uses default session automatically)
result = tenets.distill("implement OAuth2 authentication")
print(result.context[:100])  # First 100 chars of context

# With specific files and options
result = tenets.distill(
    "add caching layer",
    files="./src",
    mode="thorough",
    max_tokens=50000,
    include_patterns=["*.py"],
    exclude_patterns=["test_*.py"],
)

# Generate HTML report
result = tenets.distill(
    "analyze authentication flow",
    format="html",
)
Path("report.html").write_text(result.context)

# With session management
result = tenets.distill(
    "implement validation",
    session_name="validation-feature",
)

# From GitHub issue
result = tenets.distill("https://github.com/org/repo/issues/123")

# Access timing information
result = tenets.distill("analyze performance")
if "timing" in result.metadata:
    print(f"Analysis took {result.metadata['timing']['formatted_duration']}")
    # Output: "Analysis took 2.34s"
```
rank_files¶
rank_files(prompt: str, paths: Optional[Union[str, Path, List[Path]]] = None, *, mode: str = 'balanced', include_patterns: Optional[List[str]] = None, exclude_patterns: Optional[List[str]] = None, include_tests: Optional[bool] = None, exclude_tests: bool = False, explain: bool = False) -> RankResult
Rank files by relevance without generating full context.
This method uses the same sophisticated ranking pipeline as distill() but returns only the ranked files without aggregating content. Perfect for understanding which files are relevant or for automation.
PARAMETER | DESCRIPTION |
---|---|
prompt | Your query or task description |
paths | Paths to analyze (default: current directory) |
mode | Analysis mode - 'fast', 'balanced', or 'thorough' |
include_patterns | File patterns to include |
exclude_patterns | File patterns to exclude |
include_tests | Whether to include test files |
exclude_tests | Whether to exclude test files |
explain | Whether to include ranking factor explanations |
RETURNS | DESCRIPTION |
---|---|
RankResult | RankResult containing the ranked files and metadata |
Example
```python
result = ten.rank_files("fix summarizing truncation bug")
for file in result.files:
    print(f"{file.path}: {file.relevance_score:.3f}")
```
add_tenet¶
add_tenet(content: str, priority: Union[str, Priority] = 'medium', category: Optional[Union[str, TenetCategory]] = None, session: Optional[str] = None, author: Optional[str] = None) -> Tenet
Add a new guiding principle (tenet).
Tenets are persistent instructions that get strategically injected into generated context to maintain consistency across AI interactions. They help combat context drift and ensure important principles are followed.
PARAMETER | DESCRIPTION |
---|---|
content | The guiding principle text |
priority | Priority level - 'low', 'medium', 'high', or 'critical' |
category | Optional category - 'architecture', 'security', 'style', 'performance', 'testing', 'documentation', etc. |
session | Optional session to bind this tenet to |
author | Optional author identifier |
RETURNS | DESCRIPTION |
---|---|
Tenet | The created Tenet object |
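With ten being a Tenets instance as in the example above, binding a security tenet to a session might look like this (values are illustrative):

```python
tenet = ten.add_tenet(
    "Validate all user input at API boundaries",
    priority="high",
    category="security",
    session="oauth-feature",
)
```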
instill_tenets¶
Instill pending tenets.
This marks tenets as active and ready to be injected into future contexts. By default, only pending tenets are instilled, but you can force re-instillation of all tenets.
PARAMETER | DESCRIPTION |
---|---|
session | Optional session to instill tenets for |
force | If True, re-instill even already instilled tenets |
RETURNS | DESCRIPTION |
---|---|
Dict[str, Any] | Dictionary with instillation results including count and tenets |
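Assuming keyword arguments matching the parameter table above, a call might look like:

```python
result = ten.instill_tenets(session="oauth-feature", force=False)
print(result)  # dictionary with the instillation count and tenets, per the table above
```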
add_file_to_session¶
add_folder_to_session¶
add_folder_to_session(folder_path: Union[str, Path], session: Optional[str] = None, include_patterns: Optional[list[str]] = None, exclude_patterns: Optional[list[str]] = None, respect_gitignore: bool = True, recursive: bool = True) -> int
Pin all files in a folder (optionally filtered) into a session.
PARAMETER | DESCRIPTION |
---|---|
folder_path | Directory to scan |
session | Session name |
include_patterns | Include filter |
exclude_patterns | Exclude filter |
respect_gitignore | Respect .gitignore |
recursive | Recurse into subdirectories |
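A short sketch of pinning a folder into a session; the folder and patterns are illustrative:

```python
count = ten.add_folder_to_session(
    "src/api",                   # hypothetical folder
    session="oauth-feature",
    include_patterns=["*.py"],   # only pin Python files
    respect_gitignore=True,
    recursive=True,
)
print(f"Pinned {count} files")
```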
list_tenets¶
list_tenets(pending_only: bool = False, instilled_only: bool = False, session: Optional[str] = None, category: Optional[Union[str, TenetCategory]] = None) -> list[dict[str, Any]]
List tenets with optional filtering.
PARAMETER | DESCRIPTION |
---|---|
pending_only | Only show pending (not yet instilled) tenets |
instilled_only | Only show instilled tenets |
session | Filter by session binding |
category | Filter by category |
RETURNS | DESCRIPTION |
---|---|
list[dict[str, Any]] | List of tenet dictionaries |
get_tenet¶
remove_tenet¶
get_pending_tenets¶
export_tenets¶
import_tenets¶
examine¶
examine(path: Optional[Union[str, Path]] = None, deep: bool = False, include_git: bool = True, output_metadata: bool = False) -> Any
Examine codebase structure and metrics.
Provides detailed analysis of your code including file counts, language distribution, complexity metrics, and potential issues.
PARAMETER | DESCRIPTION |
---|---|
path | Path to examine (default: current directory) |
deep | Perform deep analysis with AST parsing |
include_git | Include git statistics |
output_metadata | Include detailed metadata in result |
RETURNS | DESCRIPTION |
---|---|
Any | AnalysisResult object with comprehensive codebase analysis |
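For example, a deep examination of the current directory, with argument values following the table above:

```python
analysis = ten.examine(".", deep=True, include_git=True, output_metadata=True)
```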
track_changes¶
track_changes(path: Optional[Union[str, Path]] = None, since: str = '1 week', author: Optional[str] = None, file_pattern: Optional[str] = None) -> Dict[str, Any]
momentum¶
momentum(path: Optional[Union[str, Path]] = None, since: str = 'last-month', team: bool = False, author: Optional[str] = None) -> Dict[str, Any]
estimate_cost¶
Estimate the cost of using generated context with an LLM.
PARAMETER | DESCRIPTION |
---|---|
result | ContextResult from distill() |
model | Target model name |
RETURNS | DESCRIPTION |
---|---|
Dict[str, Any] | Dictionary with token counts and cost estimates |
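A minimal sketch pairing distill() with estimate_cost(); the model name is illustrative:

```python
result = ten.distill("implement OAuth2 authentication", model="gpt-4o")
costs = ten.estimate_cost(result, model="gpt-4o")
print(costs)  # token counts and cost estimates, per the table above
```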
set_system_instruction¶
set_system_instruction(instruction: str, enable: bool = True, position: str = 'top', format: str = 'markdown', save: bool = False) -> None
Set the system instruction for AI interactions.
PARAMETER | DESCRIPTION |
---|---|
instruction | The system instruction text |
enable | Whether to auto-inject |
position | Where to inject ('top', 'after_header', 'before_content') |
format | Format type ('markdown', 'xml', 'comment', 'plain') |
save | Whether to save to config file |
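For example, setting a persistent system instruction without writing it back to the config file (the instruction text is illustrative):

```python
ten.set_system_instruction(
    "You are assisting on a Python codebase; prefer small, focused changes.",
    enable=True,
    position="top",
    format="markdown",
    save=False,
)
```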
get_system_instruction¶
Functions¶
get_logger¶
Return a configured logger.
Environment variables
- TENETS_LOG_LEVEL: DEBUG|INFO|WARNING|ERROR|CRITICAL
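A hedged sketch of obtaining a logger; passing a module name is an assumption, since only the return value and the TENETS_LOG_LEVEL variable are documented here:

```python
import os

os.environ["TENETS_LOG_LEVEL"] = "DEBUG"  # documented environment variable

from tenets import get_logger

log = get_logger(__name__)  # name argument assumed; adjust to the actual signature
log.debug("scanner started")
```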
Main Subpackages¶
- cli - Command-line interface
- core - Core functionality and algorithms
- models - Data models and structures
- storage - Storage backends and persistence
- utils - Utility functions and helpers
- viz - Visualization and reporting tools
Direct Modules¶
- config - Configuration management