
tenets Package

Main package for Tenets - Context that feeds your prompts.

Tenets is a code intelligence platform that analyzes codebases locally to surface relevant files, track development velocity, and build optimal context for both human understanding and AI pair programming - all without making any LLM API calls.

This package provides:

  • Tenets - the main high-level API for context distillation and tenet management
  • TenetsConfig - configuration for all subsystems
  • CodeAnalyzer - code analysis orchestration
  • Distiller - context extraction from codebases
  • Instiller and TenetManager - tenet creation, instillation, and smart injection
  • ContextResult, Tenet, Priority, and TenetCategory - supporting data models

Example

Basic usage for context extraction:

from tenets import Tenets

ten = Tenets()
result = ten.distill("implement OAuth2 authentication")
print(result.context)

With tenet system:

ten.add_tenet("Always use type hints in Python", priority="high") ten.instill_tenets() result = ten.distill("add user model") # Context now includes tenets

Classes

TenetsConfig (dataclass)

Python
TenetsConfig(config_file: Optional[Path] = None, project_root: Optional[Path] = None, max_tokens: int = 100000, version: str = '0.1.0', debug: bool = False, quiet: bool = False, scanner: ScannerConfig = ScannerConfig(), ranking: RankingConfig = RankingConfig(), summarizer: SummarizerConfig = SummarizerConfig(), tenet: TenetConfig = TenetConfig(), cache: CacheConfig = CacheConfig(), output: OutputConfig = OutputConfig(), git: GitConfig = GitConfig(), llm: LLMConfig = LLMConfig(), nlp: NLPConfig = NLPConfig(), custom: Dict[str, Any] = dict())

Main configuration for the Tenets system with LLM and NLP support.

This is the root configuration object that contains all subsystem configs and global settings. It handles loading from files, environment variables, and provides sensible defaults.
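
For illustration, a minimal construction sketch (the configuration file path below is hypothetical; any field not passed falls back to its default):

from pathlib import Path
from tenets.config import TenetsConfig

# Override a few globals; subsystem configs keep their defaults.
config = TenetsConfig(max_tokens=50000, debug=True)

# Or load from an explicit configuration file (hypothetical path).
config = TenetsConfig(config_file=Path(".tenets.yml"))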

ATTRIBUTES:

  • config_file (Optional[Path]): Path to configuration file (if any)
  • project_root (Optional[Path]): Root directory of the project
  • max_tokens (int): Default maximum tokens for context
  • version (str): Tenets version (for compatibility checking)
  • debug (bool): Enable debug mode
  • quiet (bool): Suppress non-essential output
  • scanner (ScannerConfig): Scanner subsystem configuration
  • ranking (RankingConfig): Ranking subsystem configuration
  • summarizer (SummarizerConfig): Summarizer subsystem configuration
  • tenet (TenetConfig): Tenet subsystem configuration
  • cache (CacheConfig): Cache subsystem configuration
  • output (OutputConfig): Output formatting configuration
  • git (GitConfig): Git integration configuration
  • llm (LLMConfig): LLM integration configuration
  • nlp (NLPConfig): NLP system configuration
  • custom (Dict[str, Any]): Custom user configuration

Attributes

Dataclass fields:

  config_file: Optional[Path] = None
  project_root: Optional[Path] = None
  max_tokens: int = 100000
  version: str = '0.1.0'
  debug: bool = False
  quiet: bool = False
  scanner: ScannerConfig = field(default_factory=ScannerConfig)
  ranking: RankingConfig = field(default_factory=RankingConfig)
  summarizer: SummarizerConfig = field(default_factory=SummarizerConfig)
  tenet: TenetConfig = field(default_factory=TenetConfig)
  cache: CacheConfig = field(default_factory=CacheConfig)
  output: OutputConfig = field(default_factory=OutputConfig)
  git: GitConfig = field(default_factory=GitConfig)
  llm: LLMConfig = field(default_factory=LLMConfig)
  nlp: NLPConfig = field(default_factory=NLPConfig)
  custom: Dict[str, Any] = field(default_factory=dict)

Properties (writable unless marked read-only):

  • exclude_minified: bool - Get exclude_minified setting from scanner config.
  • minified_patterns: List[str] - Get minified patterns from scanner config.
  • build_directory_patterns: List[str] - Get build directory patterns from scanner config.
  • cache_dir: Path - Get the cache directory path.
  • scanner_workers: int (read-only) - Get number of scanner workers.
  • ranking_workers: int (read-only) - Get number of ranking workers.
  • ranking_algorithm: str (read-only) - Get the ranking algorithm.
  • summarizer_mode: str (read-only) - Get the default summarizer mode.
  • summarizer_ratio: float (read-only) - Get the default summarization target ratio.
  • respect_gitignore: bool - Whether to respect .gitignore files.
  • follow_symlinks: bool - Whether to follow symbolic links.
  • additional_ignore_patterns: List[str] - Get additional ignore patterns.
  • auto_instill_tenets: bool - Whether to automatically instill tenets.
  • max_tenets_per_context: int - Maximum tenets to inject per context.
  • tenet_injection_config: Dict[str, Any] (read-only) - Get tenet injection configuration.
  • cache_ttl_days: int - Cache time-to-live in days.
  • max_cache_size_mb: int - Maximum cache size in megabytes.
  • llm_enabled: bool - Whether LLM features are enabled.
  • llm_provider: str - Get the current LLM provider.
  • nlp_enabled: bool - Whether NLP features are enabled.
  • nlp_embeddings_enabled: bool - Whether NLP embeddings are enabled.

Functions

to_dict
Python
to_dict() -> Dict[str, Any]

Convert configuration to dictionary.

Returns:
  • Dict[str, Any]: Dictionary representation of configuration

save
Python
save(path: Optional[Path] = None)

Save configuration to file.

Parameters:
  • path (Optional[Path], default None): Path to save to (uses config_file if not specified)

Raises:
  • ValueError: If no path specified and config_file not set

get_llm_api_key
Python
get_llm_api_key(provider: Optional[str] = None) -> Optional[str]

Get LLM API key for a provider.

Parameters:
  • provider (Optional[str], default None): Provider name (uses default if not specified)

Returns:
  • Optional[str]: API key or None

get_llm_model
Python
get_llm_model(task: str = 'default', provider: Optional[str] = None) -> str

Get LLM model for a specific task.

Parameters:
  • task (str, default 'default'): Task type
  • provider (Optional[str], default None): Provider name (uses default if not specified)

Returns:
  • str: Model name
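
A brief usage sketch for these helpers; the provider name and output path are only examples and depend on your LLMConfig:

from pathlib import Path
from tenets.config import TenetsConfig

config = TenetsConfig()
settings = config.to_dict()                            # plain-dict view of the configuration
config.save(Path("tenets.custom.yml"))                 # hypothetical output path
key = config.get_llm_api_key(provider="openai")        # None if no key is configured
model = config.get_llm_model(task="default", provider="openai")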

CodeAnalyzer

Python
CodeAnalyzer(config: TenetsConfig)

Main code analysis orchestrator.

Coordinates language-specific analyzers and provides a unified interface for analyzing source code files. Handles caching, parallel processing, analyzer selection, and fallback strategies.

ATTRIBUTEDESCRIPTION
config

TenetsConfig instance for configuration

logger

Logger instance for logging

cache

AnalysisCache for caching analysis results

analyzers

Dictionary mapping file extensions to analyzer instances

stats

Analysis statistics and metrics

Initialize the code analyzer.

PARAMETERDESCRIPTION
config

Tenets configuration object

TYPE:TenetsConfig

Attributes

configinstance-attribute
Python
config = config
loggerinstance-attribute
Python
logger = get_logger(__name__)
cacheinstance-attribute
Python
cache = None
analyzersinstance-attribute
Python
analyzers = _initialize_analyzers()
statsinstance-attribute
Python
stats = {'files_analyzed': 0, 'cache_hits': 0, 'cache_misses': 0, 'errors': 0, 'total_time': 0, 'languages': {}}

Functions

analyze_file
Python
analyze_file(file_path: Path, deep: bool = False, extract_keywords: bool = True, use_cache: bool = True, progress_callback: Optional[Callable] = None) -> FileAnalysis

Analyze a single file.

Performs language-specific analysis on a file, extracting imports, structure, complexity metrics, and other relevant information.

PARAMETERDESCRIPTION
file_path

Path to the file to analyze

TYPE:Path

deep

Whether to perform deep analysis (AST parsing, etc.)

TYPE:boolDEFAULT:False

extract_keywords

Whether to extract keywords from content

TYPE:boolDEFAULT:True

use_cache

Whether to use cached results if available

TYPE:boolDEFAULT:True

progress_callback

Optional callback for progress updates

TYPE:Optional[Callable]DEFAULT:None

RETURNSDESCRIPTION
FileAnalysis

FileAnalysis object with complete analysis results

RAISESDESCRIPTION
FileNotFoundError

If file doesn't exist

PermissionError

If file cannot be read
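
As an illustration, a minimal sketch of analyzing a single file; the file path is hypothetical, and imports use the package-level names documented on this page:

from pathlib import Path
from tenets import CodeAnalyzer, TenetsConfig

analyzer = CodeAnalyzer(TenetsConfig())
analysis = analyzer.analyze_file(Path("src/app.py"), deep=True)  # hypothetical file
analyzer.shutdown()  # release resources when finished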

analyze_files
Python
analyze_files(file_paths: list[Path], deep: bool = False, parallel: bool = True, progress_callback: Optional[Callable] = None) -> list[FileAnalysis]

Analyze multiple files.

PARAMETERDESCRIPTION
file_paths

List of file paths to analyze

TYPE:list[Path]

deep

Whether to perform deep analysis

TYPE:boolDEFAULT:False

parallel

Whether to analyze files in parallel

TYPE:boolDEFAULT:True

progress_callback

Optional callback for progress updates

TYPE:Optional[Callable]DEFAULT:None

RETURNSDESCRIPTION
list[FileAnalysis]

List of FileAnalysis objects

analyze_project
Python
analyze_project(project_path: Path, patterns: Optional[list[str]] = None, exclude_patterns: Optional[list[str]] = None, deep: bool = True, parallel: bool = True, progress_callback: Optional[Callable] = None) -> ProjectAnalysis

Analyze an entire project.

PARAMETERDESCRIPTION
project_path

Path to the project root

TYPE:Path

patterns

File patterns to include (e.g., ['.py', '.js'])

TYPE:Optional[list[str]]DEFAULT:None

exclude_patterns

File patterns to exclude

TYPE:Optional[list[str]]DEFAULT:None

deep

Whether to perform deep analysis

TYPE:boolDEFAULT:True

parallel

Whether to analyze files in parallel

TYPE:boolDEFAULT:True

progress_callback

Optional callback for progress updates

TYPE:Optional[Callable]DEFAULT:None

RETURNSDESCRIPTION
ProjectAnalysis

ProjectAnalysis object with complete project analysis

generate_report
Python
generate_report(analysis: Union[FileAnalysis, ProjectAnalysis, list[FileAnalysis]], format: str = 'json', output_path: Optional[Path] = None) -> AnalysisReport

Generate an analysis report.

PARAMETERDESCRIPTION
analysis

Analysis results to report on

TYPE:Union[FileAnalysis, ProjectAnalysis, list[FileAnalysis]]

format

Report format ('json', 'html', 'markdown', 'csv')

TYPE:strDEFAULT:'json'

output_path

Optional path to save the report

TYPE:Optional[Path]DEFAULT:None

RETURNSDESCRIPTION
AnalysisReport

AnalysisReport object
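
A hedged sketch of the project-level flow; the glob-style patterns and output path are assumptions:

from pathlib import Path
from tenets import CodeAnalyzer, TenetsConfig

analyzer = CodeAnalyzer(TenetsConfig())
project = analyzer.analyze_project(Path("."), patterns=["*.py"], deep=True)   # pattern syntax assumed
report = analyzer.generate_report(project, format="html", output_path=Path("analysis.html"))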

shutdown
Python
shutdown()

Shutdown the analyzer and clean up resources.

Distiller

Python
Distiller(config: TenetsConfig)

Orchestrates context extraction from codebases.

The Distiller is the main engine that powers the 'distill' command. It coordinates all the components to extract the most relevant context based on a user's prompt.

Initialize the distiller with configuration.

PARAMETERDESCRIPTION
config

Tenets configuration

TYPE:TenetsConfig

Attributes

configinstance-attribute
Python
config = config
loggerinstance-attribute
Python
logger = get_logger(__name__)
scannerinstance-attribute
Python
scanner = FileScanner(config)
analyzerinstance-attribute
Python
analyzer = CodeAnalyzer(config)
rankerinstance-attribute
Python
ranker = RelevanceRanker(config)
parserinstance-attribute
Python
parser = PromptParser(config)
gitinstance-attribute
Python
git = GitAnalyzer(config)
aggregatorinstance-attribute
Python
aggregator = ContextAggregator(config)
optimizerinstance-attribute
Python
optimizer = TokenOptimizer(config)
formatterinstance-attribute
Python
formatter = ContextFormatter(config)

Functions

distill
Python
distill(prompt: str, paths: Optional[Union[str, Path, List[Path]]] = None, *, format: str = 'markdown', model: Optional[str] = None, max_tokens: Optional[int] = None, mode: str = 'balanced', include_git: bool = True, session_name: Optional[str] = None, include_patterns: Optional[List[str]] = None, exclude_patterns: Optional[List[str]] = None, full: bool = False, condense: bool = False, remove_comments: bool = False, pinned_files: Optional[List[Path]] = None, include_tests: Optional[bool] = None, docstring_weight: Optional[float] = None, summarize_imports: bool = True) -> ContextResult

Distill relevant context from codebase based on prompt.

This is the main method that extracts, ranks, and aggregates the most relevant files and information for a given prompt.

PARAMETERDESCRIPTION
prompt

The user's query or task description

TYPE:str

paths

Paths to analyze (default: current directory)

TYPE:Optional[Union[str, Path, List[Path]]]DEFAULT:None

format

Output format (markdown, xml, json)

TYPE:strDEFAULT:'markdown'

model

Target LLM model for token counting

TYPE:Optional[str]DEFAULT:None

max_tokens

Maximum tokens for context

TYPE:Optional[int]DEFAULT:None

mode

Analysis mode (fast, balanced, thorough)

TYPE:strDEFAULT:'balanced'

include_git

Whether to include git context

TYPE:boolDEFAULT:True

session_name

Session name for stateful context

TYPE:Optional[str]DEFAULT:None

include_patterns

File patterns to include

TYPE:Optional[List[str]]DEFAULT:None

exclude_patterns

File patterns to exclude

TYPE:Optional[List[str]]DEFAULT:None

RETURNSDESCRIPTION
ContextResult

ContextResult with the distilled context

Example

distiller = Distiller(config)
result = distiller.distill(
    "implement OAuth2 authentication",
    paths="./src",
    mode="thorough",
    max_tokens=50000,
)
print(result.context)

Instiller

Python
Instiller(config: TenetsConfig)

Main orchestrator for tenet instillation with smart injection.

The Instiller manages the entire process of injecting tenets into context, including:

  • Tracking injection history per session
  • Analyzing context complexity
  • Determining optimal injection frequency
  • Selecting appropriate tenets
  • Applying injection strategies
  • Recording metrics and effectiveness

It supports multiple injection modes:

  • Always: Inject into every context
  • Periodic: Inject every Nth distillation
  • Adaptive: Smart injection based on complexity and session
  • Manual: Only inject when explicitly requested

Initialize the Instiller.

PARAMETERDESCRIPTION
config

Configuration object

TYPE:TenetsConfig

Attributes

configinstance-attribute
Python
config = config
loggerinstance-attribute
Python
logger = get_logger(__name__)
managerinstance-attribute
Python
manager = TenetManager(config)
injectorinstance-attribute
Python
injector = TenetInjector(injection_config)
complexity_analyzerinstance-attribute
Python
complexity_analyzer = ComplexityAnalyzer(config)
metrics_trackerinstance-attribute
Python
metrics_tracker = MetricsTracker()
session_historiesinstance-attribute
Python
session_histories: Dict[str, InjectionHistory] = {}
system_instruction_injectedinstance-attribute
Python
system_instruction_injected: Dict[str, bool] = {}

Functions

inject_system_instruction
Python
inject_system_instruction(content: str, format: str = 'markdown', session: Optional[str] = None) -> Tuple[str, Dict[str, Any]]

Inject system instruction (system prompt) according to config.

Behavior:

  • If system instruction is disabled or empty, return unchanged.
  • If session provided and once-per-session is enabled, inject only on first distill.
  • If no session, inject on every distill.
  • Placement controlled by system_instruction_position.
  • Formatting controlled by system_instruction_format.

Returns modified content and metadata about injection.

instill
Python
instill(context: Union[str, ContextResult], session: Optional[str] = None, force: bool = False, strategy: Optional[str] = None, max_tenets: Optional[int] = None, check_frequency: bool = True, inject_system_instruction: Optional[bool] = None) -> Union[str, ContextResult]

Instill tenets into context with smart injection.

PARAMETERDESCRIPTION
context

Context to inject tenets into

TYPE:Union[str, ContextResult]

session

Session identifier for tracking

TYPE:Optional[str]DEFAULT:None

force

Force injection regardless of frequency settings

TYPE:boolDEFAULT:False

strategy

Override injection strategy

TYPE:Optional[str]DEFAULT:None

max_tenets

Override maximum tenets

TYPE:Optional[int]DEFAULT:None

check_frequency

Whether to check injection frequency

TYPE:boolDEFAULT:True

RETURNSDESCRIPTION
Union[str, ContextResult]

Modified context with tenets injected (if applicable)
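
A minimal sketch of instilling into an already-generated context; the context string and session name are illustrative:

from tenets import Instiller, TenetsConfig

instiller = Instiller(TenetsConfig())
context = "# Context\n\n...distilled content..."       # placeholder context
updated = instiller.instill(context, session="oauth-feature", force=True)  # force bypasses frequency checks
stats = instiller.get_session_stats("oauth-feature")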

get_session_stats
Python
get_session_stats(session: str) -> Dict[str, Any]

Get statistics for a specific session.

PARAMETERDESCRIPTION
session

Session identifier

TYPE:str

RETURNSDESCRIPTION
Dict[str, Any]

Dictionary of session statistics

get_all_session_stats
Python
get_all_session_stats() -> Dict[str, Dict[str, Any]]

Get statistics for all sessions.

RETURNSDESCRIPTION
Dict[str, Dict[str, Any]]

Dictionary mapping session IDs to stats

analyze_effectiveness
Python
analyze_effectiveness(session: Optional[str] = None) -> Dict[str, Any]

Analyze the effectiveness of tenet instillation.

PARAMETERDESCRIPTION
session

Optional session to analyze

TYPE:Optional[str]DEFAULT:None

RETURNSDESCRIPTION
Dict[str, Any]

Dictionary with analysis results and recommendations

export_instillation_history
Python
export_instillation_history(output_path: Path, format: str = 'json', session: Optional[str] = None) -> None

Export instillation history to file.

PARAMETERDESCRIPTION
output_path

Path to output file

TYPE:Path

format

Export format (json or csv)

TYPE:strDEFAULT:'json'

session

Optional session filter

TYPE:Optional[str]DEFAULT:None

RAISESDESCRIPTION
ValueError

If format is not supported

reset_session_history
Python
reset_session_history(session: str) -> bool

Reset injection history for a session.

PARAMETERDESCRIPTION
session

Session identifier

TYPE:str

RETURNSDESCRIPTION
bool

True if reset, False if session not found

clear_cache
Python
clear_cache() -> None

Clear the results cache.

TenetManager

Python
TenetManager(config: TenetsConfig)

Manages tenets throughout their lifecycle.

Initialize the tenet manager.

PARAMETERDESCRIPTION
config

Tenets configuration

TYPE:TenetsConfig

Attributes

configinstance-attribute
Python
config = config
loggerinstance-attribute
Python
logger = get_logger(__name__)
storage_pathinstance-attribute
Python
storage_path = Path(cache_dir) / 'tenets'
db_pathinstance-attribute
Python
db_path = storage_path / 'tenets.db'

Functions

add_tenet
Python
add_tenet(content: Union[str, Tenet], priority: Union[str, Priority] = 'medium', category: Optional[Union[str, TenetCategory]] = None, session: Optional[str] = None, author: Optional[str] = None) -> Tenet

Add a new tenet.

PARAMETERDESCRIPTION
content

The guiding principle text or a Tenet object

TYPE:Union[str, Tenet]

priority

Priority level (low, medium, high, critical)

TYPE:Union[str, Priority]DEFAULT:'medium'

category

Category for organization

TYPE:Optional[Union[str, TenetCategory]]DEFAULT:None

session

Bind to specific session

TYPE:Optional[str]DEFAULT:None

author

Who created the tenet

TYPE:Optional[str]DEFAULT:None

RETURNSDESCRIPTION
Tenet

The created Tenet
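
A short manager-level sketch; the tenet text is illustrative and the category value comes from TenetCategory below:

from tenets import TenetManager, TenetsConfig

manager = TenetManager(TenetsConfig())
tenet = manager.add_tenet(
    "Prefer explicit error handling over silent failures",
    priority="high",
    category="error_handling",
)
manager.instill_tenets()               # activate pending tenets
remaining = manager.get_pending_tenets()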

get_tenet
Python
get_tenet(tenet_id: str) -> Optional[Tenet]

Get a specific tenet by ID.

PARAMETERDESCRIPTION
tenet_id

Tenet ID (can be partial)

TYPE:str

RETURNSDESCRIPTION
Optional[Tenet]

The Tenet or None if not found

list_tenets
Python
list_tenets(pending_only: bool = False, instilled_only: bool = False, session: Optional[str] = None, category: Optional[Union[str, TenetCategory]] = None) -> List[Dict[str, Any]]

List tenets with filtering.

PARAMETERDESCRIPTION
pending_only

Only show pending tenets

TYPE:boolDEFAULT:False

instilled_only

Only show instilled tenets

TYPE:boolDEFAULT:False

session

Filter by session binding

TYPE:Optional[str]DEFAULT:None

category

Filter by category

TYPE:Optional[Union[str, TenetCategory]]DEFAULT:None

RETURNSDESCRIPTION
List[Dict[str, Any]]

List of tenet dictionaries

get_pending_tenets
Python
get_pending_tenets(session: Optional[str] = None) -> List[Tenet]

Get all pending tenets.

PARAMETERDESCRIPTION
session

Filter by session

TYPE:Optional[str]DEFAULT:None

RETURNSDESCRIPTION
List[Tenet]

List of pending Tenet objects

remove_tenet
Python
remove_tenet(tenet_id: str) -> bool

Remove a tenet.

PARAMETERDESCRIPTION
tenet_id

Tenet ID (can be partial)

TYPE:str

RETURNSDESCRIPTION
bool

True if removed, False if not found

instill_tenets
Python
instill_tenets(session: Optional[str] = None, force: bool = False) -> Dict[str, Any]

Instill pending tenets.

PARAMETERDESCRIPTION
session

Target session

TYPE:Optional[str]DEFAULT:None

force

Re-instill even if already instilled

TYPE:boolDEFAULT:False

RETURNSDESCRIPTION
Dict[str, Any]

Dictionary with results

get_tenets_for_injection
Python
get_tenets_for_injection(context_length: int, session: Optional[str] = None, max_tenets: int = 5) -> List[Tenet]

Get tenets ready for injection into context.

PARAMETERDESCRIPTION
context_length

Current context length in tokens

TYPE:int

session

Current session

TYPE:Optional[str]DEFAULT:None

max_tenets

Maximum number of tenets to return

TYPE:intDEFAULT:5

RETURNSDESCRIPTION
List[Tenet]

List of tenets to inject

export_tenets
Python
export_tenets(format: str = 'yaml', session: Optional[str] = None, include_archived: bool = False) -> str

Export tenets to YAML or JSON.

PARAMETERDESCRIPTION
format

Export format (yaml or json)

TYPE:strDEFAULT:'yaml'

session

Filter by session

TYPE:Optional[str]DEFAULT:None

include_archived

Include archived tenets

TYPE:boolDEFAULT:False

RETURNSDESCRIPTION
str

Serialized tenets

import_tenets
Python
import_tenets(file_path: Union[str, Path], session: Optional[str] = None, override_priority: Optional[Priority] = None) -> int

Import tenets from file.

PARAMETERDESCRIPTION
file_path

Path to import file

TYPE:Union[str, Path]

session

Bind imported tenets to session

TYPE:Optional[str]DEFAULT:None

override_priority

Override priority for all imported tenets

TYPE:Optional[Priority]DEFAULT:None

RETURNSDESCRIPTION
int

Number of tenets imported

create_collection
Python
create_collection(name: str, description: str = '', tenet_ids: Optional[List[str]] = None) -> TenetCollection

Create a collection of related tenets.

PARAMETERDESCRIPTION
name

Collection name

TYPE:str

description

Collection description

TYPE:strDEFAULT:''

tenet_ids

IDs of tenets to include

TYPE:Optional[List[str]]DEFAULT:None

RETURNSDESCRIPTION
TenetCollection

The created TenetCollection

analyze_tenet_effectiveness
Python
analyze_tenet_effectiveness() -> Dict[str, Any]

Analyze effectiveness of tenets.

RETURNSDESCRIPTION
Dict[str, Any]

Analysis of tenet usage and effectiveness

ContextResult (dataclass)

Python
ContextResult(content: Optional[str] = None, context: Optional[str] = None, format: str = 'markdown', token_count: int = 0, files: list[str] = list(), files_included: list[str] = list(), files_summarized: list[str] = list(), metadata: dict[str, Any] = dict(), session_id: Optional[str] = None, timestamp: datetime = datetime.now(), statistics: dict[str, Any] = dict(), prompt_context: Optional[PromptContext] = None, cost_estimate: Optional[dict[str, float]] = None, warnings: list[str] = list(), errors: list[str] = list())

Result of context generation.

Contains the generated context ready for consumption by LLMs or other tools. This is the final output of the distillation process.

ATTRIBUTEDESCRIPTION
content

The generated context content (preferred alias)

TYPE:Optional[str]

context

Backward-compatible alias for content

TYPE:Optional[str]

format

Output format (markdown, xml, json)

TYPE:str

token_count

Number of tokens in context

TYPE:int

files

List of included file paths (preferred alias)

TYPE:list[str]

files_included

Backward-compatible alias for files

TYPE:list[str]

files_summarized

List of summarized file paths

TYPE:list[str]

metadata

Additional metadata about generation, including:
  • timing: Dict with duration info (if timing enabled)
      - duration: float seconds
      - formatted_duration: Human-readable string (e.g. "2.34s")
      - start_datetime: ISO format start time
      - end_datetime: ISO format end time

TYPE:dict[str, Any]

session_id

Session this belongs to

TYPE:Optional[str]

timestamp

When context was generated

TYPE:datetime

statistics

Generation statistics

TYPE:dict[str, Any]

prompt_context

Original prompt context

TYPE:Optional[PromptContext]

cost_estimate

Estimated cost for LLM usage

TYPE:Optional[dict[str, float]]

warnings

Any warnings during generation

TYPE:list[str]

errors

Any errors during generation

TYPE:list[str]

Attributes

contentclass-attributeinstance-attribute
Python
content: Optional[str] = None
contextclass-attributeinstance-attribute
Python
context: Optional[str] = None
formatclass-attributeinstance-attribute
Python
format: str = 'markdown'
token_countclass-attributeinstance-attribute
Python
token_count: int = 0
filesclass-attributeinstance-attribute
Python
files: list[str] = field(default_factory=list)
files_includedclass-attributeinstance-attribute
Python
files_included: list[str] = field(default_factory=list)
files_summarizedclass-attributeinstance-attribute
Python
files_summarized: list[str] = field(default_factory=list)
metadataclass-attributeinstance-attribute
Python
metadata: dict[str, Any] = field(default_factory=dict)
session_idclass-attributeinstance-attribute
Python
session_id: Optional[str] = None
timestampclass-attributeinstance-attribute
Python
timestamp: datetime = field(default_factory=now)
statisticsclass-attributeinstance-attribute
Python
statistics: dict[str, Any] = field(default_factory=dict)
prompt_contextclass-attributeinstance-attribute
Python
prompt_context: Optional[PromptContext] = None
cost_estimateclass-attributeinstance-attribute
Python
cost_estimate: Optional[dict[str, float]] = None
warningsclass-attributeinstance-attribute
Python
warnings: list[str] = field(default_factory=list)
errorsclass-attributeinstance-attribute
Python
errors: list[str] = field(default_factory=list)

Functions

add_warning
Python
add_warning(warning: str) -> None

Add a warning message.

add_error
Python
add_error(error: str) -> None

Add an error message.

update_statistics
Python
update_statistics(key: str, value: Any) -> None

Update a statistic value.

to_dict
Python
to_dict() -> dict[str, Any]

Convert to dictionary representation.

from_dict (classmethod)
Python
from_dict(data: dict[str, Any]) -> ContextResult

Create from dictionary.

save_to_file
Python
save_to_file(path: Union[str, Path]) -> None

Save context result to file.

get_summary
Python
get_summary() -> str

Get a summary of the context result.
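
For example, a brief sketch of consuming a ContextResult; the output path is hypothetical:

from pathlib import Path
from tenets import Tenets

result = Tenets().distill("implement OAuth2 authentication")
print(result.get_summary())
result.save_to_file(Path("context.md"))   # hypothetical output path
payload = result.to_dict()                # e.g. for logging or persistence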

Priority

Bases: Enum

Tenet priority levels.

Attributes

LOWclass-attributeinstance-attribute
Python
LOW = 'low'
MEDIUMclass-attributeinstance-attribute
Python
MEDIUM = 'medium'
HIGHclass-attributeinstance-attribute
Python
HIGH = 'high'
CRITICALclass-attributeinstance-attribute
Python
CRITICAL = 'critical'
weightproperty
Python
weight: float

Get numerical weight for priority.

Tenet (dataclass)

Python
Tenet(id: str = (lambda: str(uuid.uuid4()))(), content: str = '', priority: Priority = Priority.MEDIUM, category: Optional[TenetCategory] = None, status: TenetStatus = TenetStatus.PENDING, created_at: datetime = datetime.now(), instilled_at: Optional[datetime] = None, updated_at: datetime = datetime.now(), session_bindings: list[str] = list(), author: Optional[str] = None, metrics: TenetMetrics = TenetMetrics(), injection_strategy: InjectionStrategy = InjectionStrategy(), metadata: dict[str, Any] = dict())

A guiding principle for code development.

Tenets are persistent instructions that guide AI interactions to maintain consistency across multiple prompts and sessions.

ATTRIBUTEDESCRIPTION
id

Unique identifier

TYPE:str

content

The principle text

TYPE:str

priority

Importance level

TYPE:Priority

category

Classification category

TYPE:Optional[TenetCategory]

status

Current status (pending, instilled, archived)

TYPE:TenetStatus

created_at

When the tenet was created

TYPE:datetime

instilled_at

When first instilled into context

TYPE:Optional[datetime]

updated_at

Last modification time

TYPE:datetime

session_bindings

Sessions this tenet applies to

TYPE:list[str]

author

Who created the tenet

TYPE:Optional[str]

metrics

Usage and effectiveness metrics

TYPE:TenetMetrics

injection_strategy

How this tenet should be injected

TYPE:InjectionStrategy

metadata

Additional custom data

TYPE:dict[str, Any]

Example

tenet = Tenet(
    content="Always use type hints in Python code",
    priority=Priority.HIGH,
    category=TenetCategory.STYLE,
)

Attributes

idclass-attributeinstance-attribute
Python
id: str = field(default_factory=lambda: str(uuid4()))
contentclass-attributeinstance-attribute
Python
content: str = ''
priorityclass-attributeinstance-attribute
Python
priority: Priority = MEDIUM
categoryclass-attributeinstance-attribute
Python
category: Optional[TenetCategory] = None
statusclass-attributeinstance-attribute
Python
status: TenetStatus = PENDING
created_atclass-attributeinstance-attribute
Python
created_at: datetime = field(default_factory=now)
instilled_atclass-attributeinstance-attribute
Python
instilled_at: Optional[datetime] = None
updated_atclass-attributeinstance-attribute
Python
updated_at: datetime = field(default_factory=now)
session_bindingsclass-attributeinstance-attribute
Python
session_bindings: list[str] = field(default_factory=list)
authorclass-attributeinstance-attribute
Python
author: Optional[str] = None
metricsclass-attributeinstance-attribute
Python
metrics: TenetMetrics = field(default_factory=TenetMetrics)
injection_strategyclass-attributeinstance-attribute
Python
injection_strategy: InjectionStrategy = field(default_factory=InjectionStrategy)
metadataclass-attributeinstance-attribute
Python
metadata: dict[str, Any] = field(default_factory=dict)

Functions

instill
Python
instill() -> None

Mark tenet as instilled.

archive
Python
archive() -> None

Archive this tenet.

bind_to_session
Python
bind_to_session(session_id: str) -> None

Bind tenet to a specific session.

unbind_from_session
Python
unbind_from_session(session_id: str) -> None

Remove session binding.

applies_to_session
Python
applies_to_session(session_id: Optional[str]) -> bool

Check if tenet applies to a session.

should_inject
Python
should_inject(context_length: int, already_injected: int) -> bool

Determine if this tenet should be injected.

format_for_injection
Python
format_for_injection() -> str

Format tenet content for injection into context.

to_dict
Python
to_dict() -> dict[str, Any]

Convert to dictionary representation.

from_dict (classmethod)
Python
from_dict(data: dict[str, Any]) -> Tenet

Create Tenet from dictionary.
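
A small round-trip sketch using the serialization helpers above (assuming to_dict and from_dict mirror each other):

from tenets import Priority, Tenet

tenet = Tenet(content="Always use type hints in Python code", priority=Priority.HIGH)
data = tenet.to_dict()             # plain dict, e.g. for storage
restored = Tenet.from_dict(data)   # reconstruct an equivalent Tenet
print(restored.content, restored.priority)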

TenetCategory

Bases: Enum

Common tenet categories.

Attributes

ARCHITECTUREclass-attributeinstance-attribute
Python
ARCHITECTURE = 'architecture'
SECURITYclass-attributeinstance-attribute
Python
SECURITY = 'security'
STYLEclass-attributeinstance-attribute
Python
STYLE = 'style'
PERFORMANCEclass-attributeinstance-attribute
Python
PERFORMANCE = 'performance'
TESTINGclass-attributeinstance-attribute
Python
TESTING = 'testing'
DOCUMENTATIONclass-attributeinstance-attribute
Python
DOCUMENTATION = 'documentation'
API_DESIGNclass-attributeinstance-attribute
Python
API_DESIGN = 'api_design'
ERROR_HANDLINGclass-attributeinstance-attribute
Python
ERROR_HANDLING = 'error_handling'
QUALITYclass-attributeinstance-attribute
Python
QUALITY = 'quality'
CUSTOMclass-attributeinstance-attribute
Python
CUSTOM = 'custom'

Tenets

Python
Tenets(config: Optional[Union[TenetsConfig, dict[str, Any], Path]] = None)

Main API interface for the Tenets system.

This is the primary class that users interact with to access all Tenets functionality. It coordinates between the various subsystems (distiller, instiller, analyzer, etc.) to provide a unified interface.

The Tenets class can be used both programmatically through Python and via the CLI. It maintains configuration, manages sessions, and orchestrates the various analysis and context generation operations.

ATTRIBUTEDESCRIPTION
config

TenetsConfig instance containing all configuration

distiller

Distiller instance for context extraction

instiller

Instiller instance for tenet management

tenet_manager

Direct access to TenetManager for advanced operations

logger

Logger instance for this class

_session

Current session name if any

_cache

Internal cache for results

Example

from tenets import Tenets
from pathlib import Path

# Initialize with default config
ten = Tenets()

# Or with custom config
from tenets.config import TenetsConfig
config = TenetsConfig(max_tokens=150000, ranking_algorithm="thorough")
ten = Tenets(config=config)

# Extract context (uses default session automatically)
result = ten.distill("implement user authentication")
print(f"Generated {result.token_count} tokens of context")

# Generate HTML report
result = ten.distill("review API endpoints", format="html")
Path("api-review.html").write_text(result.context)

# Add and apply tenets
ten.add_tenet("Use dependency injection", priority="high")
ten.add_tenet("Follow RESTful conventions", category="architecture")
ten.instill_tenets()

# Pin critical files for priority inclusion
ten.pin_file("src/core/auth.py")
ten.pin_folder("src/api/endpoints")

# Work with named sessions
result = ten.distill(
    "implement OAuth2",
    session_name="oauth-feature",
    mode="thorough",
)

Initialize Tenets with configuration.

PARAMETERDESCRIPTION
config

Can be:
  • TenetsConfig instance
  • Dictionary of configuration values
  • Path to configuration file
  • None (uses default configuration)

TYPE:Optional[Union[TenetsConfig, dict[str, Any], Path]]DEFAULT:None

RAISESDESCRIPTION
ValueError

If config format is invalid

FileNotFoundError

If config file path doesn't exist

Attributes

configinstance-attribute
Python
config = TenetsConfig()
loggerinstance-attribute
Python
logger = get_logger(__name__)
distillerproperty
Python
distiller

Lazy load distiller when needed.

instillerproperty
Python
instiller

Lazy load instiller when needed.

tenet_managerproperty
Python
tenet_manager

Lazy load tenet manager when needed.

Functions

distill
Python
distill(prompt: str, files: Optional[Union[str, Path, list[Path]]] = None, *, format: str = 'markdown', model: Optional[str] = None, max_tokens: Optional[int] = None, mode: str = 'balanced', include_git: bool = True, session_name: Optional[str] = None, include_patterns: Optional[list[str]] = None, exclude_patterns: Optional[list[str]] = None, apply_tenets: Optional[bool] = None, full: bool = False, condense: bool = False, remove_comments: bool = False, include_tests: Optional[bool] = None, docstring_weight: Optional[float] = None, summarize_imports: bool = True) -> ContextResult

Distill relevant context from codebase based on prompt.

This is the main method for extracting context. It analyzes your codebase, finds relevant files, ranks them by importance, and aggregates them into an optimized context that fits within token limits.

PARAMETERDESCRIPTION
prompt

Your query or task description. Can be plain text or a URL to a GitHub issue, JIRA ticket, etc.

TYPE:str

files

Paths to analyze. Can be a single path, list of paths, or None to use current directory

TYPE:Optional[Union[str, Path, list[Path]]]DEFAULT:None

format

Output format - 'markdown', 'xml' (Claude), 'json', or 'html' (interactive report)

TYPE:strDEFAULT:'markdown'

model

Target LLM model for token counting (e.g., 'gpt-4o', 'claude-3-opus')

TYPE:Optional[str]DEFAULT:None

max_tokens

Maximum tokens for context (overrides model default)

TYPE:Optional[int]DEFAULT:None

mode

Analysis mode - 'fast', 'balanced', or 'thorough'

TYPE:strDEFAULT:'balanced'

include_git

Whether to include git context (commits, contributors, etc.)

TYPE:boolDEFAULT:True

session_name

Session name for stateful context building

TYPE:Optional[str]DEFAULT:None

include_patterns

File patterns to include (e.g., ['.py', '.js'])

TYPE:Optional[list[str]]DEFAULT:None

exclude_patterns

File patterns to exclude (e.g., ['test_', '.backup'])

TYPE:Optional[list[str]]DEFAULT:None

apply_tenets

Whether to apply tenets (None = use config default)

TYPE:Optional[bool]DEFAULT:None

RETURNSDESCRIPTION
ContextResult

ContextResult containing the generated context, metadata, and statistics.

The metadata field includes timing information when available:

    metadata['timing'] = {
        'duration': 2.34,                         # seconds
        'formatted_duration': '2.34s',            # Human-readable duration string
        'start_datetime': '2024-01-15T10:30:45',
        'end_datetime': '2024-01-15T10:30:47',
    }

RAISESDESCRIPTION
ValueError

If prompt is empty or invalid

FileNotFoundError

If specified files don't exist

Example
# Basic usage (uses default session automatically)
result = tenets.distill("implement OAuth2 authentication")
print(result.context[:100])  # First 100 chars of context

# With specific files and options
result = tenets.distill(
    "add caching layer",
    files="./src",
    mode="thorough",
    max_tokens=50000,
    include_patterns=["*.py"],
    exclude_patterns=["test_*.py"],
)

# Generate HTML report
result = tenets.distill(
    "analyze authentication flow",
    format="html",
)
Path("report.html").write_text(result.context)

# With session management
result = tenets.distill(
    "implement validation",
    session_name="validation-feature",
)

# From GitHub issue
result = tenets.distill("https://github.com/org/repo/issues/123")

# Access timing information
result = tenets.distill("analyze performance")
if 'timing' in result.metadata:
    print(f"Analysis took {result.metadata['timing']['formatted_duration']}")
    # Output: "Analysis took 2.34s"

rank_files
Python
rank_files(prompt: str, paths: Optional[Union[str, Path, List[Path]]] = None, *, mode: str = 'balanced', include_patterns: Optional[List[str]] = None, exclude_patterns: Optional[List[str]] = None, include_tests: Optional[bool] = None, exclude_tests: bool = False, explain: bool = False) -> RankResult

Rank files by relevance without generating full context.

This method uses the same sophisticated ranking pipeline as distill() but returns only the ranked files without aggregating content. Perfect for understanding which files are relevant or for automation.

PARAMETERDESCRIPTION
prompt

Your query or task description

TYPE:str

paths

Paths to analyze (default: current directory)

TYPE:Optional[Union[str, Path, List[Path]]]DEFAULT:None

mode

Analysis mode - 'fast', 'balanced', or 'thorough'

TYPE:strDEFAULT:'balanced'

include_patterns

File patterns to include

TYPE:Optional[List[str]]DEFAULT:None

exclude_patterns

File patterns to exclude

TYPE:Optional[List[str]]DEFAULT:None

include_tests

Whether to include test files

TYPE:Optional[bool]DEFAULT:None

exclude_tests

Whether to exclude test files

TYPE:boolDEFAULT:False

explain

Whether to include ranking factor explanations

TYPE:boolDEFAULT:False

RETURNSDESCRIPTION
RankResult

RankResult containing the ranked files and metadata

Example

result = ten.rank_files("fix summarizing truncation bug")
for file in result.files:
    print(f"{file.path}: {file.relevance_score:.3f}")

add_tenet
Python
add_tenet(content: str, priority: Union[str, Priority] = 'medium', category: Optional[Union[str, TenetCategory]] = None, session: Optional[str] = None, author: Optional[str] = None) -> Tenet

Add a new guiding principle (tenet).

Tenets are persistent instructions that get strategically injected into generated context to maintain consistency across AI interactions. They help combat context drift and ensure important principles are followed.

PARAMETERDESCRIPTION
content

The guiding principle text

TYPE:str

priority

Priority level - 'low', 'medium', 'high', or 'critical'

TYPE:Union[str, Priority]DEFAULT:'medium'

category

Optional category - 'architecture', 'security', 'style', 'performance', 'testing', 'documentation', etc.

TYPE:Optional[Union[str, TenetCategory]]DEFAULT:None

session

Optional session to bind this tenet to

TYPE:Optional[str]DEFAULT:None

author

Optional author identifier

TYPE:Optional[str]DEFAULT:None

RETURNSDESCRIPTION
Tenet

The created Tenet object

Example
# Add a high-priority security tenet
tenet = ten.add_tenet(
    "Always validate and sanitize user input",
    priority="high",
    category="security",
)

# Add a session-specific tenet
ten.add_tenet(
    "Use async/await for all I/O operations",
    session="async-refactor",
)

instill_tenets
Python
instill_tenets(session: Optional[str] = None, force: bool = False) -> Dict[str, Any]

Instill pending tenets.

This marks tenets as active and ready to be injected into future contexts. By default, only pending tenets are instilled, but you can force re-instillation of all tenets.

PARAMETERDESCRIPTION
session

Optional session to instill tenets for

TYPE:Optional[str]DEFAULT:None

force

If True, re-instill even already instilled tenets

TYPE:boolDEFAULT:False

RETURNSDESCRIPTION
Dict[str, Any]

Dictionary with instillation results including count and tenets

Example
# Instill all pending tenets
result = ten.instill_tenets()
print(f"Instilled {result['count']} tenets")

# Force re-instillation
ten.instill_tenets(force=True)

add_file_to_session
Python
add_file_to_session(file_path: Union[str, Path], session: Optional[str] = None) -> bool

Pin a single file into a session so it is prioritized in future distill calls.

PARAMETERDESCRIPTION
file_path

Path to file

TYPE:Union[str, Path]

session

Optional session name

TYPE:Optional[str]DEFAULT:None

add_folder_to_session
Python
add_folder_to_session(folder_path: Union[str, Path], session: Optional[str] = None, include_patterns: Optional[list[str]] = None, exclude_patterns: Optional[list[str]] = None, respect_gitignore: bool = True, recursive: bool = True) -> int

Pin all files in a folder (optionally filtered) into a session.

PARAMETERDESCRIPTION
folder_path

Directory to scan

TYPE:Union[str, Path]

session

Session name

TYPE:Optional[str]DEFAULT:None

include_patterns

Include filter

TYPE:Optional[list[str]]DEFAULT:None

exclude_patterns

Exclude filter

TYPE:Optional[list[str]]DEFAULT:None

respect_gitignore

Respect .gitignore

TYPE:boolDEFAULT:True

recursive

Recurse into subdirectories

TYPE:boolDEFAULT:True
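
A hedged sketch of pinning files before a session-scoped distill; the paths, session name, and glob-style include filter are assumptions:

from tenets import Tenets

ten = Tenets()
ten.add_file_to_session("src/core/auth.py", session="oauth-feature")
count = ten.add_folder_to_session(
    "src/api",
    session="oauth-feature",
    include_patterns=["*.py"],   # pattern syntax assumed
    recursive=True,
)
result = ten.distill("implement OAuth2", session_name="oauth-feature")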

list_tenets
Python
list_tenets(pending_only: bool = False, instilled_only: bool = False, session: Optional[str] = None, category: Optional[Union[str, TenetCategory]] = None) -> list[dict[str, Any]]

List tenets with optional filtering.

PARAMETERDESCRIPTION
pending_only

Only show pending (not yet instilled) tenets

TYPE:boolDEFAULT:False

instilled_only

Only show instilled tenets

TYPE:boolDEFAULT:False

session

Filter by session binding

TYPE:Optional[str]DEFAULT:None

category

Filter by category

TYPE:Optional[Union[str, TenetCategory]]DEFAULT:None

RETURNSDESCRIPTION
list[dict[str, Any]]

List of tenet dictionaries

Example
# List all tenets
all_tenets = ten.list_tenets()

# List only pending security tenets
pending_security = ten.list_tenets(
    pending_only=True,
    category="security",
)

get_tenet
Python
get_tenet(tenet_id: str) -> Optional[Tenet]

Get a specific tenet by ID.

PARAMETERDESCRIPTION
tenet_id

Tenet ID (can be partial)

TYPE:str

RETURNSDESCRIPTION
Optional[Tenet]

The Tenet object or None if not found

remove_tenet
Python
remove_tenet(tenet_id: str) -> bool

Remove (archive) a tenet.

PARAMETERDESCRIPTION
tenet_id

Tenet ID (can be partial)

TYPE:str

RETURNSDESCRIPTION
bool

True if removed, False if not found

get_pending_tenets
Python
get_pending_tenets(session: Optional[str] = None) -> List[Tenet]

Get all pending tenets.

PARAMETERDESCRIPTION
session

Optional session filter

TYPE:Optional[str]DEFAULT:None

RETURNSDESCRIPTION
List[Tenet]

List of pending Tenet objects

export_tenets
Python
export_tenets(format: str = 'yaml', session: Optional[str] = None) -> str

Export tenets to YAML or JSON.

PARAMETERDESCRIPTION
format

Export format - 'yaml' or 'json'

TYPE:strDEFAULT:'yaml'

session

Optional session filter

TYPE:Optional[str]DEFAULT:None

RETURNSDESCRIPTION
str

Serialized tenets string

import_tenets
Python
import_tenets(file_path: Union[str, Path], session: Optional[str] = None) -> int

Import tenets from file.

PARAMETERDESCRIPTION
file_path

Path to import file (YAML or JSON)

TYPE:Union[str, Path]

session

Optional session to bind imported tenets to

TYPE:Optional[str]DEFAULT:None

RETURNSDESCRIPTION
int

Number of tenets imported

examine
Python
examine(path: Optional[Union[str, Path]] = None, deep: bool = False, include_git: bool = True, output_metadata: bool = False) -> Any

Examine codebase structure and metrics.

Provides detailed analysis of your code including file counts, language distribution, complexity metrics, and potential issues.

PARAMETERDESCRIPTION
path

Path to examine (default: current directory)

TYPE:Optional[Union[str, Path]]DEFAULT:None

deep

Perform deep analysis with AST parsing

TYPE:boolDEFAULT:False

include_git

Include git statistics

TYPE:boolDEFAULT:True

output_metadata

Include detailed metadata in result

TYPE:boolDEFAULT:False

RETURNSDESCRIPTION
Any

AnalysisResult object with comprehensive codebase analysis

Example
# Basic examination
analysis = ten.examine()
print(f"Found {analysis.total_files} files")
print(f"Languages: {', '.join(analysis.languages)}")

# Deep analysis with git
analysis = ten.examine(deep=True, include_git=True)

track_changes
Python
track_changes(path: Optional[Union[str, Path]] = None, since: str = '1 week', author: Optional[str] = None, file_pattern: Optional[str] = None) -> Dict[str, Any]

Track code changes over time.

PARAMETERDESCRIPTION
path

Repository path (default: current directory)

TYPE:Optional[Union[str, Path]]DEFAULT:None

since

Time period (e.g., '1 week', '3 days', 'yesterday')

TYPE:strDEFAULT:'1 week'

author

Filter by author

TYPE:Optional[str]DEFAULT:None

file_pattern

Filter by file pattern

TYPE:Optional[str]DEFAULT:None

RETURNSDESCRIPTION
Dict[str, Any]

Dictionary with change information
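
For instance, a minimal sketch; the author value is hypothetical, and the structure of the returned dictionary is not assumed here:

from tenets import Tenets

changes = Tenets().track_changes(since="3 days", author="alice")   # hypothetical author
print(changes)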

momentum
Python
momentum(path: Optional[Union[str, Path]] = None, since: str = 'last-month', team: bool = False, author: Optional[str] = None) -> Dict[str, Any]

Track development momentum and velocity.

PARAMETERDESCRIPTION
path

Repository path

TYPE:Optional[Union[str, Path]]DEFAULT:None

since

Time period to analyze

TYPE:strDEFAULT:'last-month'

team

Show team-wide statistics

TYPE:boolDEFAULT:False

author

Show stats for specific author

TYPE:Optional[str]DEFAULT:None

RETURNSDESCRIPTION
Dict[str, Any]

Dictionary with momentum metrics
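
And a similar sketch for momentum, using a time period in the format shown above:

from tenets import Tenets

metrics = Tenets().momentum(since="last-month", team=True)
print(metrics)   # velocity metrics as a dictionary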

estimate_cost
Python
estimate_cost(result: ContextResult, model: str) -> Dict[str, Any]

Estimate the cost of using generated context with an LLM.

PARAMETERDESCRIPTION
result

ContextResult from distill()

TYPE:ContextResult

model

Target model name

TYPE:str

RETURNSDESCRIPTION
Dict[str, Any]

Dictionary with token counts and cost estimates
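
A short sketch of estimating cost for a generated context; the model name is only an example:

from tenets import Tenets

ten = Tenets()
result = ten.distill("review API endpoints", model="gpt-4o")
estimate = ten.estimate_cost(result, model="gpt-4o")
print(estimate)   # token counts and cost estimates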

set_system_instruction
Python
set_system_instruction(instruction: str, enable: bool = True, position: str = 'top', format: str = 'markdown', save: bool = False) -> None

Set the system instruction for AI interactions.

PARAMETERDESCRIPTION
instruction

The system instruction text

TYPE:str

enable

Whether to auto-inject

TYPE:boolDEFAULT:True

position

Where to inject ('top', 'after_header', 'before_content')

TYPE:strDEFAULT:'top'

format

Format type ('markdown', 'xml', 'comment', 'plain')

TYPE:strDEFAULT:'markdown'

save

Whether to save to config file

TYPE:boolDEFAULT:False

get_system_instruction
Python
get_system_instruction() -> Optional[str]

Get the current system instruction.

RETURNSDESCRIPTION
Optional[str]

The system instruction text or None

clear_system_instruction
Python
clear_system_instruction(save: bool = False) -> None

Clear the system instruction.

PARAMETERDESCRIPTION
save

Whether to save to config file

TYPE:boolDEFAULT:False
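
A sketch tying together the three system-instruction methods above; the instruction text is illustrative:

from tenets import Tenets

ten = Tenets()
ten.set_system_instruction(
    "You are reviewing a Python codebase; be concise.",
    position="top",
    format="markdown",
)
print(ten.get_system_instruction())
ten.clear_system_instruction()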

Functions

get_logger

Python
get_logger(name: Optional[str] = None, level: Optional[int] = None) -> logging.Logger

Return a configured logger.

Environment variables
  • TENETS_LOG_LEVEL: DEBUG|INFO|WARNING|ERROR|CRITICAL
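
For example, a minimal usage sketch:

import logging
from tenets import get_logger

logger = get_logger(__name__, level=logging.DEBUG)
logger.debug("scanner started")   # message text is illustrative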

Main Subpackages

  • cli - Command-line interface
  • core - Core functionality and algorithms
  • models - Data models and structures
  • storage - Storage backends and persistence
  • utils - Utility functions and helpers
  • viz - Visualization and reporting tools

Direct Modules

  • config - Configuration management