metrics

Full name: tenets.core.examiner.metrics

Metrics calculation module for code analysis.

This module provides comprehensive metrics calculation for codebases, including size metrics, complexity aggregations, code quality indicators, and statistical analysis across files and languages.

The MetricsCalculator class processes analyzed files to extract quantitative measurements that help assess code health, maintainability, and quality.

Classes

MetricsReport dataclass

Python
MetricsReport(
    total_files: int = 0,
    total_lines: int = 0,
    total_blank_lines: int = 0,
    total_comment_lines: int = 0,
    total_code_lines: int = 0,
    total_functions: int = 0,
    total_classes: int = 0,
    total_imports: int = 0,
    avg_file_size: float = 0.0,
    avg_complexity: float = 0.0,
    max_complexity: float = 0.0,
    min_complexity: float = float('inf'),
    complexity_std_dev: float = 0.0,
    documentation_ratio: float = 0.0,
    test_coverage: float = 0.0,
    code_duplication_ratio: float = 0.0,
    technical_debt_score: float = 0.0,
    maintainability_index: float = 0.0,
    languages: Dict[str, Dict[str, Any]] = dict(),
    file_types: Dict[str, int] = dict(),
    size_distribution: Dict[str, int] = dict(),
    complexity_distribution: Dict[str, int] = dict(),
    largest_files: List[Dict[str, Any]] = list(),
    most_complex_files: List[Dict[str, Any]] = list(),
    most_imported_modules: List[Tuple[str, int]] = list(),
)

Comprehensive metrics report for analyzed code.

Aggregates various code metrics to provide quantitative insights into codebase characteristics, including size, complexity, documentation, and quality indicators.

Attributes:
    total_files (int): Total number of files analyzed
    total_lines (int): Total lines across all files (including blanks and comments)
    total_blank_lines (int): Total blank lines
    total_comment_lines (int): Total comment lines
    total_code_lines (int): Total actual code lines (excluding blanks and comments)
    total_functions (int): Total number of functions/methods
    total_classes (int): Total number of classes
    total_imports (int): Total number of import statements
    avg_file_size (float): Average file size in lines
    avg_complexity (float): Average cyclomatic complexity
    max_complexity (float): Maximum cyclomatic complexity found
    min_complexity (float): Minimum cyclomatic complexity found
    complexity_std_dev (float): Standard deviation of complexity
    documentation_ratio (float): Ratio of comment lines to code lines
    test_coverage (float): Estimated test coverage (if test files are found)
    code_duplication_ratio (float): Estimated code duplication ratio
    technical_debt_score (float): Calculated technical debt score
    maintainability_index (float): Overall maintainability index
    languages (Dict[str, Dict[str, Any]]): Language-specific metrics
    file_types (Dict[str, int]): Distribution of file types
    size_distribution (Dict[str, int]): File size distribution buckets
    complexity_distribution (Dict[str, int]): Complexity distribution buckets
    largest_files (List[Dict[str, Any]]): Largest files by line count
    most_complex_files (List[Dict[str, Any]]): Files with highest complexity
    most_imported_modules (List[Tuple[str, int]]): Most frequently imported modules

Attributes
code_to_comment_ratio property
Python
code_to_comment_ratio: float

Calculate code-to-comment ratio.

Returns:
    float: Ratio of code lines to comment lines

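As a rough illustration, the ratio is simple division; the zero-comment fallback in this sketch is an assumption for demonstration, not behavior confirmed from the library source:

```python
def code_to_comment_ratio(code_lines: int, comment_lines: int) -> float:
    """Sketch of a code-to-comment ratio; the zero-divisor fallback is assumed."""
    if comment_lines == 0:
        return float(code_lines)  # assumed fallback when no comments exist
    return code_lines / comment_lines

print(code_to_comment_ratio(70, 35))  # 2.0
```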
avg_file_complexity property
Python
avg_file_complexity: float

Calculate average complexity per file.

Returns:
    float: Average complexity across all files

quality_score property
Python
quality_score: float

Calculate overall code quality score (0-100).

Combines various metrics to produce a single quality indicator.

Returns:
    float: Quality score between 0 and 100

Functions
to_dict
Python
to_dict() -> Dict[str, Any]

Convert metrics report to dictionary.

Returns:
    Dict[str, Any]: Dictionary representation of metrics

Source code in tenets/core/examiner/metrics.py
Python
def to_dict(self) -> Dict[str, Any]:
    """Convert metrics report to dictionary.

    Returns:
        Dict[str, Any]: Dictionary representation of metrics
    """
    return {
        "total_files": self.total_files,
        "total_lines": self.total_lines,
        "total_blank_lines": self.total_blank_lines,
        "total_comment_lines": self.total_comment_lines,
        "total_code_lines": self.total_code_lines,
        "total_functions": self.total_functions,
        "total_classes": self.total_classes,
        "total_imports": self.total_imports,
        "avg_file_size": round(self.avg_file_size, 2),
        "avg_complexity": round(self.avg_complexity, 2),
        "max_complexity": self.max_complexity,
        "min_complexity": self.min_complexity if self.min_complexity != float("inf") else 0,
        "complexity_std_dev": round(self.complexity_std_dev, 2),
        "documentation_ratio": round(self.documentation_ratio, 3),
        "test_coverage": round(self.test_coverage, 2),
        "code_duplication_ratio": round(self.code_duplication_ratio, 3),
        "technical_debt_score": round(self.technical_debt_score, 2),
        "maintainability_index": round(self.maintainability_index, 2),
        "languages": self.languages,
        "file_types": self.file_types,
        "size_distribution": self.size_distribution,
        "complexity_distribution": self.complexity_distribution,
        "largest_files": self.largest_files[:10],
        "most_complex_files": self.most_complex_files[:10],
        "most_imported_modules": self.most_imported_modules[:10],
    }
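One detail worth noting in to_dict() above: min_complexity defaults to float('inf') (no complexity recorded yet), and serialization coerces that sentinel to 0 while rounding the derived floats. A standalone sketch of the same normalization:

```python
import json

def normalize_report(avg_complexity: float, min_complexity: float,
                     documentation_ratio: float) -> dict:
    # Mirrors to_dict(): round derived floats and coerce the
    # float('inf') sentinel (no files analyzed) to 0.
    return {
        "avg_complexity": round(avg_complexity, 2),
        "min_complexity": min_complexity if min_complexity != float("inf") else 0,
        "documentation_ratio": round(documentation_ratio, 3),
    }

empty = normalize_report(0.0, float("inf"), 0.0)
print(json.dumps(empty))  # {"avg_complexity": 0.0, "min_complexity": 0, "documentation_ratio": 0.0}
```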

MetricsCalculator

Python
MetricsCalculator(config: TenetsConfig)

Calculator for code metrics extraction and aggregation.

Processes analyzed files to compute comprehensive metrics including size measurements, complexity statistics, quality indicators, and distributional analysis.

Attributes:
    config: Configuration object
    logger: Logger instance

Initialize metrics calculator with configuration.

Args:
    config (TenetsConfig): TenetsConfig instance with metrics settings

Source code in tenets/core/examiner/metrics.py
Python
def __init__(self, config: TenetsConfig):
    """Initialize metrics calculator with configuration.

    Args:
        config: TenetsConfig instance with metrics settings
    """
    self.config = config
    self.logger = get_logger(__name__)
Functions
calculate
Python
calculate(files: List[Any]) -> MetricsReport

Calculate comprehensive metrics for analyzed files.

Processes a list of analyzed file objects to extract and aggregate various code metrics, producing a complete metrics report.

Args:
    files (List[Any]): List of analyzed file objects

Returns:
    MetricsReport: Comprehensive metrics analysis

Example:
    >>> calculator = MetricsCalculator(config)
    >>> report = calculator.calculate(analyzed_files)
    >>> print(f"Average complexity: {report.avg_complexity}")

Source code in tenets/core/examiner/metrics.py
Python
def calculate(self, files: List[Any]) -> MetricsReport:
    """Calculate comprehensive metrics for analyzed files.

    Processes a list of analyzed file objects to extract and aggregate
    various code metrics, producing a complete metrics report.

    Args:
        files: List of analyzed file objects

    Returns:
        MetricsReport: Comprehensive metrics analysis

    Example:
        >>> calculator = MetricsCalculator(config)
        >>> report = calculator.calculate(analyzed_files)
        >>> print(f"Average complexity: {report.avg_complexity}")
    """
    self.logger.debug(f"Calculating metrics for {len(files)} files")

    report = MetricsReport()

    if not files:
        return report

    # Collect raw metrics
    self._collect_basic_metrics(files, report)

    # Calculate distributions
    self._calculate_distributions(files, report)

    # Identify top items
    self._identify_top_items(files, report)

    # Calculate derived metrics
    self._calculate_derived_metrics(files, report)

    # Calculate language-specific metrics
    self._calculate_language_metrics(files, report)

    # Estimate quality indicators
    self._estimate_quality_indicators(files, report)

    self.logger.debug(f"Metrics calculation complete: {report.total_files} files")

    return report
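The aggregation helpers called above are internal and not shown here, but the statistical fields they populate (avg_complexity, max_complexity, min_complexity, complexity_std_dev) can be sketched with the standard library. Whether the library computes population or sample standard deviation is an assumption in this sketch:

```python
import statistics

# Per-file cyclomatic complexities (illustrative values).
complexities = [2, 5, 3, 8, 4]

avg_complexity = statistics.mean(complexities)
max_complexity = max(complexities)
min_complexity = min(complexities)
# Population std dev is an assumption; the library may use the sample form.
complexity_std_dev = statistics.pstdev(complexities)

print(avg_complexity, max_complexity, min_complexity)  # 4.4 8 2
```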
calculate_file_metrics
Python
calculate_file_metrics(file_analysis: Any) -> Dict[str, Any]

Calculate metrics for a single file.

Extracts detailed metrics from a single file analysis object, providing file-specific measurements and statistics.

Args:
    file_analysis (Any): Analyzed file object

Returns:
    Dict[str, Any]: File-specific metrics

Example:
    >>> metrics = calculator.calculate_file_metrics(file_analysis)
    >>> print(f"File complexity: {metrics['complexity']}")

Source code in tenets/core/examiner/metrics.py
Python
def calculate_file_metrics(self, file_analysis: Any) -> Dict[str, Any]:
    """Calculate metrics for a single file.

    Extracts detailed metrics from a single file analysis object,
    providing file-specific measurements and statistics.

    Args:
        file_analysis: Analyzed file object

    Returns:
        Dict[str, Any]: File-specific metrics

    Example:
        >>> metrics = calculator.calculate_file_metrics(file_analysis)
        >>> print(f"File complexity: {metrics['complexity']}")
    """

    # Safely determine lengths for possibly mocked attributes
    def _safe_len(obj: Any) -> int:
        try:
            return len(obj)  # type: ignore[arg-type]
        except Exception:
            return 0

    metrics = {
        "lines": self._safe_int(getattr(file_analysis, "lines", 0), 0),
        "blank_lines": self._safe_int(getattr(file_analysis, "blank_lines", 0), 0),
        "comment_lines": self._safe_int(getattr(file_analysis, "comment_lines", 0), 0),
        "code_lines": 0,
        "functions": _safe_len(getattr(file_analysis, "functions", [])),
        "classes": _safe_len(getattr(file_analysis, "classes", [])),
        "imports": _safe_len(getattr(file_analysis, "imports", [])),
        "complexity": 0,
        "documentation_ratio": 0.0,
    }

    # Calculate code lines
    metrics["code_lines"] = metrics["lines"] - metrics["blank_lines"] - metrics["comment_lines"]

    # Extract complexity
    if hasattr(file_analysis, "complexity") and file_analysis.complexity:
        metrics["complexity"] = self._safe_int(
            getattr(file_analysis.complexity, "cyclomatic", 0), 0
        )

    # Calculate documentation ratio
    if metrics["code_lines"] > 0:
        metrics["documentation_ratio"] = self._safe_float(metrics["comment_lines"]) / float(
            metrics["code_lines"]
        )

    # Add language and path info
    metrics["language"] = getattr(file_analysis, "language", "unknown")
    raw_path = getattr(file_analysis, "path", "")
    # Coerce path and name robustly for mocks/Path-like/str
    try:
        metrics["path"] = str(raw_path) if raw_path is not None else ""
    except Exception:
        metrics["path"] = ""
    try:
        # Prefer attribute .name when available
        if hasattr(raw_path, "name") and not isinstance(raw_path, str):
            name_val = raw_path.name
            metrics["name"] = str(name_val)
        elif metrics["path"]:
            metrics["name"] = Path(metrics["path"]).name
        else:
            metrics["name"] = "unknown"
    except Exception:
        metrics["name"] = "unknown"

    return metrics

Functions

calculate_metrics

Python
calculate_metrics(files: List[Any], config: Optional[TenetsConfig] = None) -> MetricsReport

Convenience function to calculate metrics for files.

Creates a MetricsCalculator instance and calculates comprehensive metrics for the provided files.

Args:
    files (List[Any]): List of analyzed file objects
    config (Optional[TenetsConfig], default None): Optional configuration (uses defaults if None)

Returns:
    MetricsReport: Comprehensive metrics analysis

Example:
    >>> report = calculate_metrics(analyzed_files)
    >>> print(f"Quality score: {report.quality_score}")

Source code in tenets/core/examiner/metrics.py
Python
def calculate_metrics(files: List[Any], config: Optional[TenetsConfig] = None) -> MetricsReport:
    """Convenience function to calculate metrics for files.

    Creates a MetricsCalculator instance and calculates comprehensive
    metrics for the provided files.

    Args:
        files: List of analyzed file objects
        config: Optional configuration (uses defaults if None)

    Returns:
        MetricsReport: Comprehensive metrics analysis

    Example:
        >>> report = calculate_metrics(analyzed_files)
        >>> print(f"Quality score: {report.quality_score}")
    """
    if config is None:
        config = TenetsConfig()

    calculator = MetricsCalculator(config)
    return calculator.calculate(files)