metrics¶
Full name: tenets.core.examiner.metrics
Metrics calculation module for code analysis.
This module provides comprehensive metrics calculation for codebases, including size metrics, complexity aggregations, code quality indicators, and statistical analysis across files and languages.
The MetricsCalculator class processes analyzed files to extract quantitative measurements that help assess code health, maintainability, and quality.
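A typical entry point is the module-level calculate_metrics convenience function documented at the bottom of this page. A minimal usage sketch, assuming analyzed_files comes from the project's file-analysis step:

from tenets.core.examiner.metrics import calculate_metrics

# analyzed_files: list of analyzed file objects produced by tenets' analyzer (assumed)
report = calculate_metrics(analyzed_files)
print(f"{report.total_files} files, quality score {report.quality_score:.1f}")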
Classes¶
MetricsReport dataclass¶
MetricsReport(
    total_files: int = 0,
    total_lines: int = 0,
    total_blank_lines: int = 0,
    total_comment_lines: int = 0,
    total_code_lines: int = 0,
    total_functions: int = 0,
    total_classes: int = 0,
    total_imports: int = 0,
    avg_file_size: float = 0.0,
    avg_complexity: float = 0.0,
    max_complexity: float = 0.0,
    min_complexity: float = float('inf'),
    complexity_std_dev: float = 0.0,
    documentation_ratio: float = 0.0,
    test_coverage: float = 0.0,
    code_duplication_ratio: float = 0.0,
    technical_debt_score: float = 0.0,
    maintainability_index: float = 0.0,
    languages: Dict[str, Dict[str, Any]] = dict(),
    file_types: Dict[str, int] = dict(),
    size_distribution: Dict[str, int] = dict(),
    complexity_distribution: Dict[str, int] = dict(),
    largest_files: List[Dict[str, Any]] = list(),
    most_complex_files: List[Dict[str, Any]] = list(),
    most_imported_modules: List[Tuple[str, int]] = list(),
)
Comprehensive metrics report for analyzed code.
Aggregates various code metrics to provide quantitative insights into codebase characteristics, including size, complexity, documentation, and quality indicators.
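Per the signature above, every field has a neutral default, so a bare instantiation yields an empty report that the calculator fills in incrementally:

report = MetricsReport()
assert report.total_files == 0
assert report.min_complexity == float("inf")  # sentinel until real complexity values are recorded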
ATTRIBUTE | TYPE | DESCRIPTION |
---|---|---|
total_files | int | Total number of files analyzed |
total_lines | int | Total lines across all files (code, comments, and blanks) |
total_blank_lines | int | Total blank lines |
total_comment_lines | int | Total comment lines |
total_code_lines | int | Total actual code lines (excluding blanks and comments) |
total_functions | int | Total number of functions/methods |
total_classes | int | Total number of classes |
total_imports | int | Total number of import statements |
avg_file_size | float | Average file size in lines |
avg_complexity | float | Average cyclomatic complexity |
max_complexity | float | Maximum cyclomatic complexity found |
min_complexity | float | Minimum cyclomatic complexity found |
complexity_std_dev | float | Standard deviation of complexity |
documentation_ratio | float | Ratio of comment lines to code lines |
test_coverage | float | Estimated test coverage (if test files found) |
languages | Dict[str, Dict[str, Any]] | Dictionary of language-specific metrics |
file_types | Dict[str, int] | Distribution of file types |
size_distribution | Dict[str, int] | File size distribution buckets |
complexity_distribution | Dict[str, int] | Complexity distribution buckets |
largest_files | List[Dict[str, Any]] | List of largest files by line count |
most_complex_files | List[Dict[str, Any]] | List of files with highest complexity |
most_imported_modules | List[Tuple[str, int]] | Most frequently imported modules |
code_duplication_ratio | float | Estimated code duplication ratio |
technical_debt_score | float | Calculated technical debt score |
maintainability_index | float | Overall maintainability index |
Attributes¶
code_to_comment_ratio property¶
Calculate the code-to-comment ratio.
RETURNS | DESCRIPTION |
---|---|
float | Ratio of code lines to comment lines |
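The property body is not expanded on this page; a minimal sketch of what it likely computes, assuming a division-by-zero guard when no comment lines exist:

@property
def code_to_comment_ratio(self) -> float:
    # Assumed behavior: avoid division by zero when there are no comments
    if self.total_comment_lines == 0:
        return float("inf")
    return self.total_code_lines / self.total_comment_lines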
avg_file_complexity property¶
Calculate average complexity per file.
RETURNS | DESCRIPTION |
---|---|
float | Average complexity across all files |
quality_score property¶
Calculate overall code quality score (0-100).
Combines various metrics to produce a single quality indicator.
RETURNS | DESCRIPTION |
---|---|
float | Quality score between 0 and 100 |
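The exact weighting is internal to the library and not shown on this page. The sketch below is one plausible way a composite 0-100 score could combine the report's fields; it is illustrative only, not tenets' actual formula:

def illustrative_quality_score(report: MetricsReport) -> float:
    # Hypothetical weighting, NOT the library's real formula
    score = 100.0
    score -= min(report.avg_complexity * 2, 30)         # penalize average complexity, capped
    score -= report.code_duplication_ratio * 20         # penalize estimated duplication
    score += min(report.documentation_ratio * 100, 10)  # reward documentation, capped
    return max(0.0, min(100.0, score))                  # clamp to the 0-100 range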
Functions¶
to_dict¶
Convert metrics report to dictionary.
RETURNS | DESCRIPTION |
---|---|
Dict[str, Any] | Dictionary representation of metrics |
Source code in tenets/core/examiner/metrics.py
def to_dict(self) -> Dict[str, Any]:
"""Convert metrics report to dictionary.
Returns:
Dict[str, Any]: Dictionary representation of metrics
"""
return {
"total_files": self.total_files,
"total_lines": self.total_lines,
"total_blank_lines": self.total_blank_lines,
"total_comment_lines": self.total_comment_lines,
"total_code_lines": self.total_code_lines,
"total_functions": self.total_functions,
"total_classes": self.total_classes,
"total_imports": self.total_imports,
"avg_file_size": round(self.avg_file_size, 2),
"avg_complexity": round(self.avg_complexity, 2),
"max_complexity": self.max_complexity,
"min_complexity": self.min_complexity if self.min_complexity != float("inf") else 0,
"complexity_std_dev": round(self.complexity_std_dev, 2),
"documentation_ratio": round(self.documentation_ratio, 3),
"test_coverage": round(self.test_coverage, 2),
"code_duplication_ratio": round(self.code_duplication_ratio, 3),
"technical_debt_score": round(self.technical_debt_score, 2),
"maintainability_index": round(self.maintainability_index, 2),
"languages": self.languages,
"file_types": self.file_types,
"size_distribution": self.size_distribution,
"complexity_distribution": self.complexity_distribution,
"largest_files": self.largest_files[:10],
"most_complex_files": self.most_complex_files[:10],
"most_imported_modules": self.most_imported_modules[:10],
}
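Because to_dict returns only built-in types, with floats pre-rounded and the top-N lists truncated to ten entries, the result can usually be serialized directly, assuming the nested language metrics hold JSON-friendly values:

import json

# report is a populated MetricsReport
print(json.dumps(report.to_dict(), indent=2))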
MetricsCalculator¶
Calculator for code metrics extraction and aggregation.
Processes analyzed files to compute comprehensive metrics including size measurements, complexity statistics, quality indicators, and distributional analysis.
ATTRIBUTE | DESCRIPTION |
---|---|
config | Configuration object |
logger | Logger instance |
Initialize metrics calculator with configuration.
PARAMETER | TYPE | DESCRIPTION |
---|---|---|
config | TenetsConfig | TenetsConfig instance with metrics settings |
Source code in tenets/core/examiner/metrics.py
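The initializer's source is collapsed in this rendering. A minimal sketch of what it presumably does; the stdlib logger is an assumption, as tenets may use its own logging helper:

import logging

class MetricsCalculator:
    def __init__(self, config: TenetsConfig):
        self.config = config                       # store metrics settings
        self.logger = logging.getLogger(__name__)  # assumed stdlib logger; real helper may differ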
Functions¶
calculate¶
Calculate comprehensive metrics for analyzed files.
Processes a list of analyzed file objects to extract and aggregate various code metrics, producing a complete metrics report.
PARAMETER | TYPE | DESCRIPTION |
---|---|---|
files | List[Any] | List of analyzed file objects |
RETURNS | DESCRIPTION |
---|---|
MetricsReport | Comprehensive metrics analysis |
Example
>>> calculator = MetricsCalculator(config)
>>> report = calculator.calculate(analyzed_files)
>>> print(f"Average complexity: {report.avg_complexity}")
Source code in tenets/core/examiner/metrics.py
def calculate(self, files: List[Any]) -> MetricsReport:
"""Calculate comprehensive metrics for analyzed files.
Processes a list of analyzed file objects to extract and aggregate
various code metrics, producing a complete metrics report.
Args:
files: List of analyzed file objects
Returns:
MetricsReport: Comprehensive metrics analysis
Example:
>>> calculator = MetricsCalculator(config)
>>> report = calculator.calculate(analyzed_files)
>>> print(f"Average complexity: {report.avg_complexity}")
"""
self.logger.debug(f"Calculating metrics for {len(files)} files")
report = MetricsReport()
if not files:
return report
# Collect raw metrics
self._collect_basic_metrics(files, report)
# Calculate distributions
self._calculate_distributions(files, report)
# Identify top items
self._identify_top_items(files, report)
# Calculate derived metrics
self._calculate_derived_metrics(files, report)
# Calculate language-specific metrics
self._calculate_language_metrics(files, report)
# Estimate quality indicators
self._estimate_quality_indicators(files, report)
self.logger.debug(f"Metrics calculation complete: {report.total_files} files")
return report
calculate_file_metrics¶
Calculate metrics for a single file.
Extracts detailed metrics from a single file analysis object, providing file-specific measurements and statistics.
PARAMETER | TYPE | DESCRIPTION |
---|---|---|
file_analysis | Any | Analyzed file object |
RETURNS | DESCRIPTION |
---|---|
Dict[str, Any] | File-specific metrics |
Example
>>> metrics = calculator.calculate_file_metrics(file_analysis)
>>> print(f"File complexity: {metrics['complexity']}")
Source code in tenets/core/examiner/metrics.py
def calculate_file_metrics(self, file_analysis: Any) -> Dict[str, Any]:
"""Calculate metrics for a single file.
Extracts detailed metrics from a single file analysis object,
providing file-specific measurements and statistics.
Args:
file_analysis: Analyzed file object
Returns:
Dict[str, Any]: File-specific metrics
Example:
>>> metrics = calculator.calculate_file_metrics(file_analysis)
>>> print(f"File complexity: {metrics['complexity']}")
"""
# Safely determine lengths for possibly mocked attributes
def _safe_len(obj: Any) -> int:
try:
return len(obj) # type: ignore[arg-type]
except Exception:
return 0
metrics = {
"lines": self._safe_int(getattr(file_analysis, "lines", 0), 0),
"blank_lines": self._safe_int(getattr(file_analysis, "blank_lines", 0), 0),
"comment_lines": self._safe_int(getattr(file_analysis, "comment_lines", 0), 0),
"code_lines": 0,
"functions": _safe_len(getattr(file_analysis, "functions", [])),
"classes": _safe_len(getattr(file_analysis, "classes", [])),
"imports": _safe_len(getattr(file_analysis, "imports", [])),
"complexity": 0,
"documentation_ratio": 0.0,
}
# Calculate code lines
metrics["code_lines"] = metrics["lines"] - metrics["blank_lines"] - metrics["comment_lines"]
# Extract complexity
if hasattr(file_analysis, "complexity") and file_analysis.complexity:
metrics["complexity"] = self._safe_int(
getattr(file_analysis.complexity, "cyclomatic", 0), 0
)
# Calculate documentation ratio
if metrics["code_lines"] > 0:
metrics["documentation_ratio"] = self._safe_float(metrics["comment_lines"]) / float(
metrics["code_lines"]
)
# Add language and path info
metrics["language"] = getattr(file_analysis, "language", "unknown")
raw_path = getattr(file_analysis, "path", "")
# Coerce path and name robustly for mocks/Path-like/str
try:
metrics["path"] = str(raw_path) if raw_path is not None else ""
except Exception:
metrics["path"] = ""
try:
# Prefer attribute .name when available
if hasattr(raw_path, "name") and not isinstance(raw_path, str):
name_val = raw_path.name
metrics["name"] = str(name_val)
elif metrics["path"]:
metrics["name"] = Path(metrics["path"]).name
else:
metrics["name"] = "unknown"
except Exception:
metrics["name"] = "unknown"
return metrics
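As the source shows, calculate_file_metrics reads every attribute defensively via getattr, so any object exposing the expected fields works. A small stand-in built with types.SimpleNamespace (a testing convenience, not a type the library requires):

from types import SimpleNamespace

stub = SimpleNamespace(
    path="src/example.py",
    language="python",
    lines=120,
    blank_lines=15,
    comment_lines=25,
    functions=["parse", "render"],
    classes=["Widget"],
    imports=["os", "sys"],
    complexity=SimpleNamespace(cyclomatic=7),
)
metrics = calculator.calculate_file_metrics(stub)
# code_lines = 120 - 15 - 25 = 80; documentation_ratio = 25 / 80 = 0.3125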
Functions¶
calculate_metrics¶
Convenience function to calculate metrics for files.
Creates a MetricsCalculator instance and calculates comprehensive metrics for the provided files.
PARAMETER | TYPE | DESCRIPTION |
---|---|---|
files | List[Any] | List of analyzed file objects |
config | Optional[TenetsConfig] | Optional configuration (uses defaults if None) |
RETURNS | DESCRIPTION |
---|---|
MetricsReport | Comprehensive metrics analysis |
Example
>>> report = calculate_metrics(analyzed_files)
>>> print(f"Quality score: {report.quality_score}")
Source code in tenets/core/examiner/metrics.py
def calculate_metrics(files: List[Any], config: Optional[TenetsConfig] = None) -> MetricsReport:
"""Convenience function to calculate metrics for files.
Creates a MetricsCalculator instance and calculates comprehensive
metrics for the provided files.
Args:
files: List of analyzed file objects
config: Optional configuration (uses defaults if None)
Returns:
MetricsReport: Comprehensive metrics analysis
Example:
>>> report = calculate_metrics(analyzed_files)
>>> print(f"Quality score: {report.quality_score}")
"""
if config is None:
config = TenetsConfig()
calculator = MetricsCalculator(config)
return calculator.calculate(files)