Our Methodology

At Karmameter, we believe that public trust must be earned through rigorous, verifiable data. Our methodology is built on a commitment to absolute objectivity, transparency, and data integrity. We do not editorialize; we quantify.


1. Data Acquisition

We aggregate raw data from over 12,000 unique municipal and federal sources. Our automated scrapers run daily ingestion cycles, converting records from disparate formats into a common schema so that accountability data stays current.

Ingestion Sources

  • PDF Disclosures: Asset declarations, expense reports
  • Official Gazettes: Legal notices, tender awards
  • Municipal Portals: Meeting minutes, attendance logs
  • Court Records: Public litigation involving officials
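To illustrate how records from disparate formats are brought into a common schema, here is a minimal normalization sketch. The field names (`gazette_id`, `award_value`, and so on) and the `Record` type are hypothetical, chosen only to show the mapping step; they are not Karmameter's actual internal schema.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """Common schema every source is normalized into (illustrative)."""
    source_id: str
    official_name: str
    amount: float
    date: str

def normalize_gazette_row(row: dict) -> Record:
    # Map one source's raw column names onto the shared schema,
    # cleaning whitespace and casing along the way.
    return Record(
        source_id=row["gazette_id"],
        official_name=row["name"].strip().title(),
        amount=float(row["award_value"]),
        date=row["published_on"],
    )

raw = {"gazette_id": "GZ-1042", "name": "  jane doe ",
       "award_value": "12500.00", "published_on": "2024-03-01"}
print(normalize_gazette_row(raw))
```

Each source type (PDF disclosure, gazette, portal export) would get its own normalizer, so downstream verification only ever sees one record shape.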

2. Verification Process

Raw data is often messy or incomplete. We employ a multi-stage verification pipeline that combines algorithmic anomaly detection with human-in-the-loop auditing for high-stakes records.

verification_logic.py

def verify_record(record):
    # Cross-reference the record's source against our trusted list;
    # unknown sources are routed to a human auditor.
    if record.source_id not in trusted_sources:
        flag_for_human_review(record)
        return "PENDING_REVIEW"

    # Check for statistical anomalies in spending (over 3x the average).
    if record.amount > (avg_spend * 3.0):
        record.anomaly_score = "HIGH"

    return "VERIFIED"

  • Entity Resolution: Merging duplicate profiles (e.g. "J. Doe" vs "John Doe").
  • Anomaly Detection: Flagging budget outliers that deviate by >2 standard deviations.
  • Human Audit: A team of data analysts manually reviews all flagged items before publication.
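The anomaly-detection rule above (deviation by more than 2 standard deviations) can be sketched in a few lines. This is an illustrative implementation, not our production pipeline; the function name and sample data are invented for the example.

```python
import statistics

def flag_outliers(amounts, threshold=2.0):
    """Return the indices of spending amounts that deviate from the
    mean by more than `threshold` standard deviations."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) > threshold * stdev]

spend = [100, 110, 95, 105, 98, 500]  # one obvious budget outlier
print(flag_outliers(spend))  # -> [5]
```

Flagged indices would then enter the human-audit queue rather than being published automatically.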

3. Scoring Framework

The "Karma Score" is a normalized index from 0 to 100 representing an official's adherence to fiscal responsibility and transparency mandates. It is calculated via a weighted average of three core pillars.

  • Fiscal Efficiency (40%): Budget utilization vs. project completion rates.
  • Transparency (35%): Timeliness of disclosures and public record availability.
  • Presence (25%): Attendance in council meetings and voting records.
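The weighted average described above can be expressed directly in code. This is a minimal sketch of the arithmetic only; the dictionary keys are illustrative labels, and how each pillar's 0-100 sub-score is derived is covered by the verification pipeline, not shown here.

```python
# Pillar weights from the framework above (must sum to 1.0).
WEIGHTS = {"fiscal_efficiency": 0.40, "transparency": 0.35, "presence": 0.25}

def karma_score(pillars: dict) -> float:
    """Weighted average of the three pillar sub-scores, each on a 0-100 scale."""
    return round(sum(WEIGHTS[k] * pillars[k] for k in WEIGHTS), 1)

# Example: strong attendance, weaker disclosure timeliness.
print(karma_score({"fiscal_efficiency": 80, "transparency": 60, "presence": 90}))  # -> 75.5
```

Because the weights sum to 1.0, a perfect 100 in every pillar yields a Karma Score of exactly 100.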


4. Peer Review & Feedback

We recognize that algorithms can contain implicit biases. Our scoring logic is documented and available for structured review by qualified researchers and policy experts. We regularly invite external feedback to refine our weighting mechanisms and address potential blind spots.


Suggest a Modification

If you believe a specific metric unfairly penalizes rural municipalities or has other systemic issues, please reach out via our contact form or write directly to our Data Ethics Board.