Student Performance Analytics Suite

A Python-based analytics platform that ingests student performance data across campuses and programs, runs statistical comparisons, and auto-generates branded PowerPoint presentations for leadership review, turning raw spreadsheets into presentation-ready insights.

Multi-Campus Comparison Engine
Auto-Generated Branded Presentations
Statistical Significance Testing
Screenshot coming soon

What was broken.

The institution operated across multiple campuses, online and on-ground, each with its own systems, data exports, and ways of tracking student performance. At the end of every term, program directors and academic leadership needed to understand how students were performing across all of these environments. But the data lived in isolated spreadsheets, formatted differently by each campus, with no standardized methodology for comparing results. Getting a clear picture of institutional performance meant spending hours wrangling files before a single comparison could even begin.

Leadership meetings compounded the problem. When board presentations or program reviews came around, someone had to manually pull pass rates, calculate category-level breakdowns, build charts in Excel, and paste everything into PowerPoint, slide by slide. The process took days, and the results were inconsistent. One person's version of a pass rate calculation might differ from another's. The charts looked different every quarter. There was no single source of truth, just a patchwork of spreadsheets and best guesses masquerading as institutional data.

The hours burned on manual work were bad enough. But the real cost was the decisions that weren't being made. Without reliable cross-campus comparisons or statistical validation, leadership couldn't confidently identify which programs were underperforming, which delivery modalities produced better outcomes, or whether a dip in pass rates was a statistical anomaly or a systemic trend. The institution needed a platform that could ingest raw performance data from any campus, apply consistent analysis, and produce finished presentations without a data science team or a week of prep.

Scattered Campus Data

Performance data lived in separate systems across campuses. Each was formatted differently and exported inconsistently, making results impossible to compare without hours of manual cleanup.

No Standardized Methodology

Every person who built a report calculated pass rates differently. There was no consistent formula, no agreed-upon categories, and no way to validate that one quarter's numbers were comparable to the next.

Days of Manual Charting

Creating board-ready presentations meant days of pulling data, building charts in Excel, formatting slides in PowerPoint, and praying nobody asked for a different breakdown that would restart the entire process.

Decisions Without Data

Leadership made program-level decisions (resource allocation, curriculum changes, modality investments) based on anecdote and intuition because reliable comparative data simply didn't exist in a usable format.

How we solved it.

01

Built a Multi-Campus Ingestion Layer

Developed a Python-based data pipeline using Pandas and openpyxl that reads performance exports from every campus, online and on-ground, regardless of format inconsistencies. The system normalizes column names, standardizes grade categorizations, and merges data into a unified structure that enables direct cross-campus comparison without manual cleanup or reformatting.
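
A minimal sketch of what that normalization can look like, assuming hypothetical column aliases, grade labels, and file paths (the real pipeline handles far more variation than shown here):

```python
import pandas as pd

# Hypothetical aliases mapping each campus's column names onto one
# canonical schema. The real alias table is much larger.
COLUMN_ALIASES = {
    "student id": "student_id", "stud_id": "student_id",
    "final grade": "grade", "course grade": "grade",
    "course code": "course", "crs": "course",
}

# Hypothetical mapping from campus-specific grade labels to
# standardized outcome categories.
GRADE_CATEGORIES = {
    "A": "pass", "B": "pass", "C": "pass",
    "D": "fail", "F": "fail",
    "W": "withdraw", "WD": "withdraw",
    "I": "incomplete", "INC": "incomplete",
}

def load_campus_export(path: str, campus: str) -> pd.DataFrame:
    """Read one campus export and normalize it to the shared schema."""
    df = pd.read_excel(path, engine="openpyxl")
    df.columns = [str(c).strip().lower() for c in df.columns]
    df = df.rename(columns=COLUMN_ALIASES)
    df["outcome"] = (
        df["grade"].astype(str).str.strip().str.upper().map(GRADE_CATEGORIES)
    )
    df["campus"] = campus
    return df[["student_id", "course", "grade", "outcome", "campus"]]

# Merge every campus export into one frame for direct comparison.
performance = pd.concat(
    [
        load_campus_export("exports/main_campus.xlsx", "Main"),
        load_campus_export("exports/online.xlsx", "Online"),
    ],
    ignore_index=True,
)
```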

02

Implemented Standardized Performance Metrics

Created a consistent analytical framework that calculates pass rates by program, category, course, and modality using the same formulas every time. The engine applies uniform definitions for what counts as a pass, a fail, a withdrawal, and an incomplete. That eliminated the inconsistencies that plagued previous reports and made quarter-over-quarter comparisons actually meaningful.
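
Building on the sketch above, the heart of such a framework can be a single pass-rate function applied to every slice, so the formula literally cannot drift between reports (column names carry over from the hypothetical schema):

```python
def pass_rates(df: pd.DataFrame, by: list[str]) -> pd.DataFrame:
    """One fixed definition everywhere: pass / (pass + fail).
    Withdrawals and incompletes are counted separately rather than
    silently shifting the denominator from one report to the next."""
    counts = (
        df.pivot_table(index=by, columns="outcome", values="student_id",
                       aggfunc="count", fill_value=0)
          .reindex(columns=["pass", "fail", "withdraw", "incomplete"],
                   fill_value=0)
    )
    counts["pass_rate"] = counts["pass"] / (counts["pass"] + counts["fail"])
    return counts.reset_index()

# The same formula regardless of how the data is sliced.
by_campus = pass_rates(performance, ["campus"])
by_course = pass_rates(performance, ["campus", "course"])
```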

03

Added Statistical Testing & Trend Analysis

Integrated statistical significance testing using NumPy to determine whether differences in pass rates between campuses, programs, or terms are real or just noise. The trend analysis engine tracks performance trajectories across multiple terms, so it flags statistically significant declines before they become systemic problems and validates improvements with confidence intervals rather than wishful thinking.
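
The platform's exact tests aren't reproduced here, but a two-proportion z-test is one standard way to ask whether a shift in pass rates is real; this sketch needs nothing beyond the Python standard library:

```python
from math import erfc, sqrt

def two_proportion_z_test(pass_a: int, n_a: int, pass_b: int, n_b: int):
    """Two-sided z-test for a difference between two pass rates,
    e.g. the same program in two consecutive terms."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail of the standard normal
    return z, p_value

# Hypothetical term-over-term comparison: 412/520 vs. 385/510 passing.
z, p = two_proportion_z_test(412, 520, 385, 510)
label = "statistically significant" if p < 0.05 else "within normal variance"
print(f"z = {z:.2f}, p = {p:.4f} -> {label}")
```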

04

Automated Branded PowerPoint Generation

Used python-pptx to auto-generate complete, branded presentation decks from the analysis output. Each deck includes summary slides, campus comparison charts, program-level breakdowns, course drill-downs, trend visualizations, and takeaway slides, all formatted with institutional branding and ready to present without anyone opening PowerPoint manually.
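
A condensed sketch of that generation step; the template path, layout indices, and numbers below are placeholders, but the calls are python-pptx's actual API:

```python
from pptx import Presentation
from pptx.util import Inches
from pptx.chart.data import CategoryChartData
from pptx.enum.chart import XL_CHART_TYPE

# Hypothetical template carrying the institution's branded masters.
prs = Presentation("templates/institution_brand.pptx")

# Title slide (layout indices depend on the template in use).
title_slide = prs.slides.add_slide(prs.slide_layouts[0])
title_slide.shapes.title.text = "Term Performance Review"

# Campus comparison chart built from the analysis output.
chart_data = CategoryChartData()
chart_data.categories = ["Main", "Online", "North"]
chart_data.add_series("Pass Rate", (0.82, 0.78, 0.85))

slide = prs.slides.add_slide(prs.slide_layouts[5])
slide.shapes.add_chart(
    XL_CHART_TYPE.COLUMN_CLUSTERED,
    Inches(1), Inches(1.5), Inches(8), Inches(5),
    chart_data,
)

prs.save("term_review.pptx")
```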

Technologies Used

Python · Pandas · NumPy · python-pptx · openpyxl · Statistical Analysis · Data Visualization · PowerPoint Generation

Still spending days turning spreadsheets into board presentations?

If your institution's performance data is scattered across campus systems and every quarterly review means starting from scratch with Excel and PowerPoint, there's a faster path. Let's talk about what automated analytics and presentation generation could look like for your programs.

Start a Conversation

What it actually does.

Multi-Campus Data Comparison

Ingests performance data from online and on-ground campuses, normalizes formatting differences, and produces side-by-side comparisons that show which modalities and locations deliver stronger student outcomes.

Program-Level Performance Analysis

Breaks down pass rates, failure rates, and withdrawal patterns by individual program, so leadership can identify which programs need attention and which are outperforming institutional benchmarks.

Pass Rate Calculations by Category

Applies standardized formulas to calculate pass rates across customizable categories: general education vs. core courses, electives vs. capstones. The metrics stay consistent and comparable every reporting cycle.

Course-Level Drill-Downs

Drills past program summaries into individual course performance to find specific courses with unusually high failure rates, significant campus-to-campus variance, or declining trajectories.
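
Reusing the pass_rates() sketch from earlier, a drill-down like this is only a few lines of pandas; the thresholds below are illustrative, not the platform's actual cutoffs:

```python
course_rates = pass_rates(performance, ["course", "campus"])
per_course = course_rates.groupby("course")["pass_rate"]

summary = per_course.agg(["mean", "min", "max"])
summary["campus_spread"] = summary["max"] - summary["min"]
baseline = course_rates["pass_rate"].mean()

# Illustrative flags: well below the baseline, or wide campus variance.
flagged = summary[(summary["mean"] < baseline - 0.10) |
                  (summary["campus_spread"] > 0.15)]
print(flagged.sort_values("mean"))
```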

Auto-Generated PowerPoint Presentations

Produces complete, branded presentation decks with summary slides, comparison charts, program breakdowns, trend lines, and takeaways, formatted and ready for leadership review without manual chart-building.

Trend Analysis & Statistical Testing

Tracks performance across multiple terms with statistical significance testing to separate real declines from normal variance. Leadership can act on data instead of reacting to noise.

The numbers speak.

Data-Driven Decision Making

Leadership shifted from intuition-based program decisions to evidence-backed strategy, with statistical validation behind every recommendation presented to the board.

Days of Prep Time Eliminated

Quarterly presentation preparation that consumed days of manual charting, formatting, and slide-building was reduced to a single automated run producing a complete branded deck.

Targeted Program Improvements

Cross-campus and course-level drill-downs revealed specific areas of underperformance, leading to precise interventions instead of broad curriculum overhauls.

Consistent Reporting Standards

Standardized formulas and automated calculations eliminated the inconsistencies that made previous reports unreliable. Every metric is now comparable across campuses, terms, and reporting cycles.

What we learned.

01

The Hardest Part Isn't the Analysis; It's the Data Normalization

Every campus exported data differently. Column names varied. Grade categorizations weren't consistent. Some files had merged cells; others had hidden rows. The statistical analysis and presentation generation were straightforward engineering problems. But building a pipeline that could handle every format variation without someone babysitting it? That's where I spent 60% of the development effort. My takeaway: in institutional data projects, the ingestion layer is the product. Everything downstream is easy once the data is clean.
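
As one concrete illustration, even reading a single messy sheet defensively takes real code. Here's a simplified sketch, using openpyxl, of handling merged headers and hidden rows before pandas ever sees the data; the real ingestion layer covers many more cases:

```python
from openpyxl import load_workbook

def visible_rows(path: str):
    """Yield visible rows from an export, with merged ranges
    flattened so they don't leave None gaps in the data."""
    ws = load_workbook(path, data_only=True).active

    # Unmerge each range and copy its top-left value into every cell.
    for rng in list(ws.merged_cells.ranges):
        value = ws.cell(rng.min_row, rng.min_col).value
        ws.unmerge_cells(str(rng))
        for row in ws.iter_rows(min_row=rng.min_row, max_row=rng.max_row,
                                min_col=rng.min_col, max_col=rng.max_col):
            for cell in row:
                cell.value = value

    for idx, row in enumerate(ws.iter_rows(values_only=True), start=1):
        if ws.row_dimensions[idx].hidden:  # skip rows hidden in the export
            continue
        yield row
```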

02

Statistical Significance Changes the Conversation

Before the platform existed, a 5% drop in pass rates triggered alarm and reactive interventions. After I added significance testing, leadership discovered that many of those fluctuations were within normal variance, not real trends. Some smaller shifts that had been dismissed turned out to be statistically significant early indicators. Adding p-values and confidence intervals to the conversation didn't just improve accuracy. It fundamentally changed how leadership decided where to focus.

03

Auto-Generated Reports Get Used; Manual Reports Get Postponed

When building a quarterly presentation required days of effort, it happened reluctantly and only when absolutely required: board meetings, accreditation deadlines. Once the platform could produce a complete branded deck from raw data in seconds, leadership started requesting analyses mid-term, for ad-hoc questions, for program reviews that previously wouldn't have warranted the effort. The same data that had been sitting in spreadsheets became part of routine strategic conversations simply because the friction of turning it into a presentation disappeared.

Want this for your institution?

If your institution's performance data is scattered across campus systems and every board presentation means days of manual spreadsheet work, I've already built the platform that fixes this. Let's talk about what automated analytics, statistical testing, and branded presentation generation could look like for your programs.

No pitch. No pressure. Just a conversation about what might work.