Weddings, Rubrics, and IEPs: Falling in Love with Institutional Effectiveness

[Image: A bride and groom cake topper on a wedding cake]

This past weekend, I went to a wedding. Despite not being able to wear the shoes of my choice, I love weddings. The food, music, free drinks, atmosphere, beautiful expressions of love, and free drinks!  And even though they are all unique, every time I attend a wedding, I can’t help but think about all the weddings I’ve been to before. I think about the great ones (a live, twelve-piece band!) and the ones that had a gaffe or two (running out of drinks before the start of dinner!). If you’ve ever watched the show Four Weddings, you know exactly what I mean. In each episode, four brides attend each other’s weddings and judge them. They apply a personal but structured system to evaluate the events, scoring each one against their expectations and experiences. The show is part competition, part chaos, and 100% proof that even a celebration of love is not exempt from a good rubric.

If someone asked you to attend a wedding and evaluate it, you’d probably be pretty excited; that sounds like a relatively easy and fun job. You, too, could quickly devise a wedding-quality rubric, and you’d likely even have fun gathering evidence, scoring the rubric, analyzing your data, and (if you’re brave or silly enough) providing feedback to the lovely couple.

So why is it that if I asked you to look at your entire educational institution and evaluate it, you wouldn’t enthusiastically run to put on your black-tie attire? Sure, it’s a much bigger job than evaluating a wedding (and there aren’t free drinks), but you follow the same steps, just on a larger scale. Let’s walk down the aisle and try to fall in love with Institutional Effectiveness Plans (IEPs).

What is an IEP?

Just like how every bride has a different vision of the perfect wedding (some are barefoot on a beach, others are rigidly scheduled and immaculately styled), every accreditor and regulatory agency may have slightly different expectations for what an Institutional Effectiveness Plan (IEP) should include. Some are broad and flexible, others are as prescriptive as a seating chart.

In this blog, we’re speaking generally about IEPs, knowing that your institution’s exact approach may need to align with your specific accreditor or state agency requirements. But, broadly speaking, an IEP is a living document that describes your institution, lays out its goals for everything from student learning to financial operations, and presents data to demonstrate your institution’s quality. It doesn’t stop there, though. Perhaps most importantly, your IEP also needs to show that you are doing something with your data. Providing an action plan that describes what you will implement or change to improve your institution and its outcomes closes the loop on the assessment cycle.

To do this well, we first have to define what we’re evaluating and set clear benchmarks. That’s like planning a wedding with a vision in mind; knowing what kind of experience you’re aiming for helps guide every decision. Then comes the follow-through: collecting and analyzing the data (your version of post-wedding feedback), and using it to decide what worked and what didn’t (would you still hire that DJ?). That’s the heart of an IEP. It’s a story your institution tells, grounded in evidence, about where you are, how things are going, and how you’re actively working to get better.

What Does an IEP Need to Look At?

Let’s return to our beloved Four Weddings for a moment because it offers a pretty solid framework for thinking about IEPs (a sentence I never thought I’d write). In each episode, four brides attend one another’s weddings and rate them in four categories: dress, venue, food, and overall experience. The categories are intentionally broad, but between them they encompass just about everything about a wedding, rolling countless details into a few manageable buckets. “Venue” alone could include guest comfort, band quality, access to amenities, and friendliness of staff. This approach turns a complex event into something you can actually assess.

So, how do we know if our institution is a proverbial “amazing wedding”? As we mentioned earlier, every accreditor or reviewer might have slightly different expectations for what an Institutional Effectiveness Plan should include. But in general, many of them point to a few core areas that every institution should be evaluating regularly: academic quality, student support, infrastructure, and institutional operations.

These categories are broad and represent large swaths of a school’s operation. So, like the brilliant producers of Four Weddings, let’s break them down into more manageable pieces.

  • Academics

  • Academic Delivery (including technology or specialized equipment)

  • Student Support Services

  • Financials

  • Marketing (including Admissions)

  • Any other major operational or administrative area that might be specific to your institution

Exactly what goes into each of these categories depends on the size, scope, and mission of your institution. It wouldn’t be fair to assess a courthouse ceremony the same way you assess a 500-person black-tie shindig (formal term for fancy). Your institution might have only one or two student support representatives, whereas a large university might have ten of them and a whole department (or two) to back them up.

How Does an IEP Assess Effectiveness?

Part 1: Benchmarking

Back to the fun stuff. How does a bride/wedding critic know what’s “good” when it comes to a wedding? It starts with inspiration: Pinterest boards, bridal magazines, Instagram reels, TikTok stories, and the like. These sources help shape expectations and set a standard for what the planner wants their event to look and feel like. If the goal is a Beatrix Potter-inspired garden ceremony with vintage flair and a killer cheese board, then everything else is measured against that vision.

Unfortunately, I haven’t seen any copies of Today’s Higher Education Administrator magazine at the grocery store lately. Without magazine clippings, how do we set benchmarks for our institutions? A good starting place is your specific accreditor’s documentation; they may have established benchmarks for certain metrics, like graduation or retention rates. Data from IPEDS or the National Student Clearinghouse can also help you set your minimum or baseline standards.

Absent prescribed metrics, you have a couple of options for setting your own benchmarks. First, look at your past data to figure out what a reasonable minimum expectation might be. For example, if you are assessing how effective your career services office is, you might want to look at what percentage of your soon-to-graduate students have used their interview preparation services. If your rate over the last three years has been 39%, 47%, and 42%, then you might set a benchmark of 40% with a goal to increase that number to 50%.
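If you want to see that arithmetic spelled out, here is a minimal sketch in Python of one way to turn a few years of historical rates into a baseline benchmark and a stretch goal. The rounding rules (the average rounded down to a multiple of five for the benchmark, the historical maximum bumped up to the next multiple of five for the goal) are assumptions chosen to match the example above, not a prescribed method.

```python
# Illustrative sketch only: one way to derive a benchmark and a stretch goal
# from a few years of historical data. The rounding rules are assumptions
# for demonstration, not a prescribed methodology.

def suggest_benchmark_and_goal(past_rates, step=5):
    """Suggest a baseline benchmark (historical average rounded down to a
    multiple of `step`) and a stretch goal (the next multiple of `step`
    above the historical maximum)."""
    average = sum(past_rates) / len(past_rates)
    benchmark = (int(average) // step) * step            # round the average down
    goal = ((int(max(past_rates)) // step) + 1) * step   # go above the best year
    return benchmark, goal

# Career services example from above: interview-prep usage over three years.
past_rates = [39, 47, 42]
benchmark, goal = suggest_benchmark_and_goal(past_rates)
print(f"Historical average: {sum(past_rates) / len(past_rates):.1f}%")
print(f"Suggested benchmark: {benchmark}%  |  Stretch goal: {goal}%")
# -> Historical average: 42.7%, benchmark 40%, goal 50%
```

However you arrive at the numbers, the point is the same: the benchmark should be defensible against your own history, and the goal should stretch you past it.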

One other option is to see what you can learn by reviewing your peer institutions’ data. There might be publicly available research, or institutions might publish their own data, which can show you what is feasible for your school. Whichever approach you take, be sure to document the process you use to set your benchmarks.

Part 2: Collecting Data

The brides in Four Weddings, whether they know it or not, embark on a journey of data collection that involves both quantitative and qualitative evidence. With their metrics identified and benchmarks (expectations for minimum performance) in mind, they must collect data to establish whether something was successful. They’ll use data like the number of guests who stay until the end of the wedding, guest reviews of the food, the number of times the Cupid Shuffle was played, and whether the dress met their expectations. Each piece of data aligns with one of the categories they were assigned to judge.

While their judgment isn’t exactly scientific, it is grounded in the same principles that IEPs should use to assess performance. After all, what is the higher education version of all your wedding guests staying until the end of the wedding if not institutional retention rate? To determine what kind of data we need to collect, let’s start by looking at the two categories of data.

Quantitative data (numbers) shows what is happening. It includes metrics such as course completion rates, retention, assessment results, and satisfaction survey averages. These numbers allow institutions to see trends over time and to compare results to goals or benchmarks. We might also call this kind of data direct evidence.

Qualitative evidence (words) provides the context behind those numbers. It includes student focus groups, faculty reflections, open-ended survey responses, and peer reviews. While quantitative data can show that completion rates are improving, qualitative data might reveal that students still struggle with workload or communication. The two types of evidence complement each other. This kind of data might also be referred to as indirect evidence.

Scores on final assessments or a capstone class are frequently used as quantitative data to assess overall student learning. If you can break that data down further to see how students scored on the individual learning outcomes, you’ll be able to see where learning is strongest and where the curriculum might need revision. Student feedback on a final evaluation survey might then give you insight into the support students may have wanted or the courses they felt contributed less to their learning.
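To make that breakdown concrete, here is a minimal sketch (again in Python) of one way to group capstone scores by learning outcome and flag the outcomes that fall below a benchmark. The gradebook structure, outcome names, scores, and the 75% threshold are all hypothetical, invented purely for illustration.

```python
# Illustrative sketch: break capstone assessment scores down by learning
# outcome to see where students are strong and where revision may be needed.
# The data structure, outcome labels, scores, and threshold are hypothetical.

from collections import defaultdict

# Each record: (student_id, learning_outcome, score out of 100)
capstone_scores = [
    ("s001", "Written Communication", 68),
    ("s001", "Quantitative Reasoning", 91),
    ("s002", "Written Communication", 72),
    ("s002", "Quantitative Reasoning", 85),
    ("s003", "Written Communication", 70),
    ("s003", "Quantitative Reasoning", 88),
]

THRESHOLD = 75  # assumed benchmark for "meets expectations"

scores_by_outcome = defaultdict(list)
for _, outcome, score in capstone_scores:
    scores_by_outcome[outcome].append(score)

for outcome, scores in scores_by_outcome.items():
    avg = sum(scores) / len(scores)
    flag = "review curriculum" if avg < THRESHOLD else "on track"
    print(f"{outcome}: average {avg:.1f} ({flag})")
```

Paired with the qualitative survey comments described above, a breakdown like this points you toward the outcomes, and the parts of the curriculum behind them, that deserve a closer look.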

The key is pairing at least two pieces of data together to tell the story of your successes and the areas where you can improve. Numbers alone, without deeper analysis of how those data points reflect reality, don’t deliver the value your IEP should or offer any guidance on next steps for improvement.

Insight into Action

I’m afraid I must abandon our wedding analogy here. While the newly wedded couple might take time to reflect on their wedding, they are not going to engage in the kind of post-data-collection action planning that an IEP is designed to drive.

Most accreditors have a standard that calls for institutions to develop action plans based on what they have learned from institutional assessment. This is the point where institutional effectiveness becomes real. Data only has value when it leads to change. A strong IEP describes not just what the institution found, but what it did as a result. The most successful institutions create clear, specific action plans that link evidence to improvement.

For example, if course assessments show gaps in student writing skills, the action might involve introducing new writing modules or faculty workshops. If students report difficulty navigating the learning management system, the institution might improve its orientation or redesign course layouts. If staff surveys indicate low confidence in new technology tools, leadership might schedule targeted training sessions.

Each action should have a defined goal, a responsible party, and a way to measure success. It should also have a follow-up plan to determine whether the change produced the intended results. This is where many institutions falter. They make improvements but fail to evaluate the outcome. Accreditors expect to see a full cycle: identification, action, and follow-up.
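If it helps to picture what a defined goal, responsible party, measure of success, and follow-up plan look like side by side, here is one hypothetical action-plan entry sketched as a simple Python record. The field names and values are invented for illustration (building on the writing-skills example above); they are not an accreditor’s template.

```python
# Hypothetical action-plan entry illustrating the full cycle accreditors
# expect to see: identification, action, and follow-up. Field names and
# values are invented for illustration only.

action_plan_item = {
    "finding": "Course assessments show gaps in student writing skills",
    "action": "Add writing modules to first-year courses; hold faculty workshops",
    "goal": "Raise average writing-rubric score from 2.6 to 3.0 (4-point scale)",
    "responsible_party": "Dean of General Education",
    "measure_of_success": "Writing-rubric scores on next year's capstone projects",
    "follow_up": "Re-assess in the spring term and report results to the IE committee",
}

for field, value in action_plan_item.items():
    print(f"{field:>18}: {value}")
```

However you record it, a spreadsheet works just as well, what matters is that every action names who owns it, how success will be measured, and when you will check back.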

Institutional culture plays a major role in whether this cycle succeeds. Leaders set the tone. When presidents, provosts, and department heads talk openly about data, share results across departments, and celebrate improvements, staff and faculty begin to see the value of continuous evaluation. Over time, it stops feeling like compliance and starts feeling like pride.

Saying “I do.”

Institutional effectiveness is not about perfection. It is about continuous improvement: learning, adapting, and striving to do better each year. A well-designed Institutional Effectiveness Plan is more than a set of reports. It is a living system that connects evidence to action. It is systematic, because it follows clear processes, and ongoing, because it never stops. It examines the full institution, balances numbers with narratives, and leads to measurable improvement.

When done well, institutional effectiveness changes how a school thinks. It shifts the focus from simply meeting external requirements to truly serving students more effectively. Accreditors may set the expectations, but the real value lies in how the process strengthens every part of the institution.

At its heart, a strong IEP is your institution’s way of saying “I do”—to accountability, to continuous improvement, and to the students and communities you serve. And like any good marriage, the commitment doesn’t end with the ceremony. It’s something you show up for, work on, and grow through, year after year, ’til death (or retirement) do you part.
