Can AI Replace Your Higher Ed Consultant? A Post-Conference Reality Check


We just got back from the DEAC Annual Conference in Washington DC. Susan presented "Consultants, Contracts, and Caution Tape" (with extensive llama metaphors), and I presented "All Hands on Deck: Strategic Planning Tools for Smooth Sailing" (with extensive pirate theming). We learned, we chatted, we ate, we networked, and if you were there too, it was probably great meeting you! As unexpected as a llama or a pirate might have been at a higher ed conference, one topic was decidedly less surprising: our dear friend, Generative AI.

In sessions, in hallway conversations, over coffee, it was the inescapable topic. But it became especially unavoidable for us during Susan's Q&A, where the topic took on the form of a direct question: Can't ChatGPT basically do all of this consulting stuff now?

It's a fair question. And, as your friendly neighborhood higher ed consultants committed to authenticity at all times, it would be intellectually dishonest to dodge it. Susan answered it in the room, and now we're going to answer it here (in much more detail).

The TL;DR: No, ChatGPT (or Claude or Gemini) cannot do our job. But the longer version is more interesting, and it has implications for every institution wondering whether to lean harder on AI to draft its accreditation documents, its strategic plan, or its self-study.

What AI Actually Does Well (Credit Where It's Due)

AI has changed how we work. And by “we,” I mean both society and EduCred Services. We use it. Everyone we know uses it. It’s the best proofreader a girl could ask for. Plus, it's genuinely useful for cleaning up a clunky paragraph until it succinctly communicates the message, generating a first draft of meeting notes from a recording, summarizing a long policy document so you can find the part you actually need, brainstorming names or framings when you're stuck, and quickly answering questions like "what's the exact quote about X on page 3?"

If you're not using AI for any of this, you're probably working harder than you need to. We're not here to tell you to stop. We're here to tell you what it can't do, and why pretending it can will eventually cost you.

Why Strategic Work Can't Be Generic: A Tour of My Pirate Slide Deck

To explain why ChatGPT (and its friends) struggles with this kind of work, let me back up and walk you through what I spent 45 minutes on at the conference.

In my session, I argued that good strategic plans almost write themselves if you build them on top of four tools used in the right order: PESTLE, SWOT, TOWS, and the 4Ps. The whole point of this chain is that each tool builds on the last, so by the time you arrive at your goals, they've been pressure-tested against your external environment, your internal capacity, and your institution’s mission.

For the demo, I created a fictional institution called the Maritime Academy of Strategic Thievery, lovingly known as “MAST”. Their mission was to train students to "navigate complex aquatic environments, redistribute acquisitions with purpose, and lead with calculated audacity." Yes, it was cute. Stay with me.

Now, could ChatGPT do a PESTLE? Honestly, probably. A PESTLE is an environmental scan: observations about the political, economic, social, technological, legal, and environmental forces shaping the world your institution operates in. AI can pull from a wide pool of public information and produce something usable. The catch is that it only knows what you tell it to look at, so unless you guide it carefully, you're going to get a pile of observations that may or may not actually be relevant to your specific context. That's a real risk for institutions that don't already know what to filter out.

But let's say you get past that. You hand AI your strengths and weaknesses and ask it to build a SWOT. Here's where things start to break down. Generative AI knows what you tell it. If you say your faculty are great, that's what it will say. If you tell it your enrollment is stable, it will repeat that back to you. It cannot independently assess whether the information you provided is actually true, actually a strength, or actually meaningful in your competitive landscape. That kind of clear-eyed institutional self-evaluation is exactly why institutions have leadership in the first place. It is not (and should not be) delegable.

And then we get to the TOWS, which is where the strategic decisions live. It's where you take an external opportunity and an internal strength and figure out, specifically, what your institution should do about the intersection. This takes sound judgment, creative problem-solving, and deep familiarity with all departments working together to produce a goal that is unique to your institution.

That's how MAST ended up with goals like launching a freshwater operations concentration, pursuing licensure in inland-waterway states, and starting a practitioner-hosted podcast called Pirate Radio. None of those are obvious at first glance. None of those would come out of a generic prompt. They came from sitting with MAST's actual situation long enough to see where the threads connected, and then making creative calls about what to do at the intersections.

Strip that creative judgment out, and you're left with a strategic plan that could belong to any school (which is to say a strategic plan that doesn't actually belong to anyone).

The Bigger Problem: AI Creates Sameness

My presentation highlighted just one area where generative AI runs into trouble, but there are more.

Generative AI is, by its nature, a sameness machine. It works by predicting what should come next based on what's most statistically likely given billions of examples of similar content. That's genuinely useful for a lot of tasks, but it’s also exactly the wrong tool for any task where distinctiveness or strategy is the goal.

If every institution uses AI to write its mission statement, strategic plan, program descriptions, etc., prospective students will scroll through their search results and see the same words, the same phrasings, the same promises, over and over. We’re already starting to see this across higher education. AI has tells. It uses a certain cadence, has a fondness for tidy parallel sentence structures, and tends to reach for the same connector phrases. Between you and me, I’ve seen a lot of "comprehensive frameworks" and "robust processes" lately. Once you've read enough of it, you can spot it almost immediately. The more institutions lean on it for their public-facing language, the more their websites, catalogs, and strategic plans start to sound like echoes of one another.

Accreditors and evaluators read self-studies, narratives, and supporting documents for a living. They can spot AI cadence from across the room, and when a self-study reads like a chatbot wrote it, it raises a much more uncomfortable question: if AI wrote this, who's actually running the institution?

Students, especially Gen Z and Gen Alpha, are not actually short on options. They are short on reasons to choose any of them. When everything looks the same and sounds the same and promises the same outcomes, the only differentiator left is price. That's a race to the bottom that no institution we work with wants to run.

What a Good Consultant Actually Does With All This

Susan’s whole presentation was about what makes a consulting engagement actually worth the investment, and her answer kept coming back to one word: tailored.

In practical terms, that means a few things. We learn about your institution before we recommend anything. We sit with your mission long enough to understand what makes it specific, and we push back when something sounds like every other school's version of itself. We ask questions like, "But what does innovative actually mean at your institution?" because innovative without a definition is filler. (Susan made this exact point at the conference: every institution thinks its version of innovative is obvious. It isn't. It needs to be defined, in your words, against your actual practices.)

We also act as what Susan calls a “compliance translator”. Accreditation has its own language and your institution has its own language. A good consultant moves between the two, taking the genuinely distinctive things you do and articulating them in a way that lets evaluators see how those things meet the standards. That work requires understanding both your operations and the regulatory framework deeply enough to map one onto the other. It is not a job that pattern-matching can do for you.

And (this is one of Susan's points we feel most strongly about), a good consultant is teaching and mentoring throughout the process, not building dependency. We are not auditioning for a permanent spot on your org chart. The goal is for you to be able to speak fluently about your policies, your data, your outcomes, and your distinctiveness when the site visit happens. We just help you get there.

So, to return to the original question: can ChatGPT do all of this consulting stuff now? No, it really can’t.

The entire point of this work is to help institutions become more like themselves, not less. AI pulls toward the average. We pull toward what makes you, you. Use AI for the things it's good at. Find some trending hashtags for your social media post. Brainstorm subject lines. Comb through a stack of résumés to make sure everyone has earned a PhD. But when the question is "what makes our institution worth choosing?" or "what should our strategic goals be?" or "how do we demonstrate compliance with our daily procedures?", that work needs to be human, particular, and grounded in evidence about your unique institution.

Higher ed has too much sameness already. The institutions leading the way in ten years are the ones that get specific now. The real work of figuring out who you are and what makes your institution worth choosing requires a human doing the thinking. It doesn't have to be us. It can be your provost, your faculty, your board, or another consultant entirely. But it should be someone who knows your institution well enough to make creative calls, ask uncomfortable questions, and push back. That was the real undercurrent to the AI conversations we heard all week, and it's the response we'll keep giving. Use AI for what it's good at; trust a human (or a llama) with the rest.
