“Yes, Chef!” Putting AI in its Station in Higher Ed

[Image: Chefs moving quickly around a busy restaurant kitchen]

I sometimes have a hard time going out to dinner with people who have never worked in a restaurant. Not because they can't enjoy the experience or the food (of course they can). It's just that when you've worked behind the scenes, you know exactly how many things have to go right, and how many people have to execute their jobs perfectly, in order for a beautiful dinner to land in front of you at the perfect time and temperature.

If you’ve worked in a kitchen (or have watched The Bear, Chef, or most other cooking shows or movies), you’ve seen the brigade de cuisine system in action. It's the hierarchy of specialized roles that defines a fine dining kitchen. At the top sits the chef de cuisine, who runs the operation, sets the vision and menu, and is accountable for everything that leaves the pass. From there, responsibilities cascade down to the sous chef (who supervises the line), the chefs de partie (who run their own stations), and the commis (junior cooks who work a single station under direction). Further down is the garçon de cuisine, whose job is "preparatory and auxiliary work for support."

The brigade works because every role has a defined scope, and authority sits with the people who have (hopefully) earned it through training and experience. Efficiency is certainly part of the goal, but only inasmuch as it is in service of hospitality and excellence. A chef de cuisine doesn't ask the garçon to revise the menu, and a sous chef doesn't ask the commis to decide whether tonight's wine pairing is seasonally appropriate. And yet, lately, it feels like that's what a lot of higher education has started doing with AI. You can't open your LinkedIn feed or sit through a faculty meeting these days without someone promising that AI will write your syllabi, grade your papers, build your strategic plan, and eventually just sit in the Provost's chair while the budget quietly balances itself in the corner. Ah, the efficiency lover’s dream. But even if that were possible (it isn’t), would we really want that?

The brigade system isn't valued because it helps line cooks sling soulless calories as fast as possible into people's open mouths. It works because it helps execute the chef de cuisine's vision consistently, from the produce selection down to the most minute detail of the flatware. A higher ed program isn't valuable because it just delivers content. It's valuable because the people designing it have made deliberate choices about what their students need, how their students learn, and what kind of graduates they want to send into the world. Strip out those deliberate human choices, and you haven't built a curriculum; you’ve just made a smaller Wikipedia.

I'm not against AI in higher education, but I'm absolutely against AI in the wrong job. Kept in its station and given the right work, AI can absolutely be your institution's garçon de cuisine (and even a great one!). AI never gets a hand cramp, never complains about the heat, never misses a shift, never accidentally cuts itself and then has to wear a surgical glove that slowly fills up with blood over the course of a shift (IYKYK). Here's where it earns its keep:

  • Catching bureaucratic drift. AI is phenomenal at catching the contradictions that creep in when twelve different department heads contribute to one Strategic Plan. It can identify where the Faculty Handbook contradicts the Student Catalog on page 247, saving you from a potentially awkward conversation with a site visit team regarding SAP policies.

  • Pulling details out of long documents. Need every reference to a specific accreditation standard buried in a 300-page Self-Evaluation Report? A master list of every textbook in use across forty sections of the same course? AI can pull that material in seconds while you're still trying to open the document.

  • Generating starting-point ideas. When you need brainstorming material (ten possible names for a new certificate program, or Bloom's-compliant action verbs to consider as replacements for "understand"), AI can give you a whole swath of options to react to. Some of them will be good; some will be terrible. It takes a human who understands the institution to tell the difference between the two.

In a well-run kitchen, when a garçon shows up early, works hard, asks good questions, and produces clean, consistent work over months, the chefs notice. The garçon gets pulled aside and given small things to try. Maybe a simple sauce, a vegetable on the line, eventually, a station. Demonstrated skill in small things is taken (correctly) as a signal of latent potential for bigger things. Higher ed runs (or should run) the same way. A faculty member who teaches well, contributes to their department, mentors students, and publishes thoughtfully gets trusted with more: course design, curriculum development, program direction, maybe even department chair or dean. Competence and dedication earn responsibility.

When we watch AI do the prep work well, our tendency is to treat it the same way we would a promising garçon or faculty member: Look at it! It's so good at this, we should give it more responsibility. But the pattern that explains how humans grow into bigger and more complex work doesn't apply here, because AI doesn’t function the same way a human does.

Humans process the world top-down: we bring our entire accumulated context to every task. Every kitchen we've stood in, every conversation we've overheard, every meal we've eaten, every mistake we've made, all of it shapes how we approach the thing in front of us. But AI processes the world bottom-up. It has the prompt you've typed and the patterns from its training, and that's it. No integrated context, no internal map of how cooking works, no shared sense of what a restaurant or classroom even is. Every task starts fresh.

Think about what happens when a restaurant’s garçon de cuisine moves from chopping onions to preparing steaks. They don't start from zero. They bring knife skills they've sharpened on the onions, the rhythm of the kitchen they've absorbed during prep, every steak they've ever eaten, every time they've watched the chef de partie pull one off the grill and let it rest. They might fumble the first attempt, but they have a framework for noticing the fumble and adjusting. No two steaks are exactly the same. One ribeye is thicker than the next; one sirloin came out of the cooler ten minutes ago, another came out an hour ago. The salamander is hotter on the left side than the right. None of those differences get written down anywhere, but they are perceived in real time, by a person who has spent enough time around food to feel them. By the tenth steak, the garçon is cooking.

AI doesn't have any of that; it doesn’t even have the shared baseline of what a well-cooked (not to be confused with well-done!) steak is. AI's excellent prep work tells you only that the pattern-matching worked for the specific task you gave it. The moment you change the task, you are almost back at zero. You can certainly tweak prompts, emphasize certain aspects, and correct for certain tendencies. You can write a fifty-six-page document explaining exactly how your kitchen runs. But the variables in play are endless (any 3 Body Problem fans here?), and the prompts are not. Every steak that lands on the line is different, and every cohort that walks through your door is different. AI is a brilliant instructions-follower. It is a terrible improviser. Give it ambiguity (a question it wasn't briefed on, a situation that doesn't match the patterns in its training) in a moment that demands real-time judgment, and it will produce something that might look right but isn't actually right.

In this instance, there is no garçon getting promoted, because there is no garçon. There is only a machine that performs garçon-like tasks when told what to do, in exactly the way it was told.

The same principles apply when you develop a new course. A faculty member doesn’t start from zero. They bring years of office hours, conversations with colleagues who taught the prerequisite, memories of papers where students missed the same nuance in the instructions every semester. They might develop an assignment that misses the mark the first time, but they have a framework for noticing it flopped and adjusting, because, like steaks, no two cohorts are exactly the same. This year's freshmen came in with different gaps than last year's. The reading that landed beautifully in spring confuses everyone in the fall. The Tuesday section is quieter than Thursday's, for reasons no one can quite identify. A good curriculum developer and faculty member perceives all of this and then adjusts.

Think about what actually makes a meal (or an academic program) memorable. It isn't the efficiency of the operation, though efficiency certainly helps. It’s the synergy of every component and decision purposefully designed to create a transformative experience for the person on the receiving end. The goal is human connection. A transfer of knowledge, passion, and insight. Of course we want knowledgeable, skilled workers who can find good-paying jobs, but we want those things because we value the people, and we want them to be able to live their lives with dignity and stability. If we never actually connect with those lives, what’s it all for? If you automate the vision and intention out of higher education, why be a part of higher ed at all?

So use AI. Let it catch the contradictions in your catalog, pull the textbook list out of forty syllabi. Get the onions chopped! But keep your hands on the menu and the tasting spoon, your eyes on the room, and your unique vision for changing lives at the center of everything you do. The students who chose you chose that, not the algorithmic average of every other school they could have gone to. Decisions belong to the chefs (or the academic leadership), the humans who've spent years earning the right to make those calls. So, (to borrow a particularly applicable phrase from Gen Z) let them cook!
