by Caroline Stauss and Dan Garmat
Dan and I have been doing some fireside chats (OK, it's basically talking with Google Meet on so we can log the transcript) about trends in AI. We edited it a la the New York Times Book Review to make this feel like one of those riffy interviews and keep it simple. Bless your heart, meeting transcription!
This month's riffing comes courtesy of Harvard Business Review's recent article, "AI Doesn't Reduce Work, It Intensifies It," by Aruna Ranganathan and Xingqi Maggie Ye. They say organizations are getting more productivity out of a lot of AI-assisted tasks, like drafting, summarizing, and debugging code. These projects might succeed, but they mostly just load employees up with more tasks and less specialization, to the point that workers never feel like they can really take a break from their work.

Caroline: So is this surprising to you, as someone who wants people to augment their work with AI?
Dan: Yes and no. I've seen in my own experience that there's a kind of flow to working with AI where setting boundaries feels more difficult, so I get the blurred line between work and non-work. In the past, you could kind of clear your mind on a break, and part of the reason that worked is that clearing your mind was naturally the only thing you could do with break time. That natural boundary has gone away, and work norms basically need to update to the fact that it doesn't exist anymore.
Caroline: I also feel like the sheer ease of using AI for other tasks bleeds in. I know after this, I'm going to take a work break and make cheesy gnocchi; I need to remember my roux recipe. Okay, I'm taking a break in my mind, but I'm also [on ChatGPT] plugging in, "If I have two and a half cups of milk, how much else do I need for the roux?" Or, "Is it better to use manchego or cheddar?" There's that slippery boundary too. It's kind of like when the workforce started to use the internet: they weren't just using it for work research. It becomes integrated into your life. So is that an issue of tools or of human behavior?
Dan: Behavior is more immediately what's going on. Basically, we haven't updated; we haven't quite figured out how to work with this. There was a book I read back in high school called The Axemaker's Gift. The point is, with every technology there's some amazing thing it can do, but then there's the other side of the coin. With the invention of the axe, you have the ability to chop down your local forest. With new technologies, there's a need for new norms, essentially.

Outside of maybe academia or a highly intellectual workplace like Google, preserving your cognitive space and mental energy hasn't been a big priority. But now with AI, every workplace is super high-intensity. Practices from academia, or from Google and Microsoft at their peak, the kinds of practices that get the best from people without burning them out, have to start working a little bit differently.
Caroline: Why do you think AI expands scope instead of narrowing it?
Dan: It's so much easier to just try things. I've seen this with project management. I don't have a project management background, but I can ask the AI to design a project for me: "Let's come up with a reasonable approach. These are the things that have to be done; help me think it out." It's like getting an intern project manager. They're not amazing, but then I don't feel like I need a project manager, because I can do it myself a little bit and it's good enough for this task. There used to be a task boundary where I basically couldn't do that at all.
Caroline: There used to be other people who were the experts and you put your trust in them, whereas now generative AI can supplement whatever you think you need and makes you feel capable.
Dan: It certainly does a good job of saying, “Oh nice intuition. You’re the best project manager I’ve ever seen in my memory!”
Caroline: It convinces you, and it quietly saps away the recognition that somebody else has better perspective and experience, the actual expertise.
Dan: Another example I can think of is translation. If you had a document you needed to translate, not too long ago Google Translate would output something kind of garbage and you would need to hire someone. Now it does pretty well. It's hard to draw a plausible line anymore for when you still need the expert.
Caroline: There’s a term that comes up in the article: expectation inflation.
Dan: Expectations rising. Now the employee has access to AI assistance, so of course you can do this project management and you’ll do it well because now we can get you to do more things…
Caroline: … for the same wage.
Dan: Part of why I like this article is that it's Harvard Business Review saying part of leadership in the AI era is not taking that 19th-century attitude of pushing the burden off onto the workers.
Caroline: They talk about an "AI practice." It made me think of communities of practice, which, I have no idea what discipline that originates from (nb: anthropology), but it was basically co-opted in the education field as the professional learning community (PLC). In teaching, when tech helps you do something, there is expectation inflation. If tech can take on a task: "Okay, good, so [the teacher] better do more." Now you can have more specific data on your students. You can update grades more often, so parents have this higher expectation that they're going to know, every minute of every day, what percentage score their child has. You have all these different tools the school district has paid for, and you're expected to incorporate them into your lesson plans even if you find them unwieldy. If what you had was working before, why reinvent the wheel?
One of the only things that helped me feel like I could try something new in tech was when we had time set aside for PLCs. One teacher would specialize in, say, a behavioral management tool and teach it to the other teachers; you felt like you had support, and you wanted to buy into it more.
Do you think organizations would benefit from something like a professional learning community, time set aside for everyone to come to some sort of consensus on how to use AI and understand its updated tools?
Dan: The article suggests having an intentional AI practice. They don't actually have a strong suggestion on what that practice is or what the structure is; a learning community might be appropriate for one organization and not for another. They brought up human grounding: protecting time and space for listening and human connection. When you're working with AI, it's solo work, just you and a computer. Getting out of that space and actually talking to somebody, especially on a big project or an important output, matters for re-grounding yourself in the human social world.
Caroline: Does unmanaged AI adoption create any sort of systemic risk?
Dan: The authors point out cognitive fatigue, burnout, and weakened decision-making. Weakened decision-making could be something as simple as clicking on a phishing email.
Caroline: AI skeptics would probably have similar criticisms but for different reasons: you shouldn't outsource cognition because quality goes down. These authors, by contrast, see AI integration as making you work so hard you get decision fatigue, and that is what brings quality down. OK, last question. Is the real risk that AI replaces people, or that it changes what we expect from them?
Dan: It changes what we expect. It accelerates what we expect of people. Expectations quietly grow: if you ask someone to do something outside of their normal tasks and all of a sudden they're able to do it, you might ask them again the next time. That quiet expansion of expectations is a real risk. Adopting AI broadly comes with a responsibility to manage those expectations deliberately and make sure good people aren't burning themselves out.
For more on how leaders are thinking through AI’s impact on teams and goals, see Navigating AI’s Impact on Nonprofit Fundraising in 2026.
To explore the real human challenges in delegating tasks to AI thoughtfully, see Delegating to AI: A Small Experiment in Letting Go.
[In this interview, AI was used for magical, magical transcription, and little else.]

