Using AI Without Losing Your Mind
Slowing down with AI: reflections for people and organizations seeking clarity

I’ve been in Europe for a few days visiting old friends. Even here, the U.S. dominates the headlines. More than once, someone’s asked me: what does it feel like over there?
A collective sense of overwhelm? Or maybe it’s disorientation. The stories we relied on to make sense of our world are fraying. The news has started to blur; meaning struggles to catch up.
Qatari jet grifts, artificial refugee crises, crypto opportunists, federal funding freezes, student kidnappings, unyielding devastation in Gaza. Meanwhile, LinkedIn is feeding me the latest productivity hacks. A new AI tool by the hour. Perhaps you also feel the pressure to move faster, absorb more information, or optimize your life as the world speeds up. But clarity never comes from speed. It comes from how we think in the scarce moments of pause.
AI has been pitched as the answer to everything. Some days, it feels like for each problem AI solves, an existential one takes its place. But I’m not here to hype or to warn. Only to wonder if there is something uniquely human about how we think that AI can amplify, not replace. If AI can be a tool to light up patterns we’ve missed or challenge our blindspots.
This is a question I’ve been sitting with given our work with mission-driven teams solving mounting problems with fewer resources. Efficiency is scarcely the barrier they face; more often it’s a lack of shared vision. The goalpost shifts and the team struggles to catch up. A new political era emerges and progressive organizations scramble to find their footing.
If we look at AI’s potential beyond individual efficiencies, maybe we’ll find ways it can sharpen our ideas and help us find clarity through the chaos. But I don’t hear enough people talking about the role of discernment in using AI: knowing what strengthens your work and what’s better left behind. This piece explores how I’m thinking about AI within Worthmore’s work, and how we might use it with care to think more clearly.
I. Resist the temptation to outsource your thinking
A common misconception about AI is that it will do the hard work for you.
Yes, it can write proposals, generate code, and automate just about any task. But for many, the real challenge has never been doing the work; it’s figuring out what work matters. In many industries going through rapid evolution, knowledge workers spend much of their time deciding what’s worth doing in the first place.
Kyla Scanlon has a great essay arguing that a valuable skill in the age of AI—especially for those without physical specialties—is the ability to think critically and contextually. (Crazy that this bears repeating!)
For all the power of AI to process data and generate content, I think it struggles at the edges of nuance. Though with news of AI’s growing persuasiveness and ability to decode the human psyche, who knows how long that will be true.
LLMs may have every text ever written at their disposal, but they’re not very good, at least right now, at telling us what to do in the context of changing markets, economic conditions, and the networks we exist within. For example, ChatGPT can help you transcribe a podcast, summarize key points, and store them for reference. It can’t tell you why those things may matter within the context of your work, team, investors, board, or industry. It can’t tell you how they relate to your purpose. Your why.
I think something essential is lost when we sacrifice the “aha” moments that come from listening or reading for ourselves, from paying attention, from sharing imperfect written reflections with our teams, or from writing an essay that sharpens our thoughts. LLMs can supplement our thinking, but the more we outsource it for speed, the harder thinking on our own becomes.
II. Using AI to build your treasure map
Sangeet Paul Choudary argued that AI startups selling enterprise tools should sell treasure maps, not shovels. He’s making a similar argument: that the tools being sold to solve task-based problems will only help organizations get so far. What they really need are treasure maps. Guides to tell them where to dig in the first place.
Through our work, we’ve partnered with teams of all sizes, from big foundations to startup social enterprises and funds. Nine times out of ten, we’re approached to work on one thing, only to discover that the team’s real issue is hiding somewhere else.
We’ve worked with tech founders on their pre-launch business plan and fundraising strategy, only to discover misalignment between the founders’ fundamental interpretations of the product. We’ve designed a new service for a philanthropic intermediary and found that the team’s culture and capacity were the real constraints.
These are human problems. Messy, layered, and context-dependent. They don’t map neatly onto a to-do list. AI definitely can't resolve them on its own—which is why we still need deep thinkers who understand how to build great teams—but it can help us explore them as we build our treasure maps.
III. A few experiments: AI as a thought partner
We humans thrive in community. But with our work increasingly siloed and remote, many people don’t have a team around at all times, myself included. AI can’t fill the gap left by working through a tough problem with a colleague, but it might help us reconstruct parts of that exchange or simulate the perspectives of others. Here are a few things I’ve tried out.
If you’re building something with a partner or co-founder… Create a questionnaire (What’s my vision for this project? What does success look like?) and feed each of your responses into ChatGPT. Ask it to reflect on where your visions diverge. Ask it to provide questions that can help you discuss and align your strategies IRL. Ask it to explore assumptions underlying your partner’s ideas.
If your team has a strategic plan with KPIs… Feed it into an LLM and ask it where your KPIs may be too ambitious or fall short of your stated strategy. Ask it to think about your strategy from the vantage point of an investor, or a community stakeholder—what might they say? Ask Perplexity to conduct research for a market analysis of others in your field, and then ask Claude to re-assess your strategy in that light. Finally, ask it to summarize everything you learned in a communication for your team, written in your voice (which it can learn with enough training).
If you process out loud or on the go... Experiment with dictating thoughts with Wispr Flow and feed your reflections into a custom GPT that you can chat with about yourself. Use it as a critic, counterpoint, or strategist.
If your Slack conversations are feeling crunchy… Maybe you’re not gelling with a colleague or misunderstanding what they’re asking for. Use AI to decode tone, ask for distilled meaning, or get feedback on your communication style.
There are certainly ways to level this up with custom GPTs, but you can go far with a bit of creative, exploratory prompting.
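For readers who want to see what "creative, exploratory prompting" can look like in practice, here is a minimal sketch of the co-founder alignment exercise above. The function name `build_alignment_prompt`, the partner labels, and the sample answers are all my own illustrations, not a prescribed tool: it simply merges two partners' questionnaire responses into one prompt you could paste into ChatGPT or send through an LLM API of your choice.

```python
# Hypothetical helper: interleave two partners' questionnaire answers
# into a single prompt asking the model to surface divergences.

QUESTIONS = [
    "What's my vision for this project?",
    "What does success look like?",
]

def build_alignment_prompt(questions, answers_a, answers_b,
                           name_a="Partner A", name_b="Partner B"):
    """Pair each question with both partners' answers, then ask the
    model for divergences, assumptions, and discussion questions."""
    lines = ["Two co-founders answered the same questionnaire separately."]
    for q, a, b in zip(questions, answers_a, answers_b):
        lines.append(f"\nQuestion: {q}")
        lines.append(f"{name_a}: {a}")
        lines.append(f"{name_b}: {b}")
    lines.append(
        "\nWhere do their visions diverge? What assumptions underlie each "
        "answer? Suggest three questions they should discuss in person."
    )
    return "\n".join(lines)

prompt = build_alignment_prompt(
    QUESTIONS,
    ["A platform co-op owned by local producers.", "Break even in year two."],
    ["A venture-scale marketplace.", "A Series A within 18 months."],
)
print(prompt)
```

The point isn't the code; it's that assembling both perspectives into one artifact, before asking for analysis, keeps the comparison honest.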
IV. Intentional, not automatic
LLMs often make things up. We tend to catch the obvious lies, but more subtle errors, like ChatGPT misreading a meeting transcript or Claude summarizing a team decision out of context, are harder to spot. These aren’t just minor annoyances; they shape decisions. And as we rely on AI more, our critical thinking muscles risk atrophy.
So instead of rushing to automate, I’ve tried to slow down. For me, using AI well has meant staying in the loop, not just for accuracy, but for injecting my own discernment. It means treating these tools less like an answer engine and more like a junior colleague: capable, fast, but in need of some oversight.
Learn multi-step prompting… Just like you'd guide a teammate through early drafts before finalizing a proposal or plan, you can walk an LLM through a process step-by-step. Ask it to pause and check in after each section before continuing its analysis. If you’re using a tool like Deep Research to answer a complicated question, start with the blueprint of an idea, then the outline, then the full plan, refining as you go.
Reflect on the specifics of what you want… Maybe you want a graduate-level analysis written in simple prose. Maybe you want concise bullet points and takeaways for a meeting that capture the nuance of each participant’s contributions. Maybe you’re annoyed by all the cheery emojis ChatGPT spits out. Adding this context takes time but pays off in precise outputs.
Teach the tools about what you want… While the capabilities of AI memory are always changing and vary by tool, it’s helpful to explore how it works. In ChatGPT, you can ask it to remember your tone, formatting, or working style across sessions. This may change in the future (OpenAI wants to develop frictionless “AI systems that get to know you over your life”) but for now, it’s worth building awareness of how each tool can be trained.
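The multi-step prompting idea above can be sketched as code, too. This is an assumption-laden illustration, not a tool: `staged_prompts` is a name I made up, and the blueprint → outline → full plan stages mirror the progression described earlier. Each generated prompt ends with an explicit check-in so you stay in the loop between steps rather than receiving one unreviewable wall of output.

```python
# Sketch of multi-step prompting: stage the work and ask the model
# to pause for review between steps instead of one giant request.

STAGES = ["blueprint", "outline", "full plan"]

def staged_prompts(task, stages=STAGES):
    """Return one prompt per stage, each ending with an explicit
    check-in so the model stops for feedback before continuing."""
    prompts = []
    for i, stage in enumerate(stages, start=1):
        prompts.append(
            f"Step {i} of {len(stages)}: produce only the {stage} for the "
            f"task below, then stop and wait for my feedback before "
            f"continuing.\n\nTask: {task}"
        )
    return prompts

for p in staged_prompts("Draft a 12-month fundraising strategy"):
    print(p, end="\n\n---\n\n")
```

You can run the same pattern entirely inside a chat window; the code just makes the structure of the conversation visible.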
V. Final thoughts
The workplace is only going to get weirder. Students are cheating their way through college with AI, through no fault of their own. Most people I know in their twenties and thirties spend their entire days with a split browser opened to an LLM. But it was never going to be enough to automate our meeting notes or shave a few hours off a task. Time saved doesn’t matter if we don’t know how to use it.
The question persists: how do we make meaning in a world with more information and rapid economic transformation, with or without AI? How do we slow down enough to know where we’re going?
These tools are built to mirror us. They learn from what we say, how we say it, and sell that reflection back to us. That might seem harmless, but when it comes to our work, it risks reinforcing what we already believe. More and more, I see that the danger isn’t that AI will replace us. It’s that it will leave us incurious and passive.
The most “valuable” work in this moment won’t come from those who generate the most content or automate the most tasks. It will come from those who hone their discernment. Or from those who string together meaning and ask better questions to help their organizations deftly navigate change.
I’ll continue to experiment with AI in our work, but selectively. Trying not to let it flatten my sense of direction. Because it’s easy to be seduced by smooth answers, even when the most important question is still unresolved.
Because finding clarity is the work. And it’s still ours to do.