Stuck in a rut with Claude and ChatGPT
By: Sarah Pita
Some plateaus are better than others. Pictured, the Colorado Plateau.
Do you ever feel like you have plateaued in your AI use? Sure, tools like Claude and ChatGPT are constantly unveiling new features and capabilities, but day to day, if you’re not an AI enthusiast, it’s easy to stick with what you already know once you’ve learned the basics of generative AI.
The plateau happens for lots of reasons, personal and professional, ranging from a general lack of inspiration to privacy concerns to limits on using personal AI tools at work.
Today, we’re going to talk about how to get unstuck—through a personal AI use audit. To do this, you basically ask your generative AI tool of choice to analyze what you use it for the most—and the least. Then ask for some suggestions on new things to try.
I tried this in both Claude and ChatGPT. I was surprised at how different the approach and results were for each tool. I left with some good ideas from each.
This exercise is designed for somebody who mostly uses the chat-based features of generative AI. Coding won’t show up here unless you have a chat about it—that lives in a different place on both platforms.
I enjoyed the iterative process of creating the prompts, and you can certainly iterate on them yourself. I usually don’t share my prompts verbatim, but in this case I’m making an exception, because the wording matters. So if you’d rather just copy and paste, go right ahead.
Claude is designed to be able to see and reference your past chats, conversation by conversation. To request an audit, simply explain that you would like it to create a breakdown of your last 20 or 30 chats by primary usage across the categories of writing/editing; thinking/planning; research; building/creating; learning; coding; data analysis; and personal/other. Then ask it to make some recommendations on new things to try in the neglected areas.
When I did this, I discovered that I was using Claude at least as much for thought partnership as for support for writing projects. I also learned that I am using it for no data analysis whatsoever, which Claude considered shocking and even tragic. There’s a good reason—I’m a development director, and I don’t have an enterprise Claude subscription at work. I can’t give any aspect of my organization’s data to Claude for analysis without an unacceptable privacy/security risk. But it does highlight a major friction point that emerges when employees don’t have enterprise subscriptions.
Here’s the prompt I used:
Claude Prompt
I'm doing an audit of my own AI usage patterns. Can you look through my recent chat history and pull my last 20 conversations? For each one, read enough to understand what I was actually doing — not just the topic, but the nature of the task. Then categorize each chat into one of these buckets:
· Writing/editing — I asked you to draft, revise, or review written content
· Research — I asked you to find, synthesize, or verify information
· Building/creating — I asked you to make something functional (a tool, app, spreadsheet, image, etc.)
· Strategic thinking/planning — I used you as a thought partner for decisions or planning
· Data/analysis — I asked you to work with data, numbers, or files
· Learning — I was trying to understand how a tool or concept works
· Personal/other — anything that doesn't fit the above
Pick the primary category based on what the conversation mostly accomplished, not just how it started.
Give me a summary table with counts and percentages, then the chat-by-chat breakdown.
Finally, based on the patterns you see, suggest 2-3 specific use cases I'm not currently exploring that would be relevant to my work or life. Be concrete — don't just name categories, describe what I'd actually do.
By contrast, ChatGPT does not “see” individual chats, so when I tried copying and pasting my Claude prompt, it didn’t work. In fact, it took some back and forth to get what I wanted.
My first, vaguely written attempt resulted in one of those popular “tell me about me” ChatGPT things that go viral from time to time—gratifyingly flattering and praise-y. But not what I was looking for.
So I tried again, asking ChatGPT to write a prompt (which it described as “a little stern”) that would analyze my usage patterns and report on my dominant use patterns, and then suggest three experiments to try. I then opened another chat and ran it.
The results were much more detailed, and generally affirmed Claude’s observations about writing and thought partnership. I was intrigued by one of the recommendations: don’t just write things, try using ChatGPT for a multimodal support task such as creating slides or other visuals.
ChatGPT Prompt
Based only on the prior interactions, memory, and other context available to you in this chat, analyze how I have historically used ChatGPT. This is a usage audit, not a personality reading. Do not flatter me, psychoanalyze me, or make broad claims that go beyond the evidence. Stay focused on observable patterns in how I use ChatGPT. I want you to identify:
1. What I most often use ChatGPT for across categories such as writing, thought partnership, research, building/creating, coding, learning, image generation, organization, planning, or anything else that appears in my history.
2. What functional role ChatGPT tends to play in each category. For example: brainstorming, clarifying, outlining, revising, drafting, summarizing, troubleshooting, synthesizing, coaching, decision support, or production.
3. At what stage of work I tend to bring ChatGPT in: early ideation, middle-stage processing, late-stage revision, final polish, or other stages.
4. What I seem to do only occasionally.
5. What I seem not to do much, or not at all, with ChatGPT.
6. What notable absences, underused modes, or recurring blind spots show up in my patterns of use.
For each pattern, label it as one of the following:
· strong pattern
· occasional pattern
· weak inference / uncertain
Be concrete. Use specific examples or paraphrased recurring patterns from past chats where possible.
Include a separate section titled: What I am not doing with ChatGPT. In that section, identify tasks, workflow stages, or use cases that appear absent, avoided, or underused.
Also include a short section titled: Limits of this analysis. In that section, explain what information you do and do not actually have access to, so the audit does not overclaim.
End with:
· a brief summary of my dominant uses
· a brief summary of my underused or missing uses
· 3 practical experiments I could try if I wanted to broaden or rebalance how I use ChatGPT
Conclusion: For me, creating experiments like auditing my own AI use is itself part of how I fight the plateau, and this exercise gave me some genuinely useful insights. For example, I’ve grown used to the constraint of being unable to engage in a whole category of AI use, data analysis, at work. The Claude audit highlighted that this is actually a major gap, an area where an enterprise-level tool could really help me. I did not expect my Claude prompt to fail outright in ChatGPT, and I appreciated working out a process that did work there; it yielded deeper insights and a couple of great suggestions I have already tried out.
Sometimes, to get out of a rut, it helps to take a couple of steps back and actually look at the rut. Give it a try and let me know what you think!
About the Author:
Sarah Pita is a fundraising professional with 25+ years of experience and a dynamic speaker who makes AI approachable and immediately useful for nonprofit teams. She leads practical, engaging trainings and workshops on using AI for fundraising and has presented at groups such as Women In Development NYC and at the AFP GPC Leading Philanthropy conference, among others. Sarah is currently Director of Development at the Center for Independence of the Disabled, New York.
Interested in an AI workshop or training? Contact Sarah here.