an AI deep dive
my thoughts on it, how I use it (and what I won't use it for), and who I'm learning from
AI. Tell me - how do those two letters make you feel?
I’ve been spending an hour most days studying AI - staying on top of the news, tinkering around on Claude, and reading and learning.
Intelligence itself is inherently subjective, each person having their own definition. What we view as intelligent - analysis, a perspective based on our lived experience, open-mindedness and curiosity - feels deeply human and impossible to replicate.
Artificial intelligence, at its core, is pattern recognition and prediction based on all of the data it’s been fed. LLMs (large language models) and the tools built on them (Claude, ChatGPT, Gemini) are what we’re usually talking about when we talk about AI.
Everyone has an opinion on AI, from evangelists to oppositionists. I certainly have mine, and am always seeking to learn more (for personal use and to understand this industry). It’s increasingly difficult to filter the hot takes and opinions from deeper reporting or longform pieces, and this is my attempt at that.
This is a hybrid #5SmartReads and swipe file. My hope is that you learn something that you didn’t already know. That’s it.
Full transparency - I do use Claude and a few other apps, in ways that have genuinely helped me. I’ve also pulled back from these tools in some areas of my life, and have zero desire to use them in my writing and other work I’m protective of. I dig into them here.
Posts like these (swipe files, deep dives) are typically for my paid subscribers. Given how important this conversation is, I’ve removed the paywall from this particular post. If you value my work and content like this, I would be grateful if you upgraded your subscription (I’ve priced my Substack as low as the platform will let me). My deepest thanks to the 600+ subscribers who help me invest this time into Hyphenated by Hitha.
#5SmartReads on AI
The AI Doc: Or How I Became an Apocaloptimist
This documentary is outstanding. It’s the “what the heck is even AI, what are all its issues, what are its benefits?” primer, told in an honest, vulnerable, and beautiful way. The filmmaker takes you on his own confusing journey of understanding the technology that is poised to transform our lives (for the better, the worse, the extreme), and examines it honestly. I jotted pages of notes as I watched it, and ultimately came away with more questions than I had when I first started watching. For me, that’s the mark of something intelligent - I learned a lot, and I’m left wanting to learn more. It’s also just a beautiful piece of art.

The Future of Everything is Lies, I Guess (Aphyr)
My friend Chelsea shared this resource in this video, and I was gripped the second I read “This is bullshit about bullshit machines, and I mean it…I am not trying to make nuanced, accurate predictions, but to trace the potential risks and benefits at play.”
It’s a very long read (one I worked through over a couple of weeks), and I find Kyle’s bluntness and opinions refreshing. It’s a stark reminder that there’s no such thing as objective reporting, that this topic is as biased as it gets, AND that it’s important to read the perspectives of those you don’t fully agree with.

Anthropic Is at War With Itself (The Atlantic)
Claude is my main AI tool, and I’ve talked about it here and in more depth here. I was never a big ChatGPT user, but decided to play around on Claude more after reading Karen Hao’s Empire of AI (profiled here).
Anthropic was founded by former leaders of OpenAI’s safety group, and on the surface, the company’s values mirror my own. But there’s no such thing as an ethical enterprise, particularly one that’s nearing a $1T valuation (a nearly 10X jump from its last round). This article does a great job of parsing the conflicting words and work of Anthropic.

Is using AI unethical? (Risky Women)
The conversations around ”women using AI” have largely been dominated by the Mel Robbins/Reese Witherspoon commentary and the skewering of it all.
I agree with some of the points, and I want to highlight that both the criticism and the case made by Mel/Reese/etc. are rooted in privilege that many folks don’t have. Rachel Rodgers’ case tackles all the well-documented risks and drains of using AI, and outlines the cost of not using it. This is a thoughtful, underreported essay that deserves the attention we’ve been giving others.

The AI Collapse is Coming. Here is Why I am Optimistic (Shae O.)
Shae O. is one of my favorite researchers and thinkers on AI right now, and this is a hopeful, progressive prediction about the business of AI as it currently operates, and about what we should build and grow from its ashes. She is a brilliant voice in this space, and I’ve learned a lot from her Critical Thinking Skills Guide and AI for Humanists and Beginners (and want to start building my own local AI to offload my Claude use).
Here are some bonus reads/resources that I found interesting and informative:
The Critical Thinking Lab (Upasna Gautam)
The Girlbossification of AI (The Cut)
With the RAISE Act, New York Aligns With California on Frontier AI Laws (Emissary | Carnegie Endowment)
Using AI responsibly means knowing when not to use it (The Conversation)
How I Use AI
In the spectrum of AI evangelist to vehement opposer, I’m somewhere in the middle. I use a handful of tools for very specific reasons, and purposefully subscribe to the lowest tiers to act as a check on my usage.
Monologue - this voice dictation tool is my most-used app these days. I yap my texts, emails, Substack posts, and meeting recaps every moment I can (and my carpal tunnel has significantly eased up since I’m not typing nearly as much). I opted for Monologue over Wispr for its data privacy rules. While this app has been a game changer for me, it has truly transformed my father’s life. His Parkinson’s has made it difficult for him to type for longer stretches of time, and he’ll often make his point in Telugu rather than English. This app allows him to dictate his emails, technical reports, and patents with an ease I don’t think he’s ever had.
Claude - I won’t lie, Claude has been absolutely transformational for my life and my work. It’s helped me fill various roles - chief of staff at work, project organizer and data analyst in my content business, accountability coach to help me take care of myself, and emotional de-escalator.
Notion - I had Claude help me set up sub-agents + databases in Notion, so all of my yaps have a home and can be referred to at any given time. The best way to set this up is to tell Claude what you need, refine the scope, and then ask for step-by-step instructions (one step at a time) to set it up. I briefly shared how I did it here.
Superhuman - Superhuman has been my email client for 8 years now. I don’t use a ton of their AI features (suggested replies and follow-ups are the extent of it), but I swear by their split inbox (different inboxes within the screen, based on your needs), snippets (your own prewritten drafts), and the built-in calendar/availability sharing. My link gets you a free month trial.
Copilot - for work, as we’re on Microsoft already.
Every Claude project or app I’ve created exists for the same reason: I couldn’t maintain the thing I needed (usually a recurring practice, task, or routine). Here are the AI projects I’m actively using:
Accountability partner: this was my first Claude project, and the one that’s actually changed my life for the better. The full post outlines how I got started and kept up with it for the first couple of months. Over time, I refined my daily check-ins (once in the morning, once in the evening), had Claude create a skills file so the project remembers who I am and my general goals (I update the skills file every month or so), and created an app to input my daily check-ins and analyze the wellness metrics I care about over time.
This tool has improved my health in every way - physical, mental, and emotional - and during one of the hardest stretches of my life. I got a little shit for it here, and I honestly don’t care.
I will also state that it is a complement to personalized care. I’ll start a chat in this project to give me a weekly summary I can send to my therapist, or to prepare a workout/nutrition/sleep summary for my doctor. It’s a both/and for me.

Project organizer: I’m planting the seeds (to borrow Neha Ruch’s words) for the next chapter of Hyphenated by Hitha, and have a harebrained idea come to me nearly every single day. I have a Hyphenated project where I’ll start a new chat for each idea and ask Claude to transcribe the idea and have me answer these questions about it (priority, time required, cost investment, team investment, why). Claude will then summarize my idea and import it into a Notion database, where I can refer to it when I’m ready. I have similar workflows for content ideas (for Instagram and Substack), the multi-hyphenate book I’ve been promising my agent for years (finally working on a sample chapter!), and resurrecting a beloved project I had to pause.
Editorial analyst: I pop my Instagram and Substack analytics in Claude every week and ask it to analyze them for the week and over the quarter. I use Claude’s analysis to build out my own content plan for the upcoming week.
Chief of staff: we have a very small, very lean team at work (and my rockstar head of operations manages our cash flow phenomenally). I created the following copilots to help me with specific tasks: meeting minutes (drafted from meeting transcriptions), 1:1 check-ins with my team, investor relations, and board management. It’s pretty minimal, but it helps me stay on top of all of these roles without forgetting a specific detail from a meeting, or why we made a particular decision.
I use Copilot for this, as our company uses Office365 and everything stays in that ecosystem.

App “developer”: I vibe-coded apps (and a homepage) to support me in some very specific areas where I needed it - co-regulating my kids and myself, staying connected with my husband, and tracking my health across all the metrics that matter most to me. These apps have actually supported me in these ways, and I continue to use them every day.
I do not use AI to write (at most, I’ll have Claude do a light copy edit of a transcription), nor do I use it for content ideas or scripts. That’s all still me, and it always will be.
Here are a few self-imposed restrictions I’ve put on my Claude usage:
I have a Claude Pro account (recently downgraded from Max), and that meets most of my daily needs. If I’m updating my apps, I’ll purchase some extra usage/tokens, but that’s a once-a-quarter purchase.
I find that Claude is good about ending the conversation when you’ve gotten the answer, but I’ve instructed mine to do this even more ruthlessly (“if Hitha keeps asking you follow up questions but she has the next thing she needs to do identified, end the conversation and tell her to return once she’s finished the task” is in all of my skills files in every project).
I use Haiku or Sonnet (Claude’s “lighter” models) because my skills files are so detailed. These models use less energy than Opus or lengthy conversations.
I’m turning it over to you. What are your honest, brutal thoughts on AI? If you use it, how? If you refuse to use it, I’d love to know why. If you could outsource anything to AI, what would it be?




