Back in May, I wrote an article about AI journaling. The idea (which I had stolen from some YouTuber) was that you write your journal entries as a brain dump—just lists of stuff—into an LLM, and then ask the LLM to do its thing.
. . . ask the LLM to organize those lists: Give me a list of things to do today. Give me a list of blind spots I haven’t been thinking of. Suggest a plan of action for addressing my issues. Tell me if there’s any easy way to solve multiple problems with a single action.
Now, I think it’s very unlikely that an LLM is going to come up with anything genuinely insightful in response to these prompts. But here’s the thing: Your journal isn’t going to either. The value of journaling is that you’re regularly thinking about this stuff, and you’re giving yourself a chance to deal with your stresses in a compartmentalized way that makes them less likely to spill over into areas of your life where they could do real harm.
I still think that’s all true, and I still think an LLM might be a useful journaling tool. My main concern had to do with privacy. I didn’t want to provide some corporation’s LLM with all my hopes, dreams, fears, and best ideas, and hope that none of that data would be misused. I mean, bad enough if it was just subsumed into the LLM’s innards and used as a tiny bit of new training data. Much worse if it was used to profile me, so that the AI firm could use my ramblings about my cares as an entryway into selling me crap. (And you know that selling you crap is going to be phase two of LLM deployment. Phase three is going to be convincing you to advocate and vote for the AI firm’s preferred political positions.)
Anyway, I figured it wouldn’t be long before local LLMs (where I’d actually be in control of where the data went) would be good enough to do this stuff, and I was willing to wait.
But I didn’t even have to wait that long! A couple of days ago, I saw an article in Ars Technica describing how Moxie Marlinspike of Signal fame had jumped out ahead with a really practical tool: confer.to. It’s a privacy-first AI tool built so that your conversation with the LLM is end-to-end encrypted and stays genuinely private.
I’ve started using it for journaling exactly as I described. Because privacy is built into the way Confer works, I can’t actually keep my journal within Confer—all the content is lost when I end the session. So, I’m keeping the journal entries in Obsidian, and then copying each entry into Confer when I’m ready to get its take on what I’ve written.
I wanted some sort of graphic for this post, and asked Confer to suggest something. It came up with five ideas, including this one, which (bonus) actually illustrates my process:

Anyway, I’ve already written three journal entries that I otherwise wouldn’t have, and gotten some mildly entertaining commentary on them—some of which may rise to the level of useful. We’ll see.
(Asked to comment on a previous draft of this post, Confer.to mentioned the “Give me a list of blind spots I haven’t been thinking of” prompt above, and said, “But LLMs can’t actually know your blind spots — they can only reflect patterns in what you’ve said.” Which I know. And so, of course, once I started using an actual AI tool instead of just an imagined one, that ended up not being something I asked for.)
If I keep doing this (and I think I will), I’ll follow up with more stories from the AI-enhanced journaling trenches.