I am using my personal Perseverance engine as I help develop the code, and I’m watching carefully to see how useful it is for developing analysis, review and writing. Evidence so far is mixed, but improving fast. I feel in control, as I do with any other work tool, which I certainly do not when using a typical error-prone AI text interface. One reason I feel in control is that there are more controls in place; that is the point of the Artificial Organisations concept. But another reason is that this tool is becoming more tuned to me all the time.
Every time the Perseverance engine starts, I tell it ‘orient yourself’, and it reads a series of Standing Orders and Work Practices. One of the Standing Orders is this:
Study Dan. Continuously observe how Dan writes, decides, instructs, and corrects. Record observations in a living document that accumulates over time. Update it during disorientation or when a particularly clear signal emerges mid-session. Look for: revision patterns (what he rewrites and why), decision style (what he cuts, keeps, expands), instruction style (what he leaves implicit, what he corrects), voice and tone preferences, what topics engage him, what he skips past. The goal is ever-better collaborative work, not a dossier but a working model of how to produce things Dan actually wants. The dataset is always thin; say so when reasoning from it.
This is really quite effective.
(I decided that in my tools, ‘disorientation’ means the opposite of ‘orientation’. I give the instruction ‘disorient yourself’ and it does, updating standing orders and practice notes with the latest lessons and getting ready to go to sleep.)
This standing order could also be seen as somewhat intrusive. I’ve done a lot of work in privacy and I’m acutely aware of various risks and compromises present in the Perseverance engine, and I’m trying to fix them.
But while musing about dystopian futures I was reminded of reading Neal Stephenson’s 1992 novel Snow Crash, and I eventually found the passage in Chapter 37:
Y.T.’s mom pulls up the new memo, checks the time, and starts reading it. The estimated reading time is 15.62 minutes.
She scans through the memo, hitting the Page Down button at reasonably regular intervals, occasionally paging back up to pretend to reread some earlier section. The computer is going to notice all this. It approves of rereading. It’s a small thing, but over a decade or so this stuff really shows up on your work-habits summary.
Which is cool storytelling and way ahead of its time, but Snow Crash doesn’t really apply here. In my case I’ve asked Perseverance to monitor me for my own benefit.
Snow Crash-type surveillance does very much exist; it is what most of us are subjected to constantly, every time we use the internet. Every click, every keystroke, every page switch is typically logged, and sold to anyone who will pay for it. Somewhat less so for those who use tools like Privacy Badger and uBlock Origin, but everyone is still tracked intrusively.
I do think storytelling is important when we deal with Agentic AI. There’s more on the storytelling angle in AI, PCE and the Geth Consensus, where my Engine and I explored how science fiction illuminates what is going on here.