On Ramit Sethi, the writer behind “I Will Teach You To Be Rich” (How’s that for a name?)
“[…] Willpower is a depleting resource. We should focus on setting up systems, automating behaviors we want to happen.”
Sethi’s self-help shtick involves getting twenty- and thirtysomethings—of whom he has 500,000 online followers—to put as much of their financial life on autopilot as possible, setting up automatic deductions for 401(k) plans, student-loan repayments, credit card bills. He even came up with a way to force himself to go to the gym.
Instead of cleaning up my physical space, I’ve been cleaning up my behavior a little bit (example: I now love Mint), proceduralizing things. Paperwork cripples me, so I set up as much automation as possible.
In this post, I'm trying to sync up the vocabularies and ideas of a couple of threads around this idea of behavioral proceduralization.
—
Herbert Simon's Sciences of the Artificial is part of my list of canon books. I first encountered Simon's thought in high school, and was lucky enough to take a seminar course on Human Problem Solving at Carnegie Mellon, where Simon did much of his work. Simon saw humans as information processing systems. We have sensory organs (input) and motor organs (output), working memory (short term), long term memory, and the capability to employ external memory. Short term memory is limited in space but easy to retrieve from. Long term memory was thought to have effectively unlimited storage (with the hazard of decay over time), but longer retrieval times. External memory refers to information codified into artifacts: interfaces or written notes, for instance. Information held in external memory, including functional/rhetorical affordances, spares the user that much retrieval and processing effort.
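To make those storage/retrieval tradeoffs concrete, here's a toy sketch (my own framing in Python, not anything from Simon or Newell; the capacities and costs are illustrative guesses, not measured values):

```python
# Toy model of the three memory stores and their tradeoffs (illustrative numbers only).
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    name: str
    capacity: float          # chunks it can hold (float('inf') = effectively unbounded)
    retrieval_cost: float    # relative effort to pull an item back out
    decays: bool = False     # whether items fade if not rehearsed
    items: list = field(default_factory=list)

    def store(self, item):
        if len(self.items) >= self.capacity:
            self.items.pop(0)                    # a full store crowds out the oldest chunk
        self.items.append(item)

working_memory  = MemoryStore("working",   capacity=7,            retrieval_cost=1)
long_term       = MemoryStore("long-term", capacity=float("inf"), retrieval_cost=10, decays=True)
external_memory = MemoryStore("external",  capacity=float("inf"), retrieval_cost=3)  # notes, interfaces

# A well-designed artifact (good affordances) shifts load from the costly internal
# stores onto cheap-to-read external memory.
```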
In Simon's model (co-created with Allen Newell), a problem's presentation is called the task environment. (For an example "problem," it might be easiest to imagine a mathematical or spatial task, like the Tower of Hanoi puzzle.) The problem solver's mental model of the task environment is called the problem space. A problem space is all of the possible states of a situation between the initial state and the goal state. Operators are potential actions that would bring about a new state; the solver uses them to navigate the problem space (and/or the task environment itself, if it's inexpensive enough to do so). Naturally, even slightly complicated problems warrant the use of heuristics: cognitive shortcuts employed instead of algorithmically/deliberatively reasoning through the problem, in order to reduce processing time and energy during the search for a path between the initial and goal states.
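As a concrete sketch of that vocabulary (my own toy code, not Newell and Simon's), here is the Tower of Hanoi framed as an initial state, a goal state, and operators, with a brute-force search through the problem space standing in for exactly the kind of exhaustive work heuristics exist to avoid:

```python
# Problem space vocabulary as code: states, operators, and a search for a path
# from the initial state to the goal state.
from collections import deque

def solve(initial, is_goal, operators):
    """Breadth-first search through the problem space; returns a list of operator names."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for name, apply_op in operators(state):        # operators legal in this state
            nxt = apply_op(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Tower of Hanoi with 3 disks: a state is a tuple of three peg tuples, bottom disk first.
def hanoi_operators(state):
    ops = []
    for i, src in enumerate(state):
        if not src:
            continue
        for j, dst in enumerate(state):
            if i != j and (not dst or dst[-1] > src[-1]):   # only a smaller disk onto a larger one
                def move(s, i=i, j=j):
                    pegs = [list(p) for p in s]
                    pegs[j].append(pegs[i].pop())
                    return tuple(tuple(p) for p in pegs)
                ops.append((f"move peg {i} -> peg {j}", move))
    return ops

start = ((3, 2, 1), (), ())
goal  = ((), (), (3, 2, 1))
print(solve(start, lambda s: s == goal, hanoi_operators))   # the optimal 7-move path
```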
–
Linking to Pragmatism: The problem solver's mental model of a problem does not need to be a perfect description of the task environment in order to be effective. What's more, operations between states don't even need to be well understood in order to be functionally useful. Heuristics necessarily eliminate alternatives without necessarily explaining them, in order to simplify the search process. Hill Climbing is one popular heuristic: take whichever available operator seems to move you closest to the goal, and repeat. Means-Ends Analysis is a more complex heuristic: compare the current state to the goal, identify the biggest difference, and choose an operator that reduces it. Most of the time, Simon contends, decision-making is done through descriptive modeling instead of normative modeling: satisficing, or "meeting the standards," is an easier point at which to declare the search a success than exhausting the problem space in pursuit of the optimal solution. Two isomorphic tasks with different representations may take very different amounts of time to solve, depending on their memory requirements and how easy it is to identify meaningful operators (pdf). Operators can be developed through direct instruction, through trial-and-error, and by analogy to another task. Design concepts such as natural mapping can demonstrate the utility of smart task environments.
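Here's a minimal sketch of the satisficing idea (the toy objective and the "good enough" threshold are mine): hill climbing greedily follows whichever neighboring state scores best, and the satisficing threshold lets the search declare success without ever confirming it found the optimum.

```python
# Hill climbing with a satisficing stop rule: accept "good enough" instead of optimal.
def hill_climb(state, neighbors, score, good_enough):
    while score(state) < good_enough:              # satisficing: stop once the standard is met
        best = max(neighbors(state), key=score)
        if score(best) <= score(state):            # local maximum: the heuristic gives up here
            break
        state = best
    return state

# Toy problem: find an integer x that makes -(x - 37)**2 acceptably large.
score     = lambda x: -(x - 37) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(0, neighbors, score, good_enough=-4))   # returns 35: close enough, not the optimum 37
```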
Partially because of my exposure to these two sets of ideas at the same time, I'm sure, I've long seen philosophical pragmatism in Simon's conception. It can be read as a fallibilistic, metaphysically pluralistic, instrumentalist view of how people contend with 'things' and 'actions'. One's understanding of the world doesn't need to match it exactly; one's folk understanding just needs to be usable enough to act upon in order to make decisions effectively.
I also tend to couple Simon's thought with Douglas Hofstadter's arguments about analogy (or, for similar reasons, Dennett's argument about "cranes and skyhooks" to explain/predict phenomena). Creativity and abductive processes can be accounted for even in Simon's more mechanistic metaphors. I tried to sketch out this idea before, because when I talk about these ideas the objection tends to be that these views are highly reductive. While I won't go to the mat for any of these thinkers' ideas, I do think 'greedy reductionism' is not the strongest indictment.
–
Linking to Tempo: I first read Tempo maybe 2ish years ago, in college. I appreciated it as a different take on decision-making than my academic exposure, one that treated decision-making as continuous instead of discrete, and as an exploration angled in a more phenomenological way than the more constrained, more detached ways I was accustomed to. There are two touch-points between Simon's thought and concepts in Tempo whose vocabularies I tend to use interchangeably.
The Clockless Clock: External Mental Models are environments constructed to adhere to a mental model (relating to Simon's External Memory, information available outside of internal memory to act with/upon, plus the affordance talk I'm always bringing up). These External Mental Models are created by codification (imagining a representation of a concept that can, in theory, be physically created) and embedding (constructing an artifact out of the codified model). A field-flow complex is a description of the interplay between fields (these physical arrangements of external mental models) and flows (observable behavior resulting from human interaction within a field). Venkat's presentation of these concepts is not tied to discrete decisions the way Simon's is; instead, Tempo is more about flow and subjective experience: emotion, energy, inclination, rhythm.
Universal Tactics [Updated by "Annealing the Tactical Pattern Stack"]: Venkat identifies four decision patterns: Reactive, Opportunistic, Deliberative, and Procedural.
Reactive patterns have the advantage of quick processing. Reactions are decided from suggestions in the immediate environment, almost like an enthymeme. Working memory and cached thoughts are the main drivers of reactive patterns, which are generally characterized as "System 1" thinking. Reactive patterns are often about adjusting energy and social momentum quickly.
Deliberative patterns come from prediction and inference. Deliberation requires an internal representation of the environment and expectations about how to change it. Long term memory and "System 2" thinking are employed in constructing a deliberative decision.
Opportunistic patterns are improvisational. More than one environment or script is usually involved, and the decider must navigate both or transition from one script/task to another in order to achieve some new goal presented by local conditions. If reactive patterns are enthymematic, deliberative patterns are inductive and opportunism is abductive. Opportunism requires more processing than reactive patterns, but the possible states in the problem space are not held as clearly in mind as in a successful deliberative pattern. It's a decision pattern that is more situational than planned.
Procedural patterns are not driven consciously; they are instead the result of behaviors that are either completely learned or completely externalized to the environment, and thus require almost no attention. Quoth Tempo's blog: "This layer contains most of the data informing behavior, and is embodied in ways that represent optimization for efficient execution, not comprehension, appreciation or awareness. It is what AI people call 'frame information.'" Field-flow complexes create procedural decision patterns.
In the linked blog post, Venkat offers a "stack" with procedural patterns at the bottom, interfacing with the "world"; deliberative patterns on top of procedural; and both opportunistic and reactive patterns on top of deliberative, combined into an "integrative ritual" (I sketch a toy version of this stack below, after the quoted passage).
The stack as I’ve drawn it is similar to the natural architecture of the brain (fast, more unconscious inner feedback loops enveloped by slower more conscious, and more open loops, with feedforward). It is also similar to what is known as the “subsumption” architecture, a common high-level way to organize robots. It is also the usual way to construct abstraction hierarchies in computers.
Why is this stack so common?
It is because you have to think about two aspects of the information in a behavior: the cost of handling it, and its predictability. High-cost, high-predictability information gravitates (or sinks) to the procedural layers. Low-predictability behaviors, whether or not they are costly, necessarily have to live near the top of the stack. A good system has very few things being handled by the top and a lot being handled by the bottom. If things are flipped, you are living a very behaviorally expensive life.
There is really no low-cost way to handle truly unpredictable things. But you can evolve to anticipate more and more (so you can make things more predictable as you learn). Your meta-cognition can become more efficient as well, so that the process of setting up a stack and “turning it on” in a new environment takes less time, each time (this is adaptability basically).
The procedural layer is something most of us aren’t good at hacking. Some people become extreme creatures of habit, and eliminate or block out environmental information flows that might destabilize the procedural layer.
Others are good at hacking the layer, but get so addicted to it that they never stabilize a procedural pattern long enough to use it effectively to sustain activity at the higher layers (higher is “more distant from the environment” in the diagram).
In OODA loop terms, the idea of getting “inside the tempo” of an opponent generally relies on hacking the procedural layer.
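To tie Venkat's stack back to Simon's vocabulary, here's a minimal sketch (my own construction in Python, not code from Tempo; the events and responses are invented) of the four patterns as a subsumption-style dispatcher: procedural at the bottom touching the world, deliberative above it, opportunistic and reactive on top, with each event absorbed by the lowest layer that can handle it.

```python
# Toy tactical pattern stack: a well-run stack handles most events at the bottom
# and lets only the unpredictable stuff climb to the expensive upper layers.
PROCEDURAL = {                       # fully learned / externalized: near-zero attention
    "rent_due":  "auto-pay from checking",
    "paycheck":  "auto-deduct 401(k), sweep the rest to savings",
    "gym_day":   "run the pre-committed routine",
}

PLANS = {"tax_season": "work through the checklist drafted last year"}   # deliberative models

def decide(event, context):
    if event in PROCEDURAL:                          # bottom layer: habit and automation
        return ("procedural", PROCEDURAL[event])
    if event in PLANS:                               # deliberative: slow, model-based ("System 2")
        return ("deliberative", PLANS[event])
    if context.get("adjacent_task"):                 # opportunistic: fold it into what's already open
        return ("opportunistic", f"handle {event!r} while doing {context['adjacent_task']!r}")
    return ("reactive", f"improvise a quick response to {event!r}")      # top: fast, "System 1"

print(decide("rent_due", {}))                                            # absorbed at the bottom
print(decide("surprise_phone_call", {"adjacent_task": "email triage"}))  # handled opportunistically
print(decide("fire_alarm", {}))                                          # unpredictable: handled at the top
```

The Sethi-style automation at the start of the post is just deliberate work to push more events into that bottom dictionary.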
It's the procedural stack that maintains order when institutions fail, that generates habits, that reinforces 'invisible' cultural privileges and imbues social meaning on top of denotative meaning. It's a layer that, at this point, is heavily synthetic. Arguably, codification and embedding could become less and less metaphorical as terms for how we construct field-flow complexes in the future.
Out of time/energy. More later.