Commoncog This Week
This week's Commoncog piece is free; next week's piece will be members-only.
How Experts Sensemake — Every few years I stumble into, and then publish, an important, load-bearing piece that completely changes the way I think about business, cognition, or investing. Previous pieces in this category include Lia Dibello's mental model of business expertise (which changed Commoncog's direction shortly afterwards and, years later, was taught in the legendary Security Analysis class at Columbia Business School); Becoming Data Driven, From First Principles (in many ways the culmination of the entire Becoming Data Driven in Business series); and the piece on Cognitive Flexibility Theory (which led to the creation of Commoncog's case library, and to our repositioning).
I believe this week's essay is another one of those.
This is Part 2 of a short series on sensemaking; specifically, it is framed around sensemaking AI. In Part 1 ('How to Make Sense of AI') I introduced a simple method to sensemake AI, but warned that it was not enough. That essay did quite well: it was shared rather widely, and one friend printed it out on paper and gifted copies to others (thanks, Keith!).
Part 2 will not be as viral, I think, because its message is more complex. To explain why the method I presented in Part 1 is not enough, we need to talk about the best theory of sensemaking we currently have. We will examine how sensemaking actually works and what experts do differently from novices, and only then can we talk about how to improve.
(Also, unlike that first essay, this piece will benefit investors.)
This research is 19 years old. The theory is called the Data-Frame Theory of Sensemaking, first published in 2007, and the research behind it was funded by the US military. If you take a step back, you can easily imagine why the US military would be interested in a theory of sensemaking: a huge part of intelligence gathering and warfighting is making sense of ambiguous information under conditions of extreme uncertainty. And so, quoting from this week's essay:
... what is true for sensemaking in war is true also for sensemaking in business and in investing. The sensemaking process that skilled warfighters use in battle is the same one that a business leader uses when deciding what to do when faced with a new competitive threat. It is the same process that an investor uses when coming up with an investment thesis for a specific company (or when the investor decides that a previous thesis has been invalidated). And it is the same process that technology leaders and engineering managers must use when faced with a revolutionary new technology.
It's actually even more load-bearing than that. Sensemaking turns out to be the bit of tacit knowledge that matters. It underpins expertise. It explains how insight generation occurs.
This is a long piece, clocking in at 11k words. I've tried to edit it down, but the main reason it takes up so many words is that I have had to lay out the implications in a way that someone with no experience with the psych literature will understand. (The original Data-Frame paper makes assertions that are the equivalent of academic bombshells, but unless you are familiar with the information processing perspective on cognition vs the ecological perspective on cognition, the references will just go whoosh over your head. I'll give you a single example: if you take the Data-Frame theory seriously, you will come to the conclusion that confirmation bias is fake.)
Needless to say, I think this piece is important, and it will likely change the way you think about your own cognition forever.
I think you'll find it useful.
Note: members may download a cleaned-up version (in both PDF and ePub formats) of the original Data-Frame paper here. Members may also leave comments at the bottom of the essay. Watch the forum for other updates regarding the Data-Frame theory — it's likely we'll want to use it when sensemaking AI, or when talking about the Case Library.
💡 The Commoncog Membership Program is like an ongoing MBA, for a fraction of the price. Get full access to members-only articles, a rich and growing case library, plus an exclusive, members-only forum.
|
Member Discussions
The Commoncog members-only forum is a private place for sensemaking on business and markets.
Here are a couple of members-only discussions I'd like to draw attention to:
- Jack Dorsey and Block layoffs and restructuring — A member argues that AI and better coding tools have given executives confidence to run leaner teams, which matters more than any claimed AI-driven restructuring at Block. Other members push back on Dorsey as a believable source.
- SpaceX IPO and "Nasdaq's Shame" — A member works through the math on whether NASDAQ's proposed rule changes would force passive index funds to buy overpriced SpaceX stock at IPO.
- Is owning your own AI model a 'not not'?
- AI got the blame for the Iran school bombing. The truth is far more worrying — choice quote from a member post: "[...] was cancelled last year by Hegseth because he said he wanted 'killer AI not safe AI'."
- Commoncog-y Programming Workshop Ideas — does anyone have recommendations for how to spice up a programming workshop?
- OpenAI pivoting to business — members discuss the OpenAI pivot, and draw comparisons to lay-person and investor impressions of Claude vs ChatGPT.
- The 'X' is nothing, the 'X-ing' is everything — I push a member to publish their thesis that when you automate knowledge work away, you also lose the org learning that the knowledge work might give you.
- In the David Bessis Q&A thread, members debate Simone Weil's criticism of Descartes' shift from geometry to algebra, with one member arguing this represents a move from embodied understanding to abstract calculation that conflicts with Bessis's endorsement of Descartes's approach.
- How Steve Jobs Learnt Process Control / Quality — some discussion on a good FT piece by Patrick McGee, of Apple in China fame.
- Startup Punditry's 25 Years of Failure — We discuss Jerry Neumann's latest essay for Colossus.
- Bad AI Field Reports — "A recent paper by some researchers at Stanford found an effect which is making me question how far I can rely on benchmarks for my own personal calibration and sensemaking."
- In the AI Field Reports thread, a member says that the thread has become an incredible source for improving their workflows, and shares that they've created a /scan command that reads posts and suggests improvements to existing dev practices. Also in the thread: Anthropic has decided to change pricing; how an AI security consulting firm adopted AI; the Shopify CEO's autoresearch pull request will never be merged nor closed; a member who is an exec coach shares a skill they created for their own work; and a bunch of field reports about teams producing AI code with minimal human code review.
- Review of The Irrational Decision by Ben Recht.
Note that you'll have to be logged in as a member to view many of these threads. You may log in here.
|
Elsewhere On The Web
The Odd Little Book All Founders Should Read on Selling Their Company — Absolutely brilliant. Choice quote:
What should have happened instead? Roizen’s answer is what he calls the Partner Big Idea (PBI). The mechanics of building a PBI are more involved than I’ll go into here — read the book — but the core principle is this: the deal has to become their idea, not yours.
The investor presentation was the original sin. It accidentally signaled that Alpha was for sale, which put GiantCo in evaluation mode rather than strategy-building mode. What Alpha needed wasn’t a buyer to evaluate it. It needed a champion within GiantCo — ideally that GM who missed the meeting — to develop a strategic vision that Alpha was necessary to execute. Not “Alpha is an interesting acquisition target” but “here’s a thing that we need that we can’t build without Alpha.”
Building that requires a totally different set of behaviors. It means getting to the right person quickly — the GM or product leader whose roadmap would actually change — and not spending lots of time with Corpdev. It means asking more questions than you answer. It means leaving the story incomplete enough that the other side has room to build it with you. Incompleteness, in this context, is a feature. It gives the champion something to build and own.
If you want to sell your company, don't pitch it. Seduction works, not sales.
Zero-Degree-of-Freedom LLM Coding using Executable Oracles — This is linked in this week's essay above, but it's mostly notable in the context of a specific frame: "it's possible to produce software with minimal-to-no human code review". This article is a data point as to how such teams might accomplish this, and what they're doing with their harness engineering.
Vulnerability Research is Cooked — Thomas Ptacek with a bit of a field report (and a bit of editorialising; apply the appropriate filters). The only reason I'm linking to this is that a friend on one of the Big Tech security teams reports that it's accurate and already happening. Also related to the Anthropic Mythos announcement.
|
Writing expository essays about complex ideas takes more out of me than usual, now that I'm a dad. I apologise for not being able to publish this last week.
I hope this email finds you safe, sane and healthy.
I'll see you next week.
Warmly,
Cedric
|