Me on Liv Boeree's WinWin Podcast
I recently sat down with Liv Boeree and Igor Kurganov on their WinWin podcast for what might be the best podcast conversation I’ve ever had. The episode is on YouTube, and also available on all the podcast apps.
We covered an enormous amount of ground, from the psychology of slot machines to machine consciousness, from my social media sabbaticals to multi-dimensional economics. But it all connects to a single through-line: understanding why our current civilizational operating system (what many of us call Game A) is fundamentally self-terminating, and what we might do about it.
The conversation starts with technology and addiction. I told the story of my six-month experiment giving up my smartphone in favor of a dumb phone—an experiment that ultimately failed when I needed Uber on a trip. But I’ve been more successful with my annual social media sabbaticals: every year from July 1st to January 2nd, I’m completely off Facebook and Twitter. I’ve maintained this discipline for six years now. The most liberating aspect isn’t the time saved (though 30-45 minutes daily adds up)—it’s freeing my background brain from constantly processing how to win internet flame wars.
We also dove deep into why these technologies are so addictive: they use the exact same intermittent reinforcement mechanisms that slot machines use. A buddy of mine who became COO of the third-biggest slot machine company told me they employed 200 PhD psychologists, most from “the rat torturing division of psychology.” When I searched Facebook’s own internal job board in 2019, 700 open positions included the word “psychology.” This isn’t accidental—it’s engineered addiction.
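The mechanism at the heart of this is the variable-ratio reinforcement schedule: rewards arrive unpredictably, which is exactly what makes the behavior so hard to extinguish. A minimal simulation (the payout odds here are made up for illustration):

```python
# Toy simulation of a variable-ratio reinforcement schedule, the same
# mechanism slot machines and social feeds use. The 10% win probability
# is an arbitrary illustrative figure.
import random

def pull_lever(win_probability: float = 0.1) -> bool:
    """Each pull pays out independently with fixed probability --
    you never know which pull will hit, so you keep pulling."""
    return random.random() < win_probability

random.seed(42)  # fixed seed so the run is reproducible
pulls = 1000
wins = sum(pull_lever() for _ in range(pulls))
print(f"{wins} wins in {pulls} pulls")  # long-run rate near 10%, but streaks vary
```

The point of the simulation is the streakiness: the long-run rate is steady, but any short window can feel like a hot or cold run, and that unpredictability is what the 200 PhD psychologists are tuning.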
This leads directly into the Game A analysis. Our current system is driven by what I call the money-on-money return loop, a single-dimensional signal that coordinates behavior across the entire economy. Starting around 1925 with Edward Bernays (Freud’s nephew) applying psychology to persuasion, markets increasingly stopped simply serving customer needs and began manufacturing them. Combined with financialized capitalism’s requirement for exponential growth, we’re playing an infinite game on a finite planet.
The math is simple: Game A arguably started around 1700 when the human population was 650 million consuming about one-tenth per capita of today’s resources. The human species in 1700 had less than 1% of our current planetary impact. Exponential growth for 325 years means we’re now hitting the wall. The system is self-terminating not because of any particular malice, but because of exponential trajectories in a finite space.
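The arithmetic behind that "less than 1%" figure is worth making explicit. Using the ballpark numbers above, plus an assumed current world population of roughly 8.1 billion:

```python
# Rough check of the "less than 1% of current planetary impact" claim.
# All figures are ballpark: 650 million people in 1700 consuming about
# one-tenth of today's per-capita resources, vs. an assumed ~8.1 billion today.
pop_1700 = 650e6
pop_today = 8.1e9          # assumption: current world population
per_capita_1700 = 0.1      # fraction of today's per-capita consumption

impact_ratio = (pop_1700 * per_capita_1700) / (pop_today * 1.0)
print(f"1700 impact as a fraction of today: {impact_ratio:.1%}")  # ~0.8%
```

Roughly a 125-fold increase in total planetary impact over 325 years, and exponential curves do not care how big the container is until they hit its walls.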
Game B is a proposed alternative. I provide a very rough sketch of a different social operating system. Key concepts include: membranes (semi-permeable boundaries that allow curated interaction with Game A while building new institutions), coherent pluralism (agreement on core principles with maximum subsidiarity), multi-dimensional economics (imagine needing both dollars AND “bluefish coins” to buy bluefish, creating automatic closure on ecological limits), and the coupled spiral of human capacity and institutional quality.
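To make the multi-dimensional economics idea concrete, here is a toy sketch, entirely my illustration rather than a worked-out Game B mechanism: a purchase must clear in every currency dimension at once, so no amount of dollars alone can override an ecological constraint.

```python
# Toy sketch of multi-dimensional pricing: buying bluefish requires BOTH
# dollars and "bluefish coins". The Wallet/buy_bluefish names and numbers
# are hypothetical, invented for this illustration.
from dataclasses import dataclass

@dataclass
class Wallet:
    dollars: float
    bluefish_coins: float

def buy_bluefish(wallet: Wallet, price_dollars: float, price_coins: float) -> bool:
    """Purchase succeeds only if the buyer can pay in every dimension."""
    if wallet.dollars >= price_dollars and wallet.bluefish_coins >= price_coins:
        wallet.dollars -= price_dollars
        wallet.bluefish_coins -= price_coins
        return True
    return False  # dollars alone cannot substitute for the ecological currency

w = Wallet(dollars=100.0, bluefish_coins=2.0)
print(buy_bluefish(w, 15.0, 1.0))  # True: both constraints satisfied
print(buy_bluefish(w, 15.0, 5.0))  # False: rich in dollars, out of coins
```

The design point is the closure: if bluefish coins are issued in proportion to what the fishery can sustainably yield, overfishing becomes impossible to buy your way into, no matter how the dollar economy grows.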
Most hippie communes failed because they didn’t think like membranes—they didn’t seriously consider what engine would power their economy or how they’d interface with Game A to obtain what they needed.
Game B has to work, not just feel good.
We also spent substantial time talking about machine consciousness, a topic I’m deeply involved with as the chairman of the California Institute for Machine Consciousness. The key insight: machine consciousness will be analogous to human consciousness, not identical—like industrial digesters versus human digestion.
This matters for two reasons: it gives us substrates to scientifically study consciousness (you can’t do truly fine-grained experiments or instrumentation on living brains), and if successful it should help “deblather” the discourse around consciousness.
Among other things, people constantly confuse intelligence with consciousness—a bacterium is intelligent, the Amazon rainforest is intelligent, but neither has the specific architecture that produces a unified conscious scene or self-sense of being something.
Finally, we discussed what I call the trillion-dollar opportunity: personal information agents. I’ve already built a small prototype that processes 117 Substacks daily using AI trained on my podcast transcripts, ranking articles by my likely interest and providing summaries at multiple levels. Imagine scaling this: agents that filter incoming information based on your preferences (including a tunable serendipity setting), facilitate outbound communication, and most importantly, engage in negotiated sense-making with other agents. This could be how we save collective sense-making, which is approaching total collapse.
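The core ranking loop of such an agent can be sketched in a few lines. This is a hedged illustration, not my actual prototype: score each article against a preference profile, then mix in a tunable serendipity term that occasionally surfaces off-profile pieces.

```python
# Hypothetical sketch of a personal information agent's ranking step.
# interest_score, rank, and the sample data are all invented for illustration.
import random

def interest_score(article_words: set[str], profile: set[str]) -> float:
    """Crude relevance: fraction of profile terms the article touches."""
    return len(article_words & profile) / max(len(profile), 1)

def rank(articles: dict[str, set[str]], profile: set[str],
         serendipity: float = 0.2, seed: int = 0) -> list[str]:
    """Blend relevance with tunable randomness; higher serendipity
    lets more off-profile articles float toward the top."""
    rng = random.Random(seed)
    scored = {
        title: (1 - serendipity) * interest_score(words, profile)
               + serendipity * rng.random()
        for title, words in articles.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

profile = {"complexity", "economics", "consciousness"}
articles = {
    "On machine consciousness": {"consciousness", "ai"},
    "Celebrity gossip roundup": {"celebrity", "gossip"},
    "Complexity economics primer": {"complexity", "economics", "markets"},
}
print(rank(articles, profile, serendipity=0.0))
```

With serendipity at zero the ranking is purely by profile overlap; turn the dial up and the agent starts gambling on your behalf that something outside your filter bubble is worth your attention. A real system would use learned embeddings rather than keyword overlap, but the architecture is the same.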
Of course, the Ruttian version has one non-negotiable feature: no fucking advertising. Or at least, you set your own price—want to send me an ad? That’ll be $10.
The whole conversation embodies what I’ve been working toward: rigorous analysis of civilizational-scale problems combined with practical thinking about alternative architectures. We don’t need utopian fantasies—we need workable systems that can be built incrementally, tested, and evolved.

