Hello discoverers!

Not a day passes without another AI think piece. I’ve mostly trained myself to scroll past them – the prophecies and confident predictions built on speculation. Last week I shared an O’Reilly piece because it offered something rare: a sober assessment grounded in how technology actually evolves, not how we fear it might.

That said, I’m not entirely immune to the more philosophical vision pieces. I try to read them like speculative fiction – thought experiments that provoke rather than pronouncements to believe. They’re useful for the questions they raise, not the answers they claim.

Peter Adam Boeckel’s recent essay falls into this category. A designer and futurist, Boeckel makes plenty of assumptions about AI’s trajectory. His central argument is that the real threat of AI isn’t job loss – it’s the displacement of purpose itself, that psychological scaffolding we’ve hung our sense of self upon.

“Purpose is not lost when a person stops working; it is lost when the work stops needing the person. … We are not defending competence but significance.”

He’s probably right, though work isn’t always our primary source of meaning. Family, community, faith, care work – these have always anchored us, often more deeply than any job.

For me, the essay’s strongest section is on education. Here Boeckel offers a future that feels (sort of) hopeful:

“If automation dismantles the architecture of work, education must become the architecture of meaning. The challenge is no longer how to prepare people for jobs that may soon vanish, but how to prepare them for a life where purpose is not delivered by employment.”

“A system can simulate empathy; a teacher can model it. What future education requires is not less technology, but more intentional humanity. The teacher of tomorrow will not compete with machines on knowledge, but on presence – on the ability to awaken curiosity, to hold silence, to provoke reflection.”

This is where I agree: as knowledge becomes infinitely accessible, physical presence becomes scarce – a privilege, even. “The live moment, once ordinary, will become a premium product: an education not delivered, but experienced.”

It’s already happening: the return to in-person workshops, social gatherings, live performances – all the things that can’t be streamed or optimised. They resist scaling because presence is the point.

More broadly, what bothers me about essays like this is the constant whiff of technological inevitability. By framing AI’s impact as civilisational and consciousness-altering, these vision pieces make resistance feel futile. Who argues with evolution? But this isn’t evolution – it’s decisions made by a handful of corporations with extraordinary capital and influence. The future Boeckel describes isn’t arriving on its own; it’s being actively designed by companies with specific incentives that rarely align with the contemplative, wisdom-centred education he imagines.

The risk is that these grand philosophical narratives become cover for continued privatisation and corporate control. We get sold the promise of transformation while the actual infrastructure – the algorithms, the data, the compute – remains firmly in the hands of a few.

So, do we need more essays imagining ‘new architectures of meaning’? There’s genuine transformation happening, for sure. But most AI think pieces sidestep the boring, near-term levers that actually give us some agency over how technology unfolds – labour standards, data governance, antitrust enforcement, policy interventions. The question isn’t whether AI will change us, but whether we’ll fight for any say in how it does.

And now to this week’s discoveries.

– Kai