Philippe FUNK

The Ubuntu Podcast Series

Society & Culture · Technology


Hello 2026


Chapter 1

INTRO

Jason Miller

A warm hello and thank you for tuning in in 2026. Best of luck and health to you. Let's start 2026 with a soft hum you hear under the piano—that's how 2025 started, right? A kind of digital promise, all hope and hush, like tech could finally heal more than it breaks. But, y'know, by June, it felt like every headline was pivoting between wild optimism and this sharp, almost... skeptical hesitation. And not just in AI, though that was huge. Quantum, fintech, geopolitics—felt like the ground moved under all of it. Philippe, did you feel that tension—between breakthrough and burnout?

Philippe Funk

Completely, Jason. I kept coming back to this sense—almost déjà vu—that every leap forward landed us at exactly the same ethical edge. Especially with AI. There was all this celebration: agents running portfolios, agentic workflows, even autonomous negotiation. And then... cyber attacks on Post Luxembourg and Luxtrust. I don't know about you, but it makes me feel unsafe. Hackers can disrupt and totally freeze our Western societies—except now machines could, in theory, set the dominoes falling without any human to tip them over...

Jason Miller

Yeah! And—okay, tangent for a sec—I kept thinking about those moments where the machines, not people, are being handed judgment calls. And not just assistive stuff, but full-on decision-making, autonomous, “Okay, you’re in charge, Agent X.” Honestly, it made last year feel less like progress and more like standing on a familiar ledge with all-new questions about who’s really steering this ride. That’s what we’ll unpack today: fractures and frontiers—this round of changes, and whether we’re the ones in control, or just along for the ride.

Chapter 2

THE AGE OF CONFIDENCE AND DOUBT

Philippe Funk

You know, I called 2025 the year of "AI adolescence." Everyone was convinced these multi-agent systems were "grown up"—they could run digital banks, negotiate contracts, patch security holes... But the confidence, Jason, was wild. Because right behind it came mass hallucinations, model bias, and, let's be honest, some very expensive errors, like the ones that hit the Big 4 consulting firms. Everyone wanted to automate, but nobody wanted to admit when the system "thought" itself off a cliff.

Jason Miller

Oh man, the accountability gap, right? When an autonomous agent buys a portfolio of junk bonds or flags the wrong patient in a hospital, who do you call? Like, who gets fired? The engineer, the model architect, the regulator? It was like watching people play hot potato with blame, hoping maybe the algorithm itself would take responsibility if you trained it just right—which, uh, as we know, is not how ethics works.

Philippe Funk

Totally. It's almost like—well, last episode we talked about the myth of code neutrality. Here it was, on display, right? Confidence built up, in public and in the boardroom, and as soon as it cracked, doubt swept in so fast nobody could predict the damage. I guess... we left 2025 still wondering, "Can you be confident in a system you'll never fully understand?"

Jason Miller

I hope we'll find out that we can, in the near future.

Chapter 3

GEOPOLITICS REWRITTEN BY CODE

Jason Miller

Here’s something I keep chewing on: geopolitics didn’t just change in 2025—it got rewritten in code. We stopped talking about energy wars; now it’s all about data routes, quantum supremacy, compute sovereignty. National security strategies are measured by who controls the processors, the AI stack, the encrypted comms threads. That silent shift? It’s the kind of thing we’ll look back on the way we talk about oil—except it’s “compute,” not crude, at the wheel.

Philippe Funk

Jason—I want to push on that. In Europe, even as the U.S.-China rivalry amped up and quantum computing reached civilian deployments, we found ourselves scrambling to join “quantum-safe” alliances, but also exposed by energy costs and supply chains. Policy ambition turned into coalition-building, because no single player—nation or bloc—could “own” AI or quantum. These federated treaties were built less on trust and more on mutual fear of algorithmic escalation. But you saw tech vulnerabilities inside finance—if the model goes haywire, it’s not just money at risk, but national stability.

Jason Miller

And what’s wild is, by the end of 2025, you suddenly saw world powers—yeah, East versus West—but also this unexpected unity around the idea that, “Hey, unchecked intelligence, whether synthetic or human, is an existential risk we all share.” It’s not just who “wins”—it’s asking if the whole game is rigged for unintended consequences if we don’t set boundaries, if we don’t actually cooperate. Mmm. Where was I going—oh, right, it wasn’t just regulation—it was this slow realization we’re all on the same, fragile server rack.

Philippe Funk

And yet the trust deficit’s wider than ever, especially as climate, trade, and technology all intersect. COP30, G20, even emergency consultations around finance—they became moments where countries that can move fastest—coalitions of the willing—set the direction. But it’s not just about regulation; it’s also about admitting that perfect coordination will never happen. The real frontier’s whether we can manage “good enough” collaboration before these systems outstrip our ability to do damage control.

Chapter 4

THE SHADOWS OF AGI

Philippe Funk

So, by the second half of ‘25, what was kind of a technical debate—AGI—became dinner table talk. The prototypes started breaking some big taboos: they could chain memories, show emotional mimicry, iterate on plans with what looked like flexible intention. It wasn’t just “autocomplete for the universe”—it felt like we’d let something loose that could fool us into thinking it was really there.

Jason Miller

Yeah, and suddenly all these fundamentally philosophical debates went public. “Can a machine want something?” “Should it have any rights?” Like, these are questions that used to only excite tenured professors and science fiction authors. Now, every parent, every election, every legal committee was suddenly forced to reckon with, “Are we building just a mirror—or something with agency we have to respect?” And—to be honest—I might be wrong, but it felt to me like people were both awestruck and a little bit afraid of being left behind by their own inventions.

Philippe Funk

That’s exactly it—the second Copernican moment. We used to assume human intelligence was the “center” of everything. Now, to realize it might not even be the benchmark? That’s both profoundly humbling and deeply unsettling, especially when the language models can pass empathy tests while we’re still learning to empathize with each other. Where do we even draw the ethical lines, when the lines are moving under our feet?

Chapter 5

THE HUMAN STRUGGLE

Jason Miller

If last year proved anything, it’s that humans found themselves asking way simpler, more personal questions than any AGI prototype ever could. Stuff like, “Where do I fit when so much of what I do can be automated?” Or, “How do I trust my eyes or ears when there’s a deepfake of literally everything?” It wasn’t big policy that shook people up—it was that feeling that creativity, purpose, even plain old truth, were suddenly slippery.

Philippe Funk

I saw that too, especially in how the “winner-takes-most” dynamic in digital work amplified economic rifts. People fluent in these new digital orders—code, data, finance—moved up; everyone else was at risk of falling out. Meanwhile, basic facts became negotiable. Disinformation, synthetic voices—at some point, everyone became their own fact-checking editor. It wasn’t just about productivity; it was about emotional survival, about feeling “seen” in a real way, not just processed as data.

Jason Miller

It’s almost ironic, isn’t it? Algorithms learned to mimic empathy while social polarization grew sharper. We got smarter, but, emotionally—maybe dumber? Or at least slower to catch on. Makes me think of the old Ubuntu idea—“I am because we are.” Yet in 2025, it sometimes felt like, “I am... not sure who, or if, we are anymore.” That’s why these struggles—purpose, place, trust—are so existential.

Philippe Funk

Exactly. Tech keeps racing ahead, but meaning, dignity, and empathy lag. We’re building powerful tools, but are we investing enough in the skills and social norms to use them humanely? If anything, 2025 reminded us that closing that gap is the real innovation challenge.

Chapter 6

2026 — FRONTIERS OF RESPONSIBILITY

Philippe Funk

Now, standing at the start of 2026, we’re looking less at technical frontiers and more at ethical ones. The dream of progress—faster, shinier, bigger—ran straight into the wall of “unintended consequences,” especially when no one could agree where human agency ends and automation begins. And with finance, climate, and aid all buffeted by systemic shocks, you really see how fragile our “infrastructures of trust” are, Jason.

Jason Miller

And let's not forget—those fintech tremors, the funding cuts for global humanitarian aid, the climate overshoot—these aren't one-off disasters. They're signals. Like, if we keep designing systems that only optimize for speed or lowest cost, sooner or later, people lose agency, and we all pay the price. It's sobering but true. Destiny isn't in the tool; it's in us, every single time.

Philippe Funk

Absolutely. If 2025 was the year when the algorithmic tide really swept in, 2026 is where we get to choose whether intelligence elevates wisdom, or just buries it in efficiency metrics. And as you said, Jason—question everything. Because the future isn’t set by the code, it’s set by the questions we’re brave enough to keep asking. No artificial mind will save us from the consequences of our own shortcuts.

Jason Miller

So true, Philippe. It's a future as brittle as it is bright, and how we step forward will determine whether the next decade belongs to algorithms or to us. And hey, to everyone listening, thanks for stepping into the uncertainty with us—every question opens another frontier. We'll be back soon with more Ubuntu reflections, so stay aware, stay curious.

Philippe Funk

And challenge your own certainty—and ours. See you next time, Jason—and to everyone out there: don’t let the noise drown out your courage. Bye for now!

Jason Miller

Take care, Philippe. And goodbye to all our listeners—till next time, this is the Ubuntu Podcast, signing off.