• Sarah Wilson

    Welcome to Wild, Audrey Tang.

  • Audrey Tang

    Hello. Good local time, everyone. Very happy to be here.

  • Sarah Wilson

    Where are you right now? I can see a sort of a hint of a map of Taiwan behind you. Are you in Taiwan at the moment?

  • Audrey Tang

    Yes. I am in Taiwan. I just returned to Taiwan after going through 28 countries last year. So finally, a bit of a break for the Lunar New Year with family.

  • Sarah Wilson

    Oh, nice. Very good timing. Look, I don't normally go into people's bios, into their life stories, because generally I'm impatient to get straight to their idea. I want to talk about the idea, the big idea that they've been working on. But would you mind if we actually do a bit of a speed overview of your life story? Because I think it gives some wonderful context to your work and what we're about to talk about today. I was thinking I could perhaps read out some key points I find really fascinating about your life story, because there's so much to talk about. And you can perhaps correct me if I'm wrong or add some extra context if you think it's required. Alright. So you were born into martial law in Taiwan. Is that correct?

  • Audrey Tang

    Yes. For almost four decades, Taiwan was under martial law -- one of the longest periods of martial law in human history.

  • Sarah Wilson

    Okay. Now, you were told at the age of four that you had a fifty-fifty chance of dying at any time. You had, I think, a hole in your heart that required an operation, but you couldn't have it for about eight years until you were a teenager. So you had to live in this perilous state of not knowing if you would live or die.

  • Audrey Tang

    Yeah. So I publish before I perish, every night.

  • Sarah Wilson

    Right. Okay. What do you mean by publish?

  • Audrey Tang

    So I document what I learned that day. Initially on cassette tapes -- maybe you still remember those -- and then floppy disks, and finally the early Internet. So even if I don't wake up, that's fine, because people get to work with the materials that I leave. And vulnerability -- publishing in an imperfect form -- is actually how I meet people online. They correct me. And I make friends this way, because perfection is boring. People just hit "like" and go somewhere else. But if it's imperfect, everybody's like, "actually, that's not how things work," and I get to make many friends.

  • Sarah Wilson

    Right. So you were doing that from a really -- I mean, you grew up with the Internet. But what I found really interesting, and this is sort of the next bio fact that I found wonderful -- I mean, you had a super high IQ, and by six were studying advanced mathematics. And at eight, you were writing code for video games, but you were doing it with pencil and paper because you didn't actually own a computer yet.

  • Audrey Tang

    That's correct. And that describes the experience of many very young people today. They just doodle something, sketch something, and then they see entire applications made this way. It's called vibe coding. So in a sense, I was vibe coding before I approached a computer.

  • Sarah Wilson

    Yeah, before it was a thing. I also read somewhere that you wanted to -- and I really love this, Audrey -- you wanted to learn everything you could and share it with the world. You must have kind of said that at quite a young age. How old do you think you were when that idea came to you as a vision?

  • Audrey Tang

    Yeah. Around four or five years old. I strive to be a good enough ancestor, so that even if I don't wake up the next day, fine -- because I open up more possibilities to the world compared to the world that I was born into.

  • Sarah Wilson

    Yeah. Right. Okay. So you were also bullied -- I read this, and I think it probably is part of just being a smart little child with sensitivities that were very particular. But you left school and went to a non-academic school, I think with the kids of tea farmers up in the mountains. And you went two days a week, but the rest of the time you were reading psychology and philosophy, in part from what I've read, to understand how bullies think, what it was that motivated them to bully somebody. I find that really interesting. I used to sit up in a tree and do the same, sort of try to nut out what it was. But I'm just wondering -- I think it's a really good time to sort of ask someone like you: what conclusion did you come to as a young child as to why people bully other human beings?

  • Audrey Tang

    I was bullied when I was eight, and the bully who led the campaign explicitly said that if I died, he would be top of the class. So I eventually transferred to that small school you just described when I was nine. I studied in six primary schools in six years, dropping out in the seventh grade. But the point here is that, yes, he was top of the class after I transferred away. But objectively, his score didn't change -- only his relative status.

    And so my conclusion was that if people build their self-worth on something that's extrinsic to them, then it's always unstable and leads to anxiety. But if somebody builds their self-worth on something intrinsic -- like my curiosity, my collaborations with people, and my civic care -- then those are like civic muscles that we can train in ourselves. It doesn't rely on competitive comparison. And people like that tend to be more confident, and bully less.

  • Sarah Wilson

    That's kind of like -- I hate this phrase, but it comes up a lot -- killing a lot of birds with one stone. Right? And we're going to get to that idea of building up civic muscle. That's a really nice way of viewing things. And I think, as I say, with where the world's at the moment -- we have a lot of bullies in charge -- it's a mindset that's worth bearing in mind.

    But you mentioned you did drop out of school. I think it was at 14. You had no qualifications. You didn't wind up going to university, but you were working with Harvard and Stanford even at that young age, sort of collaborating with them online. And there was a project that you were working on, on swift trust. And you were looking at how to leverage the fact that humans trust strangers online so readily. And I quote you here: "to build places that seem more safe to face uncertainty together." So this was really important to you as a young teenager.

  • Audrey Tang

    That is correct. Because, as I mentioned, I live with the uncertainty of not knowing whether I will wake up. So naturally, I have found that if there is a place that makes civic action not individual -- not just me protesting something, but us demonstrating something together -- then these kinds of spaces build trust very quickly. People deeply trust each other, because to give no trust is to get no trust. And by trusting each other so radically -- trusting others to continue our work even after we die -- this builds solidarity very quickly, compared to the more traditional way of a career ladder and so on.

  • Sarah Wilson

    Yeah. Well, Audrey, I think that's a really good point to come in with the Wild question that I have for you. And I'd love you to try to answer this. It's a really big one. It's a big existential one that I think is burning for a lot of people. That is: can we, as good humans, take back control of AI and steer things so that it can be used for good? Can we leverage it, muscle it off the tech bros who are sending it off in a direction none of us have signed up for, and use it in a way that actually serves humanity at a juncture when we're really needing all the help we can get? So as I say, it's a big question. But are you comfortable maybe just launching straight in with your intuitive answer, and we'll dig down from there?

  • Audrey Tang

    Yeah. Certainly. I think of course we can compost their model. The digital tools were built initially for extraction and so on -- but the same code can be flipped so that it works for overlap, not outrage.

    For example, many people have had the experience online that only the more extreme voices get a megaphone. And that has been the natural consequence of an AI paradigm from around ten years ago, adopted by all the major social media platforms, that prioritizes engagement through enragement -- because rage is stickier. And so, by flipping that virality code, in Taiwan for the past ten years we've been able to make viral the overlap, not the outrage.

    So for example, two years ago, many people in Taiwan saw deepfake advertisements online -- like Jensen Huang, the NVIDIA Taiwanese CEO, promising to sell you some cryptocurrency cheap, or investment advice. If you click, it's Jensen talking to you -- but it's not Jensen. It's a deepfake of him, running on NVIDIA GPUs.

    And so we sent 200,000 text messages to random numbers around Taiwan, and people volunteered to join in tables of ten. Each person talking to nine other people. And the ground rule is that the extreme voices don't make it out of the room. You have to convince the other nine people of what is a proportionate thing to do.

    And so people came up with good ideas. One table says, let's label all advertisements as "probably scam," like a cigarette label, until somebody digitally signs off on them. Another says, if Facebook posted an unsigned ad and somebody lost five million, Facebook should be jointly liable for the five million, because nobody had signed off on that ad. Another good idea. Another one says, if TikTok -- which at the time did not have a Taiwan office -- wants to ignore our liability rules, what do we do? Well, we'll slow down the connection to their videos by 1%, and every day they ignore us, another percent. Another good idea.

    And so those became law after just a couple of months. And throughout last year, 2025, there were just no big deepfake ads anymore on Taiwanese social media. It's down by more than 95%.

    And so my point being, just by switching this very simple rule -- making viral the overlap, not the outrage -- you can lean into swift trust and then action, to draw the red lines around emerging AI like deepfakes.

  • Sarah Wilson

    So the aim was to sort of weaponize this swift trust, but send it off in a good direction. And you've shown that it can be done. You've done it in Taiwan over the course of eight years -- well, longer, because you were an activist doing this kind of work before you became a minister. But I wouldn't mind backtracking, actually, and getting you to talk us through how you got those campaigns off the ground, and maybe refer to some of the specific campaigns that show what can be done with technology when it's used in a cooperative way. So do you want to give us two or three examples of some really great projects, to give listeners a feel for the work that you've done? Because that mindset, that approach, can then be used for what we're wanting to do here -- and that is wrestle back control of AI.

  • Audrey Tang

    Yes. I remember in 2014, the approval rating of President Ma Ying-jeou at the time was at 9%. And so in a country of 23 million, anything the president says, 20 million people are against him. But the problem is those 20 million people themselves are deeply polarized. One faction wants to free China. Another faction wants to be free from China. And they're equally strong. And so there's a lot of fighting.

    And so when the president and the parliament tried to pass an agreement with Beijing that would open up our cybersecurity, telecommunications, publishing, and broadcasting to Beijing investments, it triggered what's called the Sunflower Movement. So we peacefully went into the parliament and occupied it for three weeks -- not just as protest, but as demonstration. More than half a million people on the street, and many more online, joined these kinds of small-table discussions in many groups. We livestreamed many of them, each talking about a particular aspect of the trade deal. And every day, we read out the uncommon ground -- the surprising middle ground that people had managed to agree on the previous day -- and what remained to be debated.

    And after three weeks, we converged on a set of very coherent answers. The Speaker at the time, Wang Jin-pyng, simply said, "Well, the people's version is better than our version, so we adopt that -- go home." And so that is one of the very rare cases where the occupation actually converged, not diverged.

    And so after that, we the activists were tapped to join the cabinet as reverse mentors -- youth advisers to cabinet ministers -- so that we can solve emerging issues like Uber without occupying physical buildings, moving that online.

  • Sarah Wilson

    And that was a system that you built -- a technology platform -- to enable that vision. And so Uber was about to arrive in Taiwan, and everybody had a take on whether it should be allowed, what kind of rates, what impact it would have on the local taxi industry, et cetera. So you basically got Taiwanese people involved in a sort of co-creation of policy with the government around this.

  • Audrey Tang

    That's correct. And the tool, Polis, was built by people who were involved with Occupy Seattle. So it came from the Occupy network, not strictly from Taiwan -- we helped, of course, to translate it and to mobilize local adoption. The way it worked is that you would go on Polis and see a statement that says, for example, that surge pricing is great, but undercutting existing meters is not so great. And then you can agree or disagree, and you see yourself moving toward the group of people who share your views, and you see what unites them. You also see some other communities with opposing views. But you also see the overlap, where the left wing and the right wing both agree on these connective tissues. And these statements, what we call the uncommon ground, became law.

    So after three weeks, we solved the Uber problem -- not by choosing a winner or building a national champion, but simply saying the rural people need to be served too, with proper insurance and so on. So local co-ops and unions can also tap into something like Uber with dynamic pricing. Uber landed as a newly registered taxi fleet, but the taxi fleets themselves got much more flexible. So that's a win-win solution crowdsourced by the wisdom of the people.

  • Sarah Wilson

    Yeah, right. And I know that some young people were able to get involved in politics, and they put forward a really great case for why school hours should shift an hour later. They put forward the evidence that showed it was better for young people's brains, they learned better, and so the law changed. I mean, there are so many examples of what you were able to do -- policies that affected AI, consumers, even trade policies per the first example. And what I would like to know is: how many Taiwanese got involved in this process? Was it just a couple of thousand each time, a couple of hundred, or was it much more than that?

  • Audrey Tang

    So we institutionalized this process when I became the digital minister in 2016. Any case that has 5,000 people petitioning online can force a response from a minister. So each case, as you say, is a few thousand people. However, throughout the past ten years, I think around 10 million people were involved one way or another in one of those digital participation campaigns. So around half of our population. And also, it enabled people who did not have voting rights -- for example, very young people, immigrants, people who are vulnerable in many other ways -- to finally have a voice and set the agenda for the country even before they turn 18.

  • Sarah Wilson

    Okay. So I think that gives people a really good understanding of how you are using technology to get civilians involved in policymaking, and using AI to do this. You use a phrase -- and you actually just mentioned it at the beginning of the interview -- you used the word "composting," which is a wonderful Joanna Macy term. A few people in this space use this term, and I think it's a wonderful word. And in your next book that you're working on -- and I've read the working paper, I'll put a link to it for everybody in the show notes -- you use the phrase "techno-communitarianism can compost technofeudalism." So I think that's kind of what we're at here. We're trying to work out if that is actually possible. You're saying it can. Can you explain that phrase? Can you explain how it works? And then I'm going to drill down with you with some "what abouts" that I have in the back of my mind, and hopefully I'm representing what listeners and viewers are thinking. So tell me what you mean by this idea that this communitarian use of technology can compost this dark use of technology -- the technofeudalism, the technofascism, the tech bro takeover of technology. How does that work?

  • Audrey Tang

    Yes. To me, using tech to foster communities is more than putting a human in the loop of AI, which would be like putting a hamster in a hamster wheel. The hamster, you know, runs on lots of dopamine, but there's no steering, and the wheel isn't going anywhere. Instead, it's about taking AI to empower the humanity loop -- AI in the loop of communities.

    So it's not about being nice to tech bros. It is about composting the building materials of their model -- the same digital tools -- and then just rewiring them using free software, open protocols, and local models that serve the neighborhood and not the shareholder.

    For example, we built the Polis bridging algorithm that makes viral the overlap. And the same algorithm is now empowering Community Notes on X.com, on YouTube, on Meta, where people can post notes attached to a viral post. But the note only gets visible if it gets upvotes from both sides of the political aisle. And in upvoting and downvoting, there is no dunking, no quote-tweeting, no replying even to each other's community notes. So people can build swift trust by taking a group selfie and seeing what is the unlikely common ground between opposing parts of their community.

    And so that bridging system is one of the composted moves. It is now getting very mainstream. Community Notes are now being drafted by language models that communities themselves help train. And so instead of just seeing a fact-check that comes too little too late, you can see a note attached almost right away when a post is going viral. So this is what we call prebunking, not debunking. Instead of waiting to clarify something, you prebunk -- and the prebunk travels with the viral post.

  • Sarah Wilson

    Before it becomes fake news and gathers a momentum of its own. You're talking about it in a way that suggests that this is already happening.

    So can you just explain to me and all of us listening: where is it happening? Is it alive and well? You were talking about it almost as though some of these AI companies are happily adopting it. Is that the case?

  • Audrey Tang

    The Community Notes team on X.com -- well, now it's xAI -- has rolled it out literally this week, the week of mid-February, when we're recording. And Zuck, I think, said Community Notes work better than the third-party institutional fact-checkers they used to work with.

    Yes. And so I think this is a very simple case. It's a Buckminster Fuller move, right? It's not to attack the old model. It's to build a new one -- like a trim tab -- and pivot into the new one, which makes the old one obsolete. That's composting.

  • Sarah Wilson

    That is -- okay. Got it. So you take the bits that are a bit rotten and let them feed the new version in a productive, vibrant way. Well, that's really heartening news to hear. I mean, gosh. You've actually got them to think that it's probably their idea. This is your tactic, isn't it? You go in there. You work alongside an organization. You identify where they're not quite making something work. And you make the new, more pro-human, pro-social version of things so much easier and more palatable and sexy that they go, "Oh, okay" -- like the Speaker in Taiwan said, "Ah, well, you sort of have a point. It's a great idea. We can't argue with it. We'll give it a go."

    Very surprised to hear about Zuckerberg and X taking on these ideas, but obviously very heartened. Okay. I've got a feel for what you mean by "compost technofeudalism." But what I'm going to have to do is hit you with a couple of "what abouts" that are floating around in my head. They've stalled me in this whole realm. I've spoken to various people -- Vanessa Andreotti, other people who are really trying to work with a more positive version of AI. And these are the kind of questions I find myself asking such people.

    So I want to know whether this idea of taking AI in a positive direction, should we be able to pull it off with your wonderful ideas, is it materially and ethically possible? Is there enough viable, cost-effective, affordable energy that won't crash the financial system? Is there enough energy to do this? Are there enough rare earth minerals? And is it ethically possible to do what you're talking about?

  • Audrey Tang

    Yeah. So the energy consumption of training AI models is only large if you don't know what you're doing. That is to say, if you train for a so-called general intelligence -- something that can not only do math but also memorize all the Studio Ghibli movies, or whatever -- then of course it's very expensive to train such models. But if you look at the actual models that are used in community facilitation, they don't need to memorize all of that.

    In the table of ten, the facilitating model only needs to know enough to translate, socially, between "climate justice" and "biblical creation care" -- these two communities share overlap. And these are very specific models that can be trained on a specific community's input. So to summarize, to translate, to facilitate -- it's almost like a glorified chess clock. It does not require a large model. In fact, studies have shown that tiny models are better -- less is more -- because they don't hallucinate as much as the large models.

    So it can use 1,000 or even 10,000 times less energy if you want to do specific-purpose things compared to general-purpose things. If people know what they are doing and also know what they want their AI models to do, then a collection of 100 small models -- even if you train all of them -- is still much, much less in energy consumption than training one super galaxy brain.

  • Sarah Wilson

    And would you say that it's comparable with general Google searches -- the energy that's used for general computing services that operated before AI? It wouldn't be blowing it out by too much?

  • Audrey Tang

    Yes. If it's a narrow model, then it actually runs locally. And the good thing about things running locally is also that then you're not dependent on the feudal storage -- also called the cloud -- and you own your data. All your data also has a copy locally, even before you send it out.

    For example, before you send out an email, maybe you draft it on your computer. If the drafting is done by a cloud AI, then that cloud AI also has a copy of all your thoughts before the email goes out. But if the drafting is done, as I do, locally by a narrow model that knows my writings but not Studio Ghibli, then it stays very secure and safe. It kind of doesn't matter which AI model vendor you use, because all the data is local and the training is done locally as well.

  • Sarah Wilson

    Got it. All of that makes a great deal of sense, and I can see that's what we should be doing. We don't need the level of sophistication that the tech bros are talking about with their AI. Singularity is not necessary, nor is it something that most people want. What you're saying is we need to dial down what we want AI to do to actually serve us. And generally speaking, the change, the world that we're going to have to inhabit going forward, is going to have to be localized, because that is where change comes about.

    The question I have for you, though, is: how do we get it back off the technofeudalists who are on a hell-bent trajectory to making this as big and cloud-like as possible? They don't care about what resources get used up. And in fact, I think that's part of their grand plan -- to extract, extract, extract and collapse the whole system so that they are left with the remaining techno-system.

    How do we get around that? Because part of all of this is that the neoliberal colonialist patriarchal framework -- AI emerged from it and replicates it. And the system has kind of eaten up all of this. It eats up any models that try to counter it. And I'm wondering how you think that we can go from this idea that makes tangible, humane sense to actually presenting that to the techno-overlords and saying, "Well, this is actually a better way of going about things." Because as you know, they own the social media. They have algorithms that keep us distracted and addicted. They've taken over children's minds. They own the surveillance technology. And they're all working in cahoots. The PayPal mafia is real. So that's the bit that I think a lot of people are struggling with. I love the sound of all of these ideas, and that is where we should be heading. But how do we counter this behemoth?

  • Audrey Tang

    Well, a month ago or so, Reuters had this exposé that says after Taiwan solved their deepfake scam issue with joint liability -- so that Facebook and other platforms cannot profit from scam ads anymore -- the share of revenue generated by scam ads actually went up for Facebook. And the reason is that Facebook reroutes those scam ads to nearby jurisdictions that have not yet passed those laws. And because of Taiwan's effort, Facebook now knows which ads are scams, but they don't disclose that to the police officers in those other jurisdictions.

    And so there's now a time gap between Facebook knowing there's a scam ad and the target jurisdiction's police knowing about it. And during this window -- thirty hours or so -- Facebook massively increased its share of the pay-per-click revenue from those scam ads, because they can. Right? So their revenue actually grew. So this is what's called a perverse incentive. It actually makes those target jurisdictions, like Japan, see more scam ads as a result of Taiwan passing this law.

    And so I shared that with a few former digital ministers of Japan, and they're saying, "We're going to do what Taiwan did in another month or so, because this is too much."

    So I think you're pointing to a real dynamic. If the algorithm that the advertisement engine is using is told to maximize revenue and nothing else, then it approaches that in a way that is literally careless -- like, none of the care principles apply. However, once we make it common knowledge that actually joint liability fixes that very easily and quickly, and involve people in exactly how that joint liability should work, then we can regulate them. It's actually not rocket science.

    We have done that before -- mandating portability. When you switch from one telecom to another, you can take your number with you. So the telecoms have to compete on quality, not on capture. We regulated freons -- the refrigerants that depleted the ozone layer -- by making sure the regulation forced a technological pivot. So some years after the Montreal Protocol was signed, any refrigeration company still making the old freons would be committing a crime against humanity. We've done all that before, and this is not particularly different.

  • Sarah Wilson

    Except I would say that the tech bros also have their dirty fingers all the way through politics. And we can see in the US that they've got a very close relationship with the Trump administration. So being able to make legislative changes like you were speaking of has become that much more difficult. I would say almost impossible. We've crossed some kind of threshold.

    So is that something -- I know that in your next book, you talk about these awkward uncles who represent the three different arms of the AI movement. And they've each got a flaw in their argument. It sort of speaks to what you're talking about here. Basically, they're missing the human element. And so humans are always going to want to gravitate toward a more humane, relational kind of option if they're presented with it.

    So I'm going to sort of steer you toward an answer that I've come across within your work, which I don't quite understand. But you do point to greater Asia. You sort of meet people like me who've got these "what about" questions, and you say, well, okay, we're talking about the US, China, and Western Europe. That's sort of what most of us have been concerned about. But you're saying that these other countries -- such as Singapore, India, Japan, and of course Taiwan -- are actually behind the scenes doing their own thing and making some changes and choices that could shift the argument. Do you want to speak to that a little more? Because when I read about that, I thought, oh, that could actually answer this piece of the puzzle.

  • Audrey Tang

    Yes. Definitely. So there are three uncles that each come from a different house, a different college, if you will. And they think that their particular future is the future, and what we are in now is just a transition phase.

    One house says exit is the future. So maybe people who don't like other people can just form a new nation. First on a new island, and eventually on the Moon, on Mars, many other planets.

  • Sarah Wilson

    That's the Elon camp. Yep.

  • Audrey Tang

    So exit. Let's just build somewhere that you don't have to share the atmosphere, essentially, with the rest of the planet. That's one thought.

    Another thought is what I call the provision house, which says that machines will automate every aspect of production of value. So we basically just receive universal basic income and watch VR entertainment, or whatever. And have no voice, no labor, no meaningful work -- but plenty of abundance. That's another thought.

    And the third thought, the safety house, says that AI is Terminator's Skynet, going to kill everyone -- and that before that happens, it's much better if we build a global military dictatorship that can monitor every move from every human. So we don't get killed by AI; instead we just get dominated by AI, which I'm not sure is better.

    But anyway, the point is that all three assume that there is this huge apparatus -- the entire Mars colony, or the entire Skynet, or the entire anti-Skynet -- that directly mediates between individuals and this large system. There's literally nothing in between. No church, no temple, no communities, no unions, nothing. But those aren't the only three houses. There's also a Hufflepuff, right, which doesn't produce dark wizards.

    So if we can go instead to a community house -- the community college, instead of the other three -- then the argument here is that in Japan, Taiwan, Singapore, India -- especially in Taiwan -- the more highly educated you are, the more likely you are to openly practice a faith. I'm openly Taoist. My premier, when I was minister, was openly Catholic. VP Chen Chien-jen literally said, during parliamentary interpellation, that God talks to him directly. And in Taiwan, no religion is dominant -- every religion is like one quarter or less of the population, so it's very diverse. But the point is that spiritual and civic communities keep our social fabric together, so that when emerging technologies come, we don't get pulled apart the way some Western societies have.

    And that is the main argument. We can use technology to foster community, so the wildfire of anti-social media gets tamed, domesticated into small fireplaces, into campfires, into bonfires -- where we can see each other very clearly, but not the wildfire.

  • Sarah Wilson

    Got it. I talk about having moral umpires that used to be able to monitor and place guardrails around what mattered to us as humans. And moral umpires included the church, and scout leaders, and community leaders, and philosophy, and thinkers who could actually say, "Hang on, guys. Let's think about this. Do we really want television? Do we really want the Internet?" And those have been wiped off what I call the footy field of life by neoliberalism. They said, "We don't need these things." And so it's no wonder that the US, Europe, Australia, and the UK, and increasingly China, are now kind of guided by this idea that we don't need that middle level, those moral umpires that hold the fabric of society together.

    So if I'm hearing you correctly, Audrey, what you're saying is that's one model that's supremely awkward and problematic and is going to see the downfall of humanity because we don't have these moral umpires. But you're saying that greater Asia has got another thing going on. They've managed to hold on to these traditions which are in turn able to monitor to what extent AI comes in and takes over.

    So how do we take that model and spread it? Or do you feel that what's going to happen is that we're going to be so desperate, we're going to be looking over at what India and Japan and Taiwan are doing and going, "We're probably going to have to do something along those lines"? Are they just going to become more dominant because they've got a better model that's more humane and attracts people to their methodologies? I'm putting words into your mouth, but I'm just trying to understand how that can spread -- because sitting somewhere between Paris and Australia at the moment, I'm feeling like the awkward uncles have really taken over the planet.

  • Audrey Tang

    Well, you mentioned a while ago that in the US, the legislature has been captured, so that the umpires have no voice. And the empire has more voice, I guess.

    But in our experience, for example, we work with the state of Utah and Governor Spencer Cox, who made international news during the Charlie Kirk tragedy, and who said essentially that social media is to blame. And it's not just that -- he also demonstrated how to fix it.

    Starting this July, if you are a Utah citizen and you want to move off, say, TikTok into, say, Bluesky or Truth Social -- those are open-source -- then the old network has to keep forwarding new likes, new reactions, new followers to the new network. Exactly like how the telecommunication companies offer number portability. And when Governor Cox signed the Digital Choice Act, he was joined by people from both parties, so it's a true bipartisan agreement in Utah that technology must foster communities.

    Now, of course you can say, well, Utah is small, it's Mormon, traditional, and so on. But the fact is that once Utah starts doing so, you'll see it being seriously deliberated now, not just in North and South Carolina but also in Vermont and New York -- because Utah isn't colored coastal. It is colored communal. And so it has a real way to spread.

    So my point is that if we want to foster the uncommon ground, we need to start where the bridges already are.

  • Sarah Wilson

    Do we have time, Audrey, to pull this off? Given the clock is ticking, the awkward uncles are getting a lot of traction and a lot of power. I write and talk a lot about collapse -- the systems basically being too complex to be able to go on as they are. How do you feel about that? Do we have the time to use human goodness and humankindness to switch this whole thing around?

  • Audrey Tang

    Well, I mean, when you're composting -- when a new infrastructure is growing, reusing the components of the old -- at some point, the old becomes obsolete and probably crumbles. So collapse is maybe the right word if you're focusing on the old institutions and the old system. But that's fine. Because we already have then the spaces of resilience within collapse -- islands of sanity, pockets of good. So that's exactly what we're building.

    So the idea here is not that we replace one AI singularity with another AI singularity. The point is that we, the people, are already the superintelligence. And once we foster our connections using technologies, we can build communities through swift trust even across very wide geographic distances -- like entanglements between one hyperlocal community and another. Each is dedicated to the flourishing of its own hyperlocal area -- small, but those many islands of sanity can connect together. Each builds for good enough, not forever. When each is satisfied, we become good enough ancestors. That is the logic that will counter the singularity. So when things collapse, that's fine, because the resilience network holds the space.

  • Sarah Wilson

    Yeah. That brings together so much of what we've been talking about in the last series of Wild that I've been writing about for a number of years now. And that's this idea: we can't fight these big awkward uncles. We also can't stop the collapse that's happening. But what we can do is, A, build these islands of sanity -- to use the Meg Wheatley line -- so that we are steadfast, sturdy. We've got our humanity intact. We're together. We're talking about the kinds of stuff we're talking about here. So we do that. We basically get ourselves strong and sturdy.

    Then at the same time, as the old system is dying, which it is, we've got no control over that, we kind of come in like a bunch of worms. And we eat up the bits that still have nutritional value. So we take from the old world what can serve us as we go into the new world. I mean, that's what you mean by composting. So we're not going to actively destroy. We're not going to resist the destruction. We're just going to come in and take what can still serve us, in a polite way with a big smile on our face, so that the awkward uncles don't even know what's happening.

    And the organic, emergent, entangled way of life sees a movement from this technofeudalist concept into the techno-communitarian concept. And I know there's a phrase you used from early in your activist days: "fork the government." That was what you were doing with the Taiwanese government. You were coming along and building technology alongside their technology -- their PR campaigns for new policies that nobody could understand. So you just came in, big smile on your face, not being antagonistic. You basically just sat alongside them. You forked off alongside them and rewrote their policies in a way that people could understand. So people started going to your site.

    And the government -- I mean, it's a little bit illegal because you're kind of hacking. That's what you're essentially doing. But because the government could see it helped them, they didn't push you away or arrest you. In fact, they invited you in. And is that kind of what you're seeing happening with composting technofeudalism? It's like, we can't beat them, so we sit alongside them, we do it better. And because we're so sturdy, and because we've basically got this big advertisement that says "pro-human," people are going to gravitate to us, not them.

  • Audrey Tang

    That is exactly correct. To make the gravitation easier, in Taiwan in 2012, a bunch of my friends registered this domain name: g0v.tw -- "gov-zero." So for each government website, which always ends in something.gov.tw, that we think can work better, we fork and make exactly the same site -- but at something.g0v.tw. So changing the "o" to a zero in your browser lands you in this forked government that works better.

    And because we're always radically transparent -- open-source, Creative Commons, public domain -- the government can just look. For example, in Taiwan we have 20 national languages, including 16 indigenous languages and sign language, and we built a multilingual dictionary that put them all together. The government was like, "Oh, that's actually a better idea." Then all they have to do is merge it in.

    The same with COVID. The g0v people invented a better way to visualize the nearest available masks, which depolarized the debate about mask use, plus a better vaccine registration flow and contact tracing without sacrificing privacy. And each time, instead of protesting that something is bad, we demonstrate that something is good. And then after a few days, the government merged it in, and it became national infrastructure.

    So this is exactly like how the Polis bridging algorithm finds its way into Community Notes, into the X ecosystem -- which will hopefully make the feed itself more depolarizing. The more you look, the less you feel polarized. And that may also feed the uncles better.

  • Sarah Wilson

    There are two things I just want to bring up there. First of all, I should mention that Taiwan wound up having one of the lowest mortality rates in the world. And I think very much as a direct result of how the country was able to come together in and around these policies. They bought into it. They were co-collaborating. And I think during the height of COVID, you mentioned before that trust in government, when you first started, was at 9%. I think it reached 91%, right?

  • Audrey Tang

    91% for the counter-pandemic effort. And 71% for the president. So if we're comparing president to president, that's 9% to 71%.

  • Sarah Wilson

    That's a huge leap, particularly at that time in history when a lot of trust in government went down. So yeah, your ideas really do give me a lot of pause, a lot of reason to feel that there is something going on here.

    The second thing I wanted to say -- I'm thinking of something that Seth Godin told me many years ago. He was saying the most effective way to get people to switch -- and he was talking about consumer choices because that was the era, probably fifteen, twenty years ago now -- is to basically present an image that says, "People like us do things like this." So it's not about desperately trying to convince anyone. It's about getting on with the thing that you are promoting. And basically not looking miserable while you're doing it -- looking like you're actually enjoying it, that this is the way to do it. And it's the islands of sanity concept. We basically become an advertisement for the thing that we're saying is going to come.

    It's a hard thing to have faith in, isn't it? Because it's not the way the linear material mindset has worked for the last 200-plus years. But I think on an intuitive level, we as humans feel it to be right, feel it to be a possibility.

    So I'm going to ask you a penultimate question. It's not something that I feel comfortable asking people on podcasts, and in fact in any kind of public realm. But do you subscribe to these ideas? Do you live out these ideas because you feel that there's no other way to be human at this time, at this moment? Because you also have a sense of service -- you want to be of service to humanity, you want to share these ideas. Do you really adhere to these ideas because you feel, well, in the absence of anything else, this is what we need to be doing? Or do you feel that they really are going to work? Or is it somewhere in between?

  • Audrey Tang

    Well, I think these are going to be good enough. They're not going to work perfectly. Because, to quote Leonard Cohen: "Ring the bells that still can ring. Forget your perfect offering. There's a crack in everything -- and that's how the light gets in."

    If we become perfect ancestors, we design away the choices of our next generations. But it is my belief that seven generations down the line, people are going to be unimaginably wise and smart compared to me. So who am I to over-prescribe what to do? My job here really is just to satisfice, to be good enough, to open up the canvas a little bit more for the next generation.

    My reverse mentors, when I became a cabinet minister at 35, were all under 35. And they are much wiser, more intelligent, more curious, and more collaborative than my generation. And I don't see that trend stopping. And so to your question directly: yes, I believe these are going to work for one generation. And that's good enough. I don't need to worry about whether the next generation, or the generation after that, will adhere to these ideas -- because these are provisional anyway. They're going to face a very different world, and they're going to come up with their own ideas.

  • Sarah Wilson

    Thank you for understanding the gist of my question and answering it as you have. I take it as: you're in the muck and the mire of it, and you're doing what you ought to be doing as a good human at this point in history. And where it heads -- I mean, I feel some optimism in your methodology, in your approach, because at the bare minimum it allows us to be human as we do this. It allows us to be fully human at this moment. So I get it. And there are no promises, of course.

    Audrey, there's a question I'm going to be asking every guest in this series. And it's sort of a way to wrap up. What are you doing right now to stay human in this? To cope, to get through this, to tread from one day to the next. I'd love you to be as specific and intimate as you like in terms of what you do, what mindset you adhere to.

  • Audrey Tang

    Right now, I'm looking at you with only a hint of color. You are mostly grayscale to me. And that's because all my screens have the setting called "color filter" on. And the thing it does is that it lowers what I call the PPM -- the Polarization Per Minute. So not the carbon parts per million, but on social media and with notifications and so on, you can just feel that much of the design is trying to increase the polarization per minute, which makes swift trust and collective action very difficult.

    And a personal defense against that is simply to make reality more bright, more colorful, more real than your screen. Because your screen is artificial light -- it was designed to be more vivid than the background reality. So just turning on the grayscale filter makes your phone, your screen, the background. And reality, the foreground.

  • Sarah Wilson

    Wow. Is there science behind that? I imagine that it's about tapping into dopamine levels and the distraction of bright, poppy colors that pop up everywhere?

  • Audrey Tang

    Yes. I'd love to contribute to the show notes after this. But the point is that once you start doing this, you will feel a sense of calmness that makes doomscrolling almost impossible. So none of those AI-generated videos are going to mobilize you. And my eyes start going to the window -- to the trees outside, to the sun shining now -- to many other places instead of this small screen.

  • Sarah Wilson

    Gosh. That is like a silver bullet. I'm going to give that a go.

    When we were fiddling around trying to get my cameras sorted, you did say to me that one of the ways you remain so thoroughly human in what you do is that this is the way you've had to live your life since you were told at the age of four that you could die. Do you mind sharing a little bit of that?

  • Audrey Tang

    Sure. So when I was four, I learned Taoist breathing -- qigong energetics. And the core of this is essentially that if my heart rate goes above a certain number of beats per minute, then I just faint, because of my heart condition. So I cannot feel too much joy. I cannot feel too much upset. I cannot feel outrage, because it literally kills me.

    And so by breathing and listening deeply, I find myself having much more room in my mind. I learned very early on, when reading a book or listening to somebody talk enthusiastically: don't interrupt them -- neither the author of the book nor the speaker -- for five minutes. Still today, I can only do about five minutes without any interrupting, judging voice in my mind. But if I've had a good night's sleep, I can take a little mental note and then continue on for fifteen minutes or so.

    But that is quite profound, because it means that attentiveness is what we need. It is not attention -- not a super-focused attention. It is more like an ambient attentiveness that lets you completely merge your horizon with the book or with your interlocutor. And when the other person experiences this deep listening -- being listened to, attended to for five minutes -- something in them also changes, and their heart rate calms too. It's like co-regulation.

  • Sarah Wilson

    As you were saying all of that, I could feel into what you were saying, into that attentiveness. I was also feeling the error of all my ways -- I've got this excitability. I would be long dead if I had your condition. But that's a wonderful lesson. And I imagine also, living with a condition like that, you had to be steady to survive. You were also no doubt lonely, and you had to work out how other humans operate to be able to do what you do. I love the picture that all of that forms.

    Audrey Tang, you are an incredibly inspiring person, and I feel very honored to have had this conversation.

  • Audrey Tang

    I really enjoyed this. Thank you. Live long and prosper.

  • Sarah Wilson

    Thank you. That was wonderful.