
Welcome back to the live stage. I am Joe Wang; you saw me a little while ago.
Audrey Tang, thank you for joining us. Channing Lee is here with me as well.
Audrey, you have been at the AI+ Expo every year.

Every single year.

Just like us.

We do it just for you.

You also come to Taiwan every year.

That is true, at least once a year.
You have been in the middle of both the AI conversation and the expo from the beginning. I would like to hear your reflections on the last few years. What has changed? What have you seen that's new, especially with AI developing so quickly?

This is a strong convening place. I was just telling Channing how good it is to bring together people who approach AI from very different starting points.
Some people think about AI in almost purely mathematical terms. For them, alignment sounds like a formula that can be solved, after which the system becomes fully reliable. Other people are much closer to practice: policy, competition, institutions.
Three years ago those groups often did not speak the same language. Now, because of SCSP and gatherings like this, more people understand that real AI alignment also depends on the social infrastructure around the system and on public awareness. That awareness is at a very high level today.

Before we go further, I should introduce you properly. In my excitement to have you here, I skipped the setup. People know Audrey as a badass, but for anyone who does not: you were Taiwan's first digital minister and now serve as Taiwan's cyber ambassador. It is always incredible to have you here and to get your perspective.

Audrey, I think you are a wonderful voice for Taiwan around the world. People speak with you and come away in awe: here is this genius going around showing what Taiwan can be. I always love seeing you, whether around the world or here at the expo.
Because you have been here three years in a row, just like us, was there anything new this year that surprised you?
Last year, when we recorded a podcast, you said you wanted a boba station. We failed on that, so I apologize.

But we did have Taiwan Day as part of Day 0.

And you joined us for Taiwan Day on Day 0. What did you like this year? Was there something cool you want to talk about?

On Day 0, I especially enjoyed the conversation with Anka Lee (former Deputy Assistant Secretary of Defense) about the civic layer around AI adoption, or the civic muscle that has to be built around it. In the U.K., some people are calling this "muscular adoption."
The idea is straightforward. Instead of assuming there will be one giant AI system in the cloud that everyone subscribes to, people are realizing that this is not always the right direction. A single huge opaque model that folds proteins, folds laundry, and claims to fold everything else into one box is not necessarily what communities need.
What we are seeing now is more narrow AI, edge AI and domain-specific AI. These systems let people tune the tool to their own needs.
A good recent example is OpenAI and ROOST's safeguard-model work. The model can compare an AI decision at supersonic speed with a pre-approved rule set and ask whether the autonomous action stays inside or goes outside the boundary.
The important part is that this is small and explainable. It can point to the relevant part of the pre-approved agreement, whether with the Department of War or any other user, and indicate why the action is inside or outside the boundary. It can do that quickly enough to support uses that were mostly speculative a few years ago.
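The boundary check Audrey describes can be sketched as a toy rule matcher. Everything here is an illustrative assumption, not the actual OpenAI/ROOST implementation: the `Rule` fields, the `POLICY` entries, and the keyword matching are all hypothetical, and a real safeguard model interprets the policy text with a small language model rather than matching keywords. The point the sketch preserves is that the checker returns both a verdict and the specific pre-approved clause that explains it.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    rule_id: str
    description: str   # the clause the checker can point to
    allowed: bool      # whether matching actions stay inside the boundary
    keywords: tuple    # hypothetical trigger terms for this toy matcher

# Hypothetical pre-approved rule set agreed with the deploying organization.
POLICY = [
    Rule("R1", "May query public weather data", True, ("weather",)),
    Rule("R2", "Must not initiate payments or transfers", False, ("payment", "transfer")),
]

def check_action(action: str):
    """Return (allowed, rule): the verdict plus the clause that produced it.

    Mirrors the transcript's idea that the checker is small and explainable,
    pointing to the relevant part of the agreement rather than just saying no.
    """
    text = action.lower()
    for rule in POLICY:
        if any(keyword in text for keyword in rule.keywords):
            return rule.allowed, rule
    return False, None  # default-deny anything the policy does not cover

allowed, rule = check_action("initiate a payment transfer of $500")
print(allowed, rule.rule_id if rule else None)  # False R2
```

Because the verdict carries the matching rule, an audit log can cite the exact clause, which is what makes the check explainable rather than a bare yes/no.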

Another major trend is agentic AI. You are a world-famous hacker and coder. How is this shift affecting how you do the things you used to do? Have you built AI agents? Are there agents you can share, maybe for travel or jet lag, given how much we all move around the world?

I have been to 28 countries over the last couple of years and I have not had jet lag, only jet boost. But that is not because of agentic AI; it is because of the Timeshifter app, which helps me change sleep patterns with light exposure, coffee and similar cues.
That connects to the earlier point: the useful system is a specific, scoped tool that takes care of a specific relationship.
Taiwan had a large surge in online deepfake scams a couple of years ago. We did not respond by saying that censorship should be imposed top down by the government. Instead, we used what we call Civic AI: focused agents that listen to groups of about 10 people at a time and identify the uncommon ground between them.
For example, Taiwan now has joint liability for social-media companies. If they carry an unsolicited ad that is not digitally signed, and that ad is a deepfake that causes people to lose large sums of money, the platform can be held responsible.
There is also a rule for foreign social-media companies that ignore the liability framework. For each day they ignore it, Taiwan can slow connection to them by one percent.
But the point is not content censorship. The point is that Civic AI helped show broad public agreement around a practical anti-fraud rule. It supported summarization, translation, and social translation, so people from different ideologies could see the same agreement in terms that made sense to them.

I remember talking with you about this in Taiwan, though I have lost track of which trip it was. Do you feel these ideas are now being adopted more widely?
You had raised this idea at the Ministry of Digital Affairs: can some rules and regulations be enforced without turning everything into conventional law?

The law was passed in 2024. In 2025, deepfake impersonation scams in Taiwan fell by more than 94 percent, so that problem is now close to solved.
Because of that, we started exporting the idea. In Japan, not only the LDP but also a new party, Team Mirai, uses broad listening as part of its platform. The people at DD2030 are also training edge AI systems for fraud detection without censorship.
The same broad-listening approach is now in California. Just yesterday, the state launched Engaged California as a listening practice. If you are a California resident, you can take part and share your concerns about how AI automation affects your work.
And the question is not only about jobs. It is also about apprenticeship, belonging, community and dignity. The process can find uncommon ground among Californians and inform legal or economic responses to frontier labs.

I appreciate how much work you do with California, and I always like giving my state a shout-out. We should also get you to Southern California. People in Northern California may already get it because of the tech bros, but Los Angeles and Orange County should hear these ideas too.

I would love to.

Stop schlepping for California.
One of the other trends SCSP watches is AI adoption: how people think about AI as a tool, and whether they see it positively or negatively. A Gallup poll last year suggested that Americans were almost evenly split, with roughly half fearing AI.
How do we make AI feel more inspiring and more like a tool, not inherently good or bad, but something people want to try, learn, and use to accelerate innovation?

The anxiety is justified. Many people's first exposure to strong AI-like recommender systems was on social media.
About 10 years ago, people began seeing those systems push unsolicited information to keep them doomscrolling. The pattern was engagement through enragement. People might post "dear algo, I did not want to see a certain kind of content," but the algorithm did not respond.
The steerability was very low. The system inferred revealed preferences, such as what content kept someone addicted, while ignoring stated preferences, such as the person's wish not to become addicted.
That lack of steerability is a major reason people fear AI.
In Taiwan, broad listening and Civic AI have been used to steer systems toward what people actually want, such as in the Uber case. In places where people do not see that kind of steering, the fear is understandable.
Much of my work over the last couple of years has been with frontier labs on this problem. Anthropic has gathered qualitative input from 81,000 people about their hopes and fears for AI systems. OpenAI and Anthropic have published model specifications or constitutions.
The next question is how to close the loop. If people say they are unhappy with part of a model specification, perhaps because they do not want the model to be so flattering or sycophantic, how does the lab return and show what changed because of that input?
That kind of model-specification or constitution steerability is the frontier. It can turn fear and anxiety into collective agency and civic muscle.

Eric Schmidt, our chair, used to say during the AI Commission period that we were at the beginning of the beginning. Now it feels as if we are at the end of the beginning and moving into a middle phase. AI is out in the world, people are trying to make tools more useful, and there is a lot of Global South engagement too. Yesterday, for example, we had an Africa AI village here.
As we wrap up, what are you most excited about over the next six months? What do you know is coming, or what do you most want to see?

I would point to social translation between communities.
A couple of weeks ago, X rolled out collaborative notes. In the past, if you saw a controversial post on X, you often had to wait hours before a Community Note clarified the context. Now Grok can draft notes that translate between communities.
For example, it might bridge a climate-justice audience and a biblical creation-care audience, writing a note that speaks to both groups rather than only one.
Previously, that mediator role, almost like waging peace, was extremely taxing for a person to perform. Now the drafting can be done at industrial scale. Research is already showing depolarization effects when people can steer the Grok drafts.
This pattern is spreading to Bluesky, and I hope it spreads to Truth Social and other corners of social media. AI-mediated exposure to social media may finally make prosocial media possible.

That is really interesting. The more you describe it, the more I agree. Sometimes it is hard for a human mediator to be seen as fully unbiased. If a machine can present both sides and aggregate the information, it may be harder to dismiss the mediation as partisan.

It is like a campfire. It helps people see one another more clearly.
That is different from the current wildfire dynamic, where everyone is shadowboxing with everyone else.

That is a strong note to end on. Audrey, it is always good to see you. Thank you for joining us and for coming back every year to the expo.

See you next year.