Unofficial reproduction. The official video record is at senparlvu.parl.gc.ca. Reproduced under the Senate's standing reproduction permission: accurate, not official, non-commercial, and not carrying parliamentary privilege. Copyright in the proceedings is retained by the Senate of Canada; remarks of senators and other witnesses remain with their respective authors. The repository's CC0 dedication does not apply to this document.
The Standing Senate Committee on Human Rights met by videoconference on Monday, April 27, 2026, at 5 p.m. ET, to examine and report on the impact of artificial intelligence on human rights and economic security in Canada, especially in relation to vulnerable groups and the international human right to work.
Senator Paulette Senior (Chair) in the chair.

I will now introduce our second panel. Our witnesses have been asked to make an opening statement of five minutes each. This will be followed by questions from senators.
With us at the table, from Knockri, we welcome Jahanzaib Ansari, CEO; and via video conference, please welcome Audrey Tang, all the way from Taiwan, the Taiwan Cyber Ambassador. Thank you both for being here. I will now invite Mr. Ansari to make his presentation, which will be followed by Audrey Tang.

Thank you for having me. I greatly appreciate the opportunity, chair and honourable senators.
First, let me acknowledge and thank you for your service to the nation. It is, indeed, an honour for me to be here today. I thank you for the opportunity to appear before you to discuss the importance of AI on human rights and economic security in Canada, especially as it relates to susceptible groups and the international human right to work.
My name is Jahanzaib Ansari. I am the founder and CEO of Knockri, a Canadian organization that has been operating for a decade. Our mandate is very simple: to ensure that each and every single Canadian has a fair shot and an equal opportunity, and that no one is ever held back due to unfair gatekeepers from the opportunities they rightfully deserve.
Our platform is used by large and sophisticated organizations across the private and the public sectors to support hiring, promotions and training. Our solution is an AI-based solution, as I was sharing, which we started about 10 years ago now. Collectively, the organizations that use our solution represent a workforce of about 3 million individuals. That spans from Canada to Tajikistan all the way to Thailand, so we’re covering quite a large span of a workforce in 10 years now. It’s extremely interesting how I started this company.
About 12 years ago, I was applying to jobs, and I just wouldn’t hear back from a lot of employers. I’ve got a long ethnic name, Jahanzaib, and so I wouldn’t hear back. I was very frustrated at that juncture. So I anglicized my name. I went from Jason, Jordan to Jacob, and literally, in six weeks, I got a job.
I’m not here to gain any sympathy, but I feel like there are so many skilled Canadians who are probably being overlooked and there has to be a better solution to address biases and also create efficiencies in the hiring, promotion and training process. And so I came together with a machine-learning scientist and an industrial organizational psychologist to essentially create Knockri.
What we have created is a solution that helps organizations reduce bias in hiring, increase diversity on the basis of merit, improve efficiency and also delivers a great candidate experience. And unlike some of the other AI hype at the moment, we have started our company from the ground up with a lot of guardrails in mind and have really ensured that each and every single decision can be traced to how it was actually made.
So we have essentially created a glass box of an algorithm that is transparent, auditable and builds a lot of trust. My cofounder is Faisal Ahmed, a machine-learning scientist from the University of Toronto, and our chief scientist is Dr. David Mayers, who specializes in selections and assessments.
Esteemed senators, today, of course, AI has the potential to either reinforce existing inequities or to help reduce them. From my observation, spanning over a decade of working with our customers across the private and the public sectors, I would just like to make four brief points, and then we can open up the floor to any questions.
I do have to admit, AI is increasingly connected to the international human right to work, and that is, of course, what we are currently seeing out in the market right now and some of the challenges that come along with that. But when AI is trained responsibly, it can actually uphold a lot of those human rights, including the international right to work as well.
Number three, just through a lot of our customers, as it comes to the private and public sector and the Armed Forces, we feel that transparency is actually key, and accountability is non-negotiable. And so building out solutions, as I said, that are auditable and that you can actually open up and truly understand how a decision is being made is tremendously important.
Lastly, a lot of this is directly tied to economic security and our national competitiveness. There is, of course, a challenge of bias in algorithms; however, at the same time, we need to have regulation that will not slow us down too much; so adoption is not there anymore, and we’re lagging behind.
In conclusion, of course, AI is moving at a dizzying pace right now, but we have seen similar transitions in the past. And if guided responsibly, AI can expand opportunities, reduce biases, strengthen economic security, and of course, uphold human rights.
I thank you again for having the opportunity to have me here, and I look forward to your questions.

Thank you, Mr. Ansari.

Chair, deputy chair, honourable senators, thank you for the invitation to appear before you in my individual capacity. My name is Audrey Tang, and my perspective today is shaped by my current work as Taiwan’s Cyber Ambassador; also my fellowship at the Oxford Institute for Ethics in AI, and my experience as Taiwan’s first Digital Minister.
During my tenure in day-to-day government, our mandate was not just to make people trust technology, it was to make digital institutions worthy of people’s trust.
Your study asks how AI affects human rights, economic security, vulnerable groups and the international right to work. I would like to offer one frame: AI is more than automation. It redistributes attention, authority and bargaining power, and in a democracy, such redistribution must be visible, contestable and co-governed.
Taiwan’s success against AI-generated scam ads showed that democracies need not choose between technocratic control and platform inaction. Citizens deliberated on the balance between fraud prevention and freedom of expression, and the same principle applied to work-affected people should help set the rules before the systems harden into infrastructure.
The right to work in the age of AI must include three practical rights: the right to learn, the right to know and the right to contest.
The right to learn means training before displacement, not after. Work is more than income, it is apprenticeship, belonging, care and dignity. The right to know means that when AI affects hiring, scheduling, promotion, benefits, education or public services, people should know that it is being used, who is accountable and whose data is shaping the decision. A black box decision should not be treated as due process.
The right to contest means affected people can challenge outcomes without needing a degree in computer science. Appeals must lead to repairs, like corrections, compensation, policy changes or retiring the system altogether.
This matters most for people already made vulnerable by existing systems. I’m thinking of Indigenous communities, migrant workers, people with disabilities, children, seniors, racialized communities and those underrepresented in labour and skills data. AI must not become a new way to extract knowledge without consent, or to score people without context, or to make exclusion more efficient.
At Oxford, my work in Civic AI translates the ethics of care into six governance questions: Are we hearing those closest to harm? Is someone named and accountable? Does the system work in context? Do those affected have recourse? Does it build solidarity rather than vendor lock-in? And does it know when to stop?
For high-impact AI, democracies should require decision traces, independent audits, accessible appeals, public incident reporting, worker and community co-governance, sunset clauses and procurement rules that avoid lock-in.
A democratic system must be interruptible, possible to pause, override or retire without disrupting essential services that people depend on.
Inclusive prosperity is also democratic security. Canada and Taiwan are both free and open societies. And we know our adversaries are testing our seams of trust, but sovereignty is not solitude. It’s the way to protect people and co-operate without surrendering public judgment.
To the familiar agenda of protecting, empowering, and building, I would like to add one verb, co-governing. Protect people from harm, empower them with skills and knowledge, build trustworthy public infrastructure and co-govern AI with the workers, families, communities and future generations. We will live with those consequences.
A good enough ancestor does not ask whether the machines are ready to replace humans; a good enough ancestor asks how machines can help humans care for one another, deliberate together and keep faith with those not yet born.
No one should be automated out of agency. No community’s knowledge, language or labour should be treated as raw material without consent. No worker should have to negotiate alone with a black box.
Thank you. I welcome your questions.

Thank you so much, both of you, for your very engaging presentations. I’m sure there will be lots of questions for you. We will begin.

Thank you both for being here. It has been really interesting.
Audrey, I listened to you on a podcast, and I thought that the hardest job you have here today is to synthesize everything you’ve done, because you’ve done a lot.
First, have you written a book?

Yes. The book is public domain, freely available online at plurality.net.

Awesome, thank you.
You really live the co-governing model. I’m wondering if you could give us a concrete example in Taiwan that you implemented to really co-govern around AI.

Certainly.
As I previously mentioned, in 2024, we convened what is called an alignment assembly to respond to the harms in generative AI caused by the scammer and fraudulent ads online. As people know, the deepfakes in 2024 were very prevalent in all democracies, but as Asia’s most free — internet freedom — country, Taiwan simply cannot do top-down censorship. Therefore, we sent SMS text messages to 200,000 random numbers around Taiwan, asking what we should do together. People chimed in with their ideas, and we chose 447 people in a mini-public, statistically the same as the wider polity, in tables of 10. They deliberated. The only simple rule is that AI only facilitates, and they have to convince the nine other people at the same table before their idea bubbles up.
Long story short, we implemented a set of ideas that more than 85% of people agreed with in the mini-public and that the other 15% can live with. Those ideas have joint liability, mandatory “know your customer” and slowing down connections by foreign platforms that do not adhere to our liability rules. Throughout 2025, the impersonation or deepfake ads were down by more than 90%, so I think this conclusively showed that when people want to show up at a table, the idea is not to do top-down control but rather to invent a bigger table.
Thank you.

Thank you. It just takes a lot longer.

It takes a long afternoon. It’s what is called a deliberative poll. It runs as long as a rigorous poll runs, which is usually a day or a few days.

Second, I thought you did such a good job of explaining the differences between big AI and specific forms of AI; I don’t know what the words are — general-use AI — and then the specific ones. From an energy-use perspective, could you describe that to us, please? It comes up over and over.

Certainly.
Currently, in AI training, what is called a general-purpose large model needs to anticipate pretty much every use, from folding our proteins to folding our laundry, in the same model. In doing so, it’s incredibly energy inefficient to train. However, when we know what we want the model to do, for example, folding the proteins or folding the laundry, then we can train what is called a domain-specific model or a local model that incorporates the local community’s input in such a way that also protects their data from extraction to the clouds of foreign big tech companies.
The one idea I will share with you is that extractive part — the very energy consuming part — you can think of its data as oil. This kind of extraction goes to some large refinery somewhere, but the local way to train the small models — we can think of the data as soil. The local community tends to it together, fine-tune it and continuously train it so that whenever there’s a bias or an error, the course correction is immediate instead of waiting for the energy-consuming run that would take half a year or something.

Thank you.

Audrey may have answered some of my questions, but I wanted to pose one to Mr. Ansari.
You mentioned four points. You mentioned that when AI is trained responsibly, it can uphold the standards, I think. You also mentioned transparency and accountability being key.
I was wondering if you could expand on those, and tell us how those two things apply within your business. How do you ensure that AI is being trained responsibly, and how do you ensure you’ve got transparency and accountability?

Thank you so much for the question.
I will share at a high level, because I’m not super technical; however, I will give you a good understanding of the solution.
At our organization, we are not utilizing any large-language models, so we have a lot of autonomy as to how to train the data. When we started the company, we objectively had taken a look at a competency. What does a growth mindset mean, very objectively? What does collaboration mean? What does agility mean? Based on those, taking the exact definitions, we mapped the world of work with the success that correlates in the workforce.
So, if I’m going through the process, it’s not being trained on historical and biased data. A lot of the vendors out there would just see very generally — and it’s happened quite a lot in the industry — is that they have trained the models off historical data. If there are existing biases in that, they will leak into your algorithm, which will further create data.
We have not taken a look at historical data. As an example, if a subgroup at an organization, which might be, let’s say, South Asian and male, are the highest performers in these tech jobs, and we have trained the algorithm just based on that, that will actually create a lot of challenges when you assess a White female, or another group.
We learned extremely early on that we can’t actually do it in that manner. That is number one.
Number two is that we have a diverse set of raters, as well, from a lot of different backgrounds and areas of knowledge who have scored these candidates, as well, just to verify if there’s validity among the two.
We’ve had to conduct a lot of studies with universities and various organizations because it is a very litigious area in terms of trying to support any decisions like this. Then, as it comes to the federal public sector, it is going through the wringer of the algorithmic impact assessment and the privacy impact assessment, and really taking a look at how these decisions are being derived.
Going through a lengthier process with the ISED team and then the PCO team and then the PSC team, I would say it has been years of just ensuring that it is auditable. That’s number one. It’s very transparent. Second, it is trained in a very objective fashion, as well. I hope that’s helpful.

Thank you to both of the witnesses.
For both of the witnesses, could you talk about the way you emphasize transparency and democratic participation while also remaining cognizant of sharing human rights information and protecting human rights information, particularly in a jurisdiction like ours where there are 13 or 14 jurisdictional components, which I think is different from Taiwan, but I will happily stand corrected?

My knowledge is not extremely deep when it comes to some of the jurisdictions. However, I can say that a lot of the folks who have gone through — so if I were to give you an example, we had a situation where we had a woman from the Black community. She was being overlooked for the longest time. She was in line for a promotion. She was just stuck there for about five years. They brought us in as part of a process and, after five years, she actually ended up getting promoted. She moved up and it showed us that when you build a solution that levels the playing field and actually assesses individuals based on merit, it upholds a lot of basic human rights, like that of having fair and equal opportunity and advancing in a career in a dignified fashion.
On top of that, senator, we don’t hold any information. We would delete all the data because it is not helpful for us anyway. Our function is to ask how we can help these individuals have a fair shot and an equal opportunity and, at the same time, give them the agency of not being tied down to an organization storing any of their data either.

If I may, I would like to make the distinction between data coalitions, that is to say people pooling data in a way that is useful to all of the members, versus the aggregation of data. It is possible for multiple players, stakeholders or communities to join a data coalition without sharing any of the raw data. There exists a kind of technology called zero-knowledge technology that allows people to prove that they can do something, or they are in possession of this knowledge, or that this community can respond to this kind of query, without revealing any of the personal, identifiable information underneath.
During the pandemic, in Taiwan, we used a privacy-preserving contact-tracing method. Basically, a venue prints a random number on the QR code on the front door, a person scans it and sends it to a well-known number, 1922, but the telecom knows nothing about what this random number means, and the venue learns nothing — not even the phone number of the visitor — and the state learns nothing whatsoever. If an infection happens, we can do contact tracing and use recursive notification, again, without sacrificing any of the privacy of the people who are not in the affected area. I hope this illustration shows a little bit of the flavour of how a zero-knowledge data-knowledge sharing arrangement can actually work.

Thank you to the presenters for your work. In the 2024 UN report Governing AI for Humanity, it states:
There is, today, a global governance deficit with respect to AI. Despite much discussion of ethics and principles, the patchwork of norms and institutions is still nascent and full of gaps. Accountability is often notable for its absence, including for deploying non-explainable AI systems that impact others.

It further states:
The development, deployment and use of such a technology cannot be left to the whims of markets alone. National governments and regional organizations will be crucial, but the very nature of the technology itself – transboundary in structure and application – necessitates a global approach.

They outlined guiding principles, and these principles acknowledge that AI governance does not take place in a vacuum and that international law, especially international human rights law, applies in relation to AI. Can you both speak to that?

I can start from my experience, of course, and then I will let my esteemed colleague take it afterward.
Senator McCallum, what we’ve seen in the interim of a standardized approach to regulation internationally as it comes to human rights is that a lot of the large organizations actually have ethics boards. They have these committees where each and every single solution is deeply analyzed and it is assessed. If there is any chance that it will create an adverse impact to any subgroup, that solution does not advance.
From my purview, I’ve yet to see something globally that has been adopted, of course, but a lot of the organizations have specific and concrete mandates. As I was saying, it is very litigious. A lot of companies have been sued already because these algorithms have created biases. That’s number one.
Number two is that, as it comes to Canada, as part of the federal mandate, we, as vendors or other algorithms, have to go through these algorithmic impact assessments. It is based on risk criteria regarding how critical your solution is when it comes to making a decision.
If it is used as part of the airforce and you are targeting a building, for example, that’s extremely high risk. That decision process has to be extremely well understood because it can create a lot of challenges. Our solution is very low risk because we are just doing decision support. There is always a human who will be reviewing it at the end of the day.
Regarding what I’ve witnessed internationally, there isn’t set guidance at the moment. The EU has an act and some of the American states have an act. However, Trump was trying to get rid of those. So we’ll see how that plays out.
I would say that, right now, the best plan of action, at least in Canada, is working with folks like Mark Schaan from ISED and understanding some of the regulations that they are setting out and making sure that every single vendor is actually legitimate.

If I may, I think the report’s diagnosis is right, that a patchwork of principles without enforceable duties would not govern AI, which is intrinsically a global phenomenon in technology. But I think democracies, including especially middle powers, should build interoperable governance, so it is not identical governance that applies everywhere the same globally, but rather auditing standards; instant reporting standards; provenance for synthetic media, as we just mentioned; and procurement requirements that avoid any kind of vendor lock-in. All these are like stacks that can be made to work across jurisdictions without harmonizing every domestic rule. I think Taiwan and Canada, as free and open societies, can be peers in that work.
The global governance deficit, I believe, will not be closed by another universal principle, but can be closed by enforceable duties that make principles contestable in each and every domain.

Thank you.

I’m sitting here completely blown away by both of your responses to questions. I don’t think we’ve heard such responses in the past, so thank you so much.
My question for you, Mr. Ansari, is: In light of the solutions to bias-free decisions that Knockri is working on, I’m wondering what the take-up is on the work or the benefits that you offer? And not the name, but what would a typical client look like in terms of the work you are doing?

As it particularly comes to our solution, I would say federal government departments, the air force, a lot of the Armed Forces. Then, in the private sector, large organizations as it comes to tech companies, consulting companies, banks, insurance sector, education sector, et cetera.
The value, of course, has been — it is from organization to organization, right? Some organizations have a mandate of increasing gender and racial diversity in short lists of candidates, of course. It is a very specific focus over there on how we can take a look at, firstly, the pipeline of talent and truly understand why certain individuals are not being screened through. Because what we have seen is that, similar to my experience, human intelligence is great, but there are challenges there as well. We are not perfect creatures, either.
So I would say our solution is used to increase gender and racial diversity in shortlisted candidates. A lot of organizations are utilizing it for efficiency now. Because you apply for a job, and it takes so long for some people to hear back. So this is allowing a lot of recruiters and HR teams to focus on the higher-value work of in-person conversations, ideally, and let the tedious work, where the biases can actually creep in, be handled by automation at that stage.
That’s what we have seen with a lot of our customers. They’ve been able to save a significant amount of time and cost, as it comes to that.
Recently, chair, I would say the solution is being utilized by the air force to create training efficiencies.
As I was saying, it is directly tied to economic security and national competitiveness. We are seeing that Canada is facing a lot of challenges when it comes to productivity and efficiency, and we all know that.
In addition, if I may just make this point, a lot of young Canadians are leaving the country, which is not a good sign. As an example, one of my childhood friends went to Florida. There are highly qualified individuals who are leaving the country, but we also have very qualified individuals who are here locally as well who don’t have the right jobs.
So we’re seeing it now being utilized to upskill individuals. It is being utilized in universities to upskill students and to make sure that they are matched with the correct competencies that the organizations are looking for. So their path to employment can be a lot faster as well.
There are a wide variety of things that it touches. Of course, as it comes to adoption, I will say that ChatGPT, of course, has accelerated it tremendously. With adoption, initially, there was a lot of fear around the technology and then the education came on some of the challenges and biases. Now I feel like a lot of the adoption is actually happening now.

Thank you so much.
Audrey, I don’t have a lot of time left, but I’m really intrigued by the governance model you talked about. You mentioned the importance of the people affected should be helping to set the rules, for example. You also mentioned that it is possible to be using AI in a way that is not about replacing but helping us to relate with each other and to help each other. Could you talk a little bit about that as well? I’m also interested in the take-up of the approach that you are using whether that be in Taiwan or in other jurisdictions.

Thank you. I would like to first make a distinction between an AI that automates intelligence — sometimes called “authoritarian intelligence” — that makes decisions on behalf of people. For example, 10 years ago on social media, many people felt their agency had been taken away, because previously, when we followed the same people, we would see the same feed. But 10 years ago, it was replaced by a very judgmental AI that personalized our feed and encouraged “engagement through enragement.” That’s very authoritarian.
In Taiwan, we call it “assistive intelligence,” a different kind of AI that assists the cross-conversation between people who would otherwise not agree. Instead of a wildfire that consumes the oxygen between people, think of it as a campfire that people sit around, and it illuminates the faces of people who are different from you. But still each campfire is tended by a bounded set of people, like 10 people or 100 people and so on, like a larger bonfire.
In Taiwan, we have designed this kind of prosocial media, such as Polis, which is an open-source technology that is being used by I think more than a dozen countries worldwide, including Canada. The idea is, instead of making the outrage viral, Polis makes the overlap viral. In order to be viral on this prosocial media, it highlights only the ideas that people who would otherwise never agree, agree on. So only the bridge-makers gain virality, and in doing so, people heal their polarization and so on.
Our demonstration was successful enough that even on traditional social media, like x.com — previously Twitter — they have now adopted a similar algorithm called Community Notes, which lets people who bridge across different ideologies write in there to add context to any viral misinformation or disinformation or just contentious information.
Now we work with all major social media companies on communicating those implementations and also collaborative notes, which are notes drafted by AI and then instantly corrected by humans so that AI can learn what can translate across communities, like between the climate justice community on one side and biblical creation care on the other, so they can translate across their vocabularies. I hope that illustrates a little bit of the bridging potential of such language model technologies.

Thank. That’s very interesting.

Thank you for joining us. Audrey, can you tell us if the minister of digital affairs in Taiwan has collaborated with the minister of agriculture to drive digital transformation in agriculture? On the previous panel, we heard from Farm Credit Canada, and they talked about the Canadian context of AI integration with agriculture. Can you provide insights during your time as minister of digital affairs on any intersection between the two ministries in Taiwan?

Definitely. We indeed worked with the new ministry of agriculture during my tenure. Both ministries were set up, more or less, in one year’s time. In Taiwan, of course, we also face the issue that there are less young people in the agricultural sector, and the professional know-how that they have in one particular domain does not readily translate to the other domain.
Mostly, we worked with the ministry of agriculture to make sure that environmental sensing was not limited to one production facility, ensuring the long-term trend of what crops to plant, how to hydrate, how to make sure that the customer relationship as well as the online relationship management, and so on. We have a program called TCloud, where each of the small- and medium-sized enterprises, some of which are in the agriculture sector, can choose between thousands of vendors.
The thing here is that we implement the same transparency and data portability and the freedom to move between different vendors so that the data stays with the agricultural operators so that if one prediction model or one vendor does not fit their particular circumstances any more, they are then free to shift to another vendor.
As an incentive for a vendor to join the program, we offered vendors up to 80% subsidy from the government to help them to bootstrap, especially the small- and medium-sized start-up vendors that produce software for the agricultural sector to introduce them to the agricultural sector. So the state subsidy here is not on one national champion or regional champion, but rather, to the idea of interoperability and the data sovereignty and ownership in the agricultural sector, so that they can collaboratively train their sector-specific models.
Now we are seeing the idea of data coalitions being taken up by other models as well. For example, the financial sector just announced that the banks and insurance companies are now using the same idea to train their sector-specific models without sacrificing their data sovereignty and ownership.

That’s fantastic. You mentioned that in Taiwan, you have an aging producer and farmer situation. Here in Canada, we are faced with that, and we are seeing a consolidation of farms being driven by the need to have economies of scale to stay in the business, have profitability and be able to survive in it.
What are the barriers in Taiwan to young people joining agriculture? How do you see AI possibly better facilitating or attracting them? We heard from our Farm Credit Canada folks about the reduction of risk through better predictability. Can you expand on your opinion on that?

Definitely. I would say that better predictability is a large part of it. In any kind of work, including agriculture, we must consider the “ABCD” that I just mentioned: apprenticeship, belonging, the idea that we can care for our communities, as well as dignity. These are equally important.
As we just mentioned, AI systems that help intergenerational solidarity, the ability for people’s know-how in one context to transmit to the other without sacrificing their local wisdom, without aggregating them as oil refineries or as soil: I think this is also very important.
Also, capital and land access, the difficulty of inheriting tacit knowledge from one generation to the next, the traditional dependence on a small number of intermediaries for market access, and so on, are solved not by even more consolidation but rather by small operators reaching cooperative scale through shared digital infrastructure. For example, our drone agricultural service platform brings together 90 small operators who share equipment, certify pilots and compliance records. None of them could individually afford the equipment or meet the regulatory burden, but together they reach this horizontal scaling that, previously, only large agribusinesses could. I would offer that the state’s role is not to pick national champions but to subsidize the freedom to choose. Thank you.

You can answer this question in writing if you want. How can the federal government best support Indigenous data sovereignty? Audrey?

Yes. Taiwan has 16 Indigenous nations and more than 42 language variations. So we see the cultural and also transcultural — the ability to translate across culture — sovereignty as very important. When we say “sovereign AI” in Taiwan, we don’t mean just a national Taiwan model that speaks Mandarin, Taigi, Hakka and other Indigenous languages. We mean a reproducible process for the language communities to own their social and cultural composition of the data that’s curated within those language communities, as well as alignment assemblies, the ways for people to draw boundaries around how AI should enter their community almost like a code of conduct for AI agents.
These two, together, allow each community to feel that they own their own social-cultural determination when it comes to language model training, but also incorporate that transcultural translation capability so that when one language or culture gains a certain capability, one can readily cherrypick it into the community if the community so feels, but the agency and sovereignty are held by the Indigenous community, not by the top-down national commission. I hope I’m making sense. I am happy to add more in writing, but that is the general idea.

In writing would be excellent. Thank you so much.

Mr. Ansari, did you want to add anything?

There is one last point I would like to add because I feel the conversation about transparency was somewhat overlooked. We’ve observed, actually, that individuals, especially from groups that are susceptible, really appreciate when you share why you’re using an AI solution. That’s because there is a lot of fear around it generally, and we’ve seen that, as soon as we start to communicate that, we are helping to ensure that each and every single person has a fair shot and an equal opportunity, and that has changed the entire dynamic of the conversation. So if you were to employ some AI solutions, I feel like it shouldn’t just be software. You need this behavioural and human understanding to also get them to understand why it’s being done and what the benefits are. Otherwise, they’re going to be very scared about it, and they will be very untrusting of the public sector.

Thank you very much. Thank you for that last point.
I feel hopeful after hearing both of you, and your responses were incredible. To be hearing things like sovereignty is not solitude and talk about inclusive prosperity, and how to use AI in a way that actually reduces barriers and biases, I think. I can speak for myself; I’ve been hoping to hear that. So thank you very much.
This brings us to the end of our panel and thank you both for contributing to this work that we’re doing. It’s been very helpful.