The urge to harness artificial intelligence for social impact can feel like a modern magic trick: a dataset, a clever model, and a faster, louder claim that we can fix a stubborn injustice. Yet the human trafficking (HT) landscape reveals a profound risk: treating a tangled social wound as if it were merely a solvable data problem tends to misread pain, replicate power imbalances, and in some cases deepen harm. When people who have survived abuse are reduced to signals in a classifier, the machine learns not compassion but category mistakes. The result can be surveillance that harms the very communities it intends to protect.
In Montreal, a team from McGill University and Mila, the Quebec AI Institute, decided to flip the script. They asked not how to build a better detector, but whether they should build at all. Their answer is Radical Questioning (RQ), a five-step upstream ethics framework meant to intervene before design begins. The project is a collaboration among McGill and Mila researchers, with leadership shared by Pratheeksha Nair and Reihaneh Rabbany, and contributors including Gabriel Lefebvre, Sophia Garrel, and Maryam Molamohammadi in Montreal. The aim is to create space for a deliberative, relational form of responsibility that values survivor wellbeing, community voices, and the political realities that data alone cannot capture. The authors do not promise a universal recipe; they offer a disciplined way to pause, question, and reframe what counts as a problem worth solving with AI.
What makes this approach feel urgent is not that it rejects ethics, but that it relocates ethics to the very start of a project. Traditional ethics in AI often comes after feasibility is settled, functioning like a gatekeeper checking boxes once a design is already on the drawing board. Radical Questioning, by design, challenges that order. It treats the question of whether to intervene as a political and moral inquiry as much as a technical one. If the answer is no, the framework asks you to walk away. If the answer is yes, it pushes you to reimagine what a helpful intervention would look like. In HT, where the stakes involve safety, dignity, and autonomy, this upstream moment can be the difference between harm and healing.
Upstream ethics becomes not a ritual but a reflex, a habit of mind that works through power, consent, and legitimacy before any code is written. And because the framework was crafted in the crucible of survivor-led collaboration, it centers relational accountability over abstract principles. The result, the authors argue, is a shift from chasing detection to honoring survivor autonomy, from surveillance to support. What follows is a map of their five radical questions and what they look like in practice, illustrated by the HT case and enabled by a community of researchers committed to survivor-centered design.
Rethinking AI for Good in HT
Technology has a habit of narrating social problems as if they were single variables waiting for a cleaner data signal. In human trafficking, the canonical framing has often been that of a data problem in need of a clever classifier: scan ads, connect dots, predict where trafficking occurs, and intervene through law enforcement or NGO programs. But the authors show how this lens can flatten lived experience and obscure the social, legal, and cultural textures that sustain exploitation. The paper frames AI tools as potentially powerful but morally dangerous instruments when they are deployed without a deep, critical look at who gains and who pays the cost. The argument is not anti-technology; it is against the quick fix dressed up as reform.
The Radical Questioning framework is not a substitute for ethics; it is an upstream companion that asks the harder question first: should we even build this tool at all? If the answer is yes, it then asks a set of structured, deliberative questions designed to surface assumptions, map power, and anticipate harms before any design work begins. The authors describe RQ as a five-step process that can be generalized beyond HT, though its questions must be tailored to each context. In their view, the framework cultivates ethics as a daily forma mentis for developers, researchers, policymakers, and survivors alike. The aim is not to produce a better product but to create space for deliberate, relational responsibility that traps neither intention nor outcome in a one-dimensional story.
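For readers who think in schemas, the scaffold can be pictured as nothing more than five open questions that must be collectively deliberated before any build is sanctioned. The sketch below is a purely illustrative rendering of that structure, assuming paraphrased question wording and invented field and function names; the paper describes a deliberative practice, not software.

```python
from dataclasses import dataclass, field

@dataclass
class RQStep:
    """One step of the Radical Questioning scaffold (wording paraphrased, fields invented)."""
    question: str
    deliberated_with: list[str] = field(default_factory=list)  # e.g. survivors, frontline NGOs
    notes: str = ""
    resolved: bool = False

# The five upstream questions, paraphrased from the framework's description.
RQ_SCAFFOLD = [
    RQStep("What is the scope of the problem, and who gets to define it?"),
    RQStep("Who are the stakeholders, and who holds real decision-making power?"),
    RQStep("What contextual nuances complicate definitions of consent, justice, and exploitation?"),
    RQStep("Which ethical concerns (accountability, privacy, fairness, legitimacy) are in play?"),
    RQStep("What does deliberative, trauma-informed feedback demand: continue, reframe, or stop?"),
]

def build_is_sanctioned(scaffold: list[RQStep]) -> bool:
    """Design work begins only if every question has been deliberated and resolved."""
    return all(step.resolved for step in scaffold)
```

The point of the toy version is the ordering it enforces: the questions gate the build, not the other way around.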
What makes this approach feel different is its insistence on pause as a design principle. The researchers argue that the path from problem framing to product can inadvertently reproduce harm if it proceeds without scrutiny of social meanings, power distributions, and potential for retraumatization. HT is a hyper-contested space where definitions of exploitation shift with laws, cultures, and individual stories. RQ invites teams to surface those tensions early, to test whether an AI intervention is legitimate at all, and to reframe success in terms of survivor empowerment, safety, and dignity rather than mere detection rates. This is ethical design as a stubbornly human practice, not a sterile compliance exercise.
The Five Steps in Action
Step 1 asks a deceptively simple question: what is the scope of the problem, and who gets to define it? In HT, the conventional framing often reduces the problem to online escort ads and automated detection. The authors push back by asking why the problem is defined in this particular way, who benefits from this framing, and whether AI is even the right tool for the job. The critical takeaway is that the initial problem statement is not neutral: it encodes power relations between surveillance systems, criminal justice, service providers, and the people most affected. In practice, this step can halt a project before a line of code is written, by revealing that a narrow computational objective, taken at face value, may reproduce gendered and racialized harms.
Step 2 turns the lens to stakeholders. AI work tends to privilege institutional voices (police, funders, researchers) while marginalizing survivors and frontline organizations. The HT case shows how survivor groups and sex worker organizations can radically reshape design objectives when given real decision-making power. Engaging survivors directly, not merely as informants or proxies, changes what success looks like. It also surfaces practical challenges, such as building trust in contexts where communities have reason to distrust institutions. The emphasis on inclusive participation is not a courtesy; it is a design constraint that keeps harm from hiding in the gaps between data and policy.
Step 3 digs into contextual nuance. Even with broad participation, there is no universal definition of consent, justice, or exploitation. Some survivors want different kinds of help, others fear the criminal justice system more than their traffickers, and many definitions shift with local legal cultures. Importantly, this step warns against simplistic risk-factor lists that can fuel biased surveillance. By foregrounding nuance, RQ reframes success not as catching traffickers but as enabling survivors to pursue justice on their own terms. The framework thus guards against the chilling effect that AI surveillance can produce, particularly in communities already vulnerable to policing and stigma.
Step 4 maps ethical concerns. Accountability, privacy, fairness, legitimacy, and incentives all come into play, but not in a vacuum. The paper highlights how ground truths about exploitation are legal and social constructions that evolve with time. AI systems trained on fixed notions risk entrenching those biases, while generic legal compliance does not guarantee legitimacy in the court of public trust. The juxtaposition of law and social legitimacy matters: a tool could be lawful yet deeply unwelcome, or privacy rules could permit uses that communities still experience as harmful. This step invites teams to specify who bears responsibility for what and to interrogate whether simple compliance is enough to ground a trusted intervention.
Step 5 is about iteration with feedback, but not in a perfunctory way. Feedback must be deliberative, trauma-informed, and capable of halting a project if harms outweigh benefits. The HT team conducted survivor-led workshops and set up an advisory board to oversee ongoing development. The pivot from an automated detection tool to a survivor-centered evidence management platform illustrates what it means to let critique steer a project. It is not a graceful tweak but a reimagining of the product itself: a pivot away from surveillance and toward empowerment, away from detection and toward documentation in service of survivor autonomy. In short, RQ trains teams to listen first and to be prepared to walk away when necessary.
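To make Step 5 concrete for technically minded readers, the posture it describes, iterating with feedback while keeping the option to stop alive, can be caricatured as a small control loop. Everything below (the type names, the outcome labels, the crude harm-versus-benefit comparison) is a hypothetical illustration of that behavior under stated assumptions, not code from the McGill and Mila project.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    CONTINUE = "continue development"
    REFRAME = "pivot the intervention"
    STOP = "walk away"

@dataclass
class FeedbackRound:
    """One deliberative, trauma-informed feedback round (hypothetical fields)."""
    harms_raised: list[str]
    benefits_observed: list[str]
    advisory_board_endorses: bool

def deliberate(feedback: FeedbackRound) -> Decision:
    """A deliberately crude rule: a withheld endorsement ends the project,
    and harms that outweigh benefits force a reframing rather than a release."""
    if not feedback.advisory_board_endorses:
        return Decision.STOP
    if len(feedback.harms_raised) > len(feedback.benefits_observed):
        return Decision.REFRAME
    return Decision.CONTINUE
```

The real framework is, of course, a human deliberation rather than a function, but even a toy version makes the key property visible: stopping is a first-class outcome, not an exception path.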
Why This Matters and Where It Goes Next
The most striking implication of Radical Questioning is not just a different product but a different posture toward AI research. In HT, the team demonstrates that the right question at the outset can prevent harm and reveal humane pathways that a rushed project would miss. The final outcome in their case is a survivor-centered evidence management tool rather than a blunt detector of trafficking. That shift embodies a larger truth: ethics is not a garnish for success but a condition for legitimate innovation. If the aim is to serve communities that have been marginalized or criminalized, the design space must start with the people most affected and with a candid assessment of who is empowered by, and who is harmed by, a given intervention.
Although HT provided the proving ground, the authors insist that the five-step structure is domain-general. The same upstream deliberations could guide AI work in child welfare, housing, or predictive policing, domains where definitions are contested and data risks amplifying systemic injustices. The idea is not to eliminate technology but to put a brake on the techno-solutionism that treats social problems as simply solvable with better models. By insisting on relational accountability and on the option to stop, RQ reframes ethics as an ongoing, collaborative practice rather than a one-off risk assessment.
There are real caveats. RQ is not a plug-and-play checklist. It is a scaffold that requires time, trust, and access to diverse voices, all of which can be hard to secure in fast-moving settings or in environments resistant to survivor participation. The authors acknowledge that the framework depends on the openness of the development team to critique, and on institutions willing to fund genuine engagement rather than performative consultation. Yet the core ambition remains compelling: design that begins with the question of whether we should build at all, and only then moves on to how we might build in ways that honor and protect those most at risk.
From a broader vantage, Radical Questioning invites a cultural shift inside research communities. It encourages us to see ethics not as a hurdle to clear after a prototype exists, but as a living conversation that travels with the project from dawn to deployment. It also raises practical questions for policymakers and funders: should grants reward not only measurable impact but also the courage to decline projects that would cause more harm than good? Should survivor advisory boards become a standard feature of high-stakes AI initiatives? The Montreal-based team suggests yes, and their work provides a concrete blueprint for making that a reality.
In the end, the paper is more than a framework; it is a manifesto for responsibility in an era when AI can move faster than our institutions. It asks teams to be meticulous about motives, to respect the complexities of human lives, and to accept that sometimes the responsible choice is not to build at all. The five steps form a loop of reflection that can outpace hype, turning technical prowess into social prudence. And crucially, it locates accountability in human relationships—survivors, communities, researchers—rather than in abstract metrics alone.
The Radical Questioning framework thus stands as a reminder that the most transformative AI projects may be the ones that decide not to rush to a product, but to slow down, listen, and reimagine what it means to help. In a world hungry for scalable solutions, that pause can be the rarest and most humane form of progress. The study points to Montreal as a hub where rigorous ethics and compassionate design intersect, offering a hopeful path for AI that earns its trust by asking the right radical questions first.