Your data rights may expose more privacy risk than you realize

In a world where a handful of companies quietly collect and sell your personal signals, California’s data broker registry reads like a public ledger of a hidden economy. The registry promises transparency, but what happens when you actually try to see what a broker knows about you? That question sits at the heart of a fresh study from the University of California, Irvine, led by Elina van Kempen with colleagues Isita Bagayatkar, Pavel Frolikov, Chloe Georgiou, and Gene Tsudik. It’s a surprisingly human project: researchers acting as ordinary residents, trying to claim the right to know what data brokers hold about them. The answer is as much about the friction of rights as it is about the data itself.

The study set out to test a basic premise enshrined in the California Consumer Privacy Act (CCPA): consumers should be able to request access to their data—Verifiable Consumer Requests, or VCRs—and receive a usable copy of what brokers know. The twist is that this right exists in a world where brokers are often invisible, operate with opaque processes, and sometimes seem barely accountable to a law that should empower individuals. By attempting to exercise these rights against all 543 data brokers registered in California, the UC Irvine team shines a stark light on how the system actually behaves when the rubber meets the road. And the results are as instructive as they are unsettling: a large fraction of brokers don’t respond at all, the verification steps can demand new kinds of personal information, and the delivery of data—when it happens—varies wildly in format and quality.

What makes the findings especially urgent is who is behind the work. The UC Irvine team, anchored by van Kempen, treats the registry as a real-world stress test for privacy protections, not a theoretical exercise. The study’s depth comes from its scale (all 543 registered brokers), its method (systematically submitting VCRs through each broker’s own channels), and its insistence on reporting every nuance—from time-to-submit to the exact PII elements requested for identity verification. It’s the kind of work that makes you rethink what a “consumer right” looks like in practice, not just on paper.

The unseen data economy that personalizes your life

Data brokers operate like a shadowy library, one that doesn’t require you to borrow a book so much as to publish a life. They collect fragments from public records, social feeds, payments, app activity, and even other brokers, then stitch them into dossiers that can be sold to advertisers, insurers, recruiters, or anyone else willing to pay. The end product is not a single profile so much as a mosaic of inferences: spending habits, health indicators, or even the likelihood you might move to a different neighborhood. The paper makes clear a crucial distinction: unlike banks or credit agencies with whom you have at least a relationship, data brokers often accumulate PI about you without your consent or even your knowledge. The result is a marketplace of data about you that you didn’t authorize and may not fully understand.

That matters because privacy isn’t simply about keeping information secret. It’s about who gets to see you as you actually are, and how those views of you shape decisions about credit, employment, insurance, and even social opportunities. If a broker’s data or inferences reach buyers without your awareness, you can be judged, priced, or steered in ways you never anticipated. The paper notes that many data brokers operate without a direct consumer relationship, which means most people only discover they exist when a third party uses eerily well-tailored information to reach them, often via targeted ads or pre-screened offers.

In the study’s own words, exercising a privacy right should feel empowering, but the process can paradoxically create new privacy risks. The authors emphasize that rights to know or delete are meaningful only if the path to exercise them is clear, standardized, and trustworthy. Without that foundation, the very act of trying to protect oneself can expose more data than it reveals, a problem the paper frames as a privacy paradox that policymakers and regulators must urgently address.

A thorough audit uncovers the cracks in compliance

The heart of the paper lies in an audacious experiment: submitting VCRs to every California-registered data broker and watching what happened. After an institutional review board determined the work did not constitute human-subjects research, the authors went to work with a single California resident as their stand‑in consumer. They navigated each broker’s preferred submission method, whether a form, an email, or, in a few stubborn cases, a phone line. The goal was to measure not just whether brokers would answer, but how long they’d take, what information they required to verify identity, and what data, if any, they would hand over in response.

The scale is striking: after filtering out duplicates, inaccessible sites, and brokers lacking privacy policies, the study focused on 454 brokers, about 84% of the total in the registry. The verdict was sobering: only about 57% of them responded to the VCRs at all; the remaining 43% never responded within the study period. In other words, nearly half of the brokers registered to do business in California either ignored the requests outright or failed to operate within the law’s timeline. The 45‑day clock mandated by the CCPA is supposed to set a firm boundary for responses; yet some brokers stretched beyond that window, while others never replied in any meaningful way.
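The deadline arithmetic behind that 45‑day clock is simple enough to sketch. The snippet below is a hypothetical illustration (not the study’s actual tooling) of how one might track whether a broker’s response arrived within the CCPA window; the function names and the optional 45‑day extension flag are assumptions for the example.

```python
from datetime import date, timedelta
from typing import Optional

# The CCPA gives businesses 45 calendar days to respond to a verifiable
# consumer request, extendable once by a further 45 days with notice.
CCPA_RESPONSE_WINDOW = timedelta(days=45)

def response_deadline(submitted: date, extended: bool = False) -> date:
    """Date by which a broker must respond under the CCPA."""
    window = CCPA_RESPONSE_WINDOW * (2 if extended else 1)
    return submitted + window

def overdue(submitted: date, responded: Optional[date], today: date,
            extended: bool = False) -> bool:
    """True if the response arrived late, or hasn't arrived by `today`."""
    deadline = response_deadline(submitted, extended)
    effective = responded if responded is not None else today
    return effective > deadline

# A VCR submitted on 2024-01-10 with no response by 2024-03-01:
print(response_deadline(date(2024, 1, 10)))                 # 2024-02-24
print(overdue(date(2024, 1, 10), None, date(2024, 3, 1)))   # True
```

Run against a log of submission and response dates, a tracker like this makes the study’s headline numbers (responded on time, responded late, never responded) straightforward to tally.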

When brokers did respond, the nature of that response varied wildly. A majority of responding brokers stated that they held no PI on the researcher—an answer that is technically compliant, provided it is truthful. Yet a substantial minority did provide information, some of it highly sensitive. The study found 22 brokers that supplied PI about the researcher, delivered through channels ranging from postal mail to OneTrust portals and direct website downloads. The content spanned a spectrum from publicly available professional data to far more sensitive items such as credit reports and car insurance records. Delivery formats were mostly PDFs, CSVs, or downloadable links, with one broker sending a protected postal mail packet containing sensitive data. When PI was provided, it often arrived with caveats and open questions about the data’s accuracy.

The methodology also revealed a landscape of operational chaos that undermines user trust. There was no universal, standardized process for submitting VCRs. Some brokers used OneTrust—the popular compliance platform—so forms looked familiar but were not interchangeable. Others relied on bespoke forms, or simply a plain email. A handful of brokers required a notarized affidavit or a chain of identity checks that could feel like a gauntlet, not a safeguard. The study even recorded cases where forms were broken or links led nowhere, forcing researchers to improvise by email or phone. The hands-on effort totaled nearly 10 hours spent submitting thousands of data points to 454 brokers, a reminder that rights without a streamlined path are frustrating at best and dangerous at worst when you’re asked to disclose sensitive identifiers to prove you exist.

Behind the scenes, the data reveal a surprising rigidity around the verification step. In most forms, the requester’s identity had to be verified using a set of data points—often just an email and a name, sometimes a home address or a phone number, and in the more stringent cases, a signature under oath or a government ID. The study quantified these patterns: 95% of brokers asked for an email address and a name; 42% asked for a home address; about a quarter asked for a phone number. A handful demanded highly sensitive items such as a full SSN or a government-issued ID. In many cases, OneTrust-powered VCR forms required additional steps such as CAPTCHA challenges and email confirmations before the data portal could even be accessed. The result is a patchwork of submission experiences that makes the right seem less like a right and more like a scavenger hunt for your own information.

The audit also captured a practical, almost human irony: to obtain data about you, you must first surrender data to prove you are you. The study cataloged this privacy paradox in stark terms. In some instances, brokers asked for information they did not themselves hold, or could not verify the accuracy of what was submitted. In others, responses claimed the researcher was not a California resident—despite the request being filed from a California IP address and context. The mismatch between the law’s intent and the real-world workflow is not a minor flaw; it’s a structural fault line in how privacy rights function on the ground.

The privacy paradox and what to do about it

Rethinking privacy rights means reckoning with the ways even well-meaning processes can backfire. The study documents an acute version of this: the act of exercising one’s rights can itself create new exposure. The researcher was compelled to share PII with hundreds of brokers—some of it highly sensitive—before even seeing what data might exist about them. In a few cases, brokers claimed they had no PI about the researcher, even after receiving the request and the verification information. In others, they mailed the PI to a home address, a tactic that seems to minimize online leakage but expands the exposure surface to include the researcher’s postal address and anyone else in the household. The authors call this phenomenon a privacy paradox because the mechanism designed to protect individuals ends up requiring them to reveal more private information to opaque entities that may not meet basic accountability standards.

The paper’s conclusions are not merely diagnostic; they’re prescriptive. The authors argue for standardized, transparent, and auditable procedures for all VCR submissions. They call for a clear cap on what data brokers may request to verify identity, and for enforceable, routine third-party audits to ensure compliance. They also urge the CPPA and similar agencies to publish a public grievances mechanism, so people can reliably flag non-responsive brokers and push for redress. In short, the registry should function as a lever for accountability, not a ritual that checks a box while leaving a room full of unanswered questions behind it.

What does this mean for you, the reader who might someday request their data? It suggests a cautious, informed path: expect a nonuniform process, brace for requests that feel intrusive, and still push for standardized, rights-centered design in the regulatory regime. It also nudges policymakers toward concrete reforms—standardized submission channels, clearer rules around identity verification, and regular, independent checks that brokers actually honor requests on time and with usable data. The study doesn’t pretend to have all the answers, but it does offer a roadmap for turning a political promise into a practical protection.

For the record, the UC Irvine team makes no bones about the human dimension of the work: the registry exists to empower ordinary people, yet the study shows that the channel to empowerment can feel like a bureaucratic obstacle course. Van Kempen and her coauthors model a way forward that centers both accountability and user experience. It’s a reminder that privacy technology isn’t only about how data is collected and sold, but about how laws translate into everyday interactions—how people can actually stand up and say, with clarity and confidence, “this is mine.”

In the end, the study taken as a whole is a call to action. Data brokers should not merely register; they should demonstrate, in public, that they can handle requests with integrity and speed. California’s registry can be a model—if it is coupled with real enforcement, standardization, and transparent reporting. The paper’s authors hope for a future where exercising your rights doesn’t require negotiating through dozens of staggered interfaces, where your data is delivered in a portable, readable format, and where sensitive information isn’t requisitioned as a toll to see what’s already yours. It’s a future that remains within reach, provided policymakers, brokers, and consumers alike insist on clearer standards, robust audits, and a shared commitment to privacy that survives the first test of reality.

The study, by UC Irvine researchers Elina van Kempen (lead author), Isita Bagayatkar, Pavel Frolikov, Chloe Georgiou, and Gene Tsudik, examines the California Data Broker Registry and the compliance of all 543 registered data brokers with the CCPA right to know. The work underscores that protecting privacy is an ongoing practice—one that requires constant attention, better design, and stronger oversight so the right to know becomes a real, usable right rather than a bureaucratic gauntlet.