Press Release
Streaming Television Industry Conducting Vast Surveillance of Viewers, Targeting Them with Manipulative AI-driven Ad Tactics, Says New Report
Digital Privacy and Consumer Protection Group Calls on FTC, FCC and California Regulators to Investigate Connected TV Practices
Contact: Jeff Chester, 202-494-7100, jeff@democraticmedia.org

October 7, 2024

Washington, DC. The Connected TV (CTV) video streaming industry in the U.S. operates a massive data-driven surveillance apparatus that has transformed the television set into a sophisticated monitoring, tracking and targeting device, according to a new report from the Center for Digital Democracy (CDD). How TV Watches Us: Commercial Surveillance in the Streaming Era documents how CTV captures and harvests information on individuals and families through a sophisticated and expansive commercial surveillance system, deliberately incorporating many of the data-gathering, monitoring, and targeting practices that have long undermined privacy and consumer protection online.

The report highlights a number of recent trends that are key to understanding today’s connected TV operations:

- Leading streaming video programming networks, CTV device companies and “smart” TV manufacturers, allied with many of the country’s most powerful data brokers, are creating extensive digital dossiers on viewers based on a person’s identity information, viewing choices, purchasing patterns, and thousands of online and offline behaviors.
- So-called FAST channels (Free Advertiser-Supported TV)—such as Tubi, Pluto TV, and many others—are now ubiquitous on CTV and a key part of the industry’s strategy to monetize viewer data and target viewers with sophisticated new forms of interactive marketing.
- Comcast/NBCU, Disney, Amazon, Roku, LG and other CTV companies operate cutting-edge advertising technologies that gather, analyze and then target consumers with ads, delivering them to households in milliseconds.
CTV has unleashed a powerful arsenal of interactive advertising techniques, including virtual product placement inserted into programming and altered in real time. Generative AI enables marketers to produce thousands of instantaneous “hypertargeted variations” personalized for individual viewers.

Surveillance has been built directly into television sets, with major manufacturers’ “smart TVs” deploying automatic content recognition (ACR) and other monitoring software to capture “an extensive, highly granular, and intimate amount of information that, when combined with contemporary identity technologies, enables tracking and ad targeting at the individual viewer level,” the report explains.

Connected television is now integrated with online shopping services and offline retail outlets, creating a seamless commercial and entertainment culture through a number of techniques, including what the industry calls “shoppable ad formats” incorporated into programming and designed to prompt viewers to “purchase their favorite items without disrupting their viewing experience,” according to industry materials.

The report profiles major players in the connected TV industry, along with the wide range of technologies they use to monitor and target viewers. For example:

Comcast’s NBCUniversal division has developed its own data-driven ad-targeting system called “One Platform Total Audience.” It powers NBCU’s “streaming activation” of consumers targeted across “300 end points,” including their streaming video programming and mobile phone use.
Advertisers can use the “machine learning and predictive analytics” capabilities of One Platform, including its “vast… first-party identity spine,” which can be coupled with their own data sets “to better reach the consumers who matter most to brands.” NBCU’s “Identity graph houses more than 200 million individuals 18+, more than 90 million households, and more than 3,000 behavioral attributes” that can be accessed for strategic audience targeting.

The Walt Disney Company has developed a state-of-the-art big-data and advertising system for its video operations, including through Disney+ and its “kids” content. Its materials promise to “leverage streaming behavior to build brand affinity and reward viewers” using tools such as the “Disney Audience Graph—consisting of millions of households, CTV and digital device IDs… continually refined and enhanced based on the numerous ways Disney connects with consumers daily.” The company claims that its ID Graph incorporates 110 million households and 260 million device IDs that can be targeted for advertising using “proprietary” and “precision” advertising categories “built from 100,000 [data] attributes.”

Set manufacturer Samsung TV promises advertisers a wealth of data to reach their targets, deploying a variety of surveillance tools, including an ACR technology system that “identifies what viewers are watching on their TV on a regular basis” and gathers data from a spectrum of channels, including “Linear TV, Linear Ads, Video Games, and Video on Demand.” It can also determine which viewers are watching television in English, Spanish, or other languages, and the specific kinds of devices that are connected to the set in each home.

“The transformation of television in the digital era has taken place over the last several years largely under the radar of policymakers and the public, even as concerns about internet privacy and social media have received extensive media coverage,” the report explains. “The U.S.
CTV streaming business has deliberately incorporated many of the data-surveillance marketing practices that have long undermined privacy and consumer protection in the ‘older’ online world of social media, search engines, mobile phones and video services such as YouTube.”

The industry’s self-regulatory regimes are highly inadequate, the report’s authors argue. “Millions of Americans are being forced to accept unfair terms in order to access video programming, which threatens their privacy and may also narrow what information they access—including the quality of the content itself. Only those who can afford to pay are able to ‘opt out’ of seeing most of the ads—although much of their data will still be gathered.”

The massive surveillance and targeting practices of today’s connected TV industry raise a number of concerns, the report explains. For example, during this election year, CTV has become the fastest-growing medium for political ads. “Political campaigns are taking advantage of the full spectrum of ad-tech, identity, data analysis, monitoring and tracking tools deployed by major brands.” While these tools are no doubt a boon to campaigns, they also make it easy for candidates and other political actors to “run covert personalized campaigns, integrating detailed information about viewing behaviors, along with a host of additional (and often sensitive) data about a voter’s political orientations, personal interests, purchasing patterns, and emotional states. With no transparency or oversight,” the authors warn, “these practices could unleash millions of personalized, manipulative and highly targeted political ads, spread disinformation, and further exacerbate the political polarization that threatens a healthy democratic culture in the U.S.”

“CTV has become a privacy nightmare for viewers,” explained report co-author Jeff Chester, executive director of CDD.
“It is now a core asset for the vast system of digital surveillance that shapes most of our online experiences. Not only does CTV operate in ways that are unfair to consumers, it is also putting them and their families at risk as it gathers and uses sensitive data about health, children, race and political interests,” Chester noted. “Regulation is urgently needed to protect the public from constantly expanding and unfair data collection and marketing practices,” he said, “as well as to ensure a competitive, diverse and equitable marketplace for programmers.”

“Policymakers, scholars, and advocates need to pay close attention to the changes taking place in today’s 21st-century television industry,” argued report co-author Kathryn C. Montgomery, Ph.D. “In addition to calling for strong consumer and privacy safeguards,” she urged, “we should seize this opportunity to re-envision the power and potential of the television medium and to create a policy framework for connected TV that will enable it to do more than serve the needs of advertisers. Our future television system in the United States should support and sustain a healthy news and information sector, promote civic engagement, and enable a diversity of creative expression to flourish.”

CDD is submitting letters today to the chairs of the FTC and FCC, as well as to the California Attorney General and the California Privacy Protection Agency, calling on policymakers to address the report’s findings and implement effective regulations for the CTV industry.

CDD’s mission is to ensure that digital technologies serve and strengthen democratic values, institutions and processes. CDD strives to safeguard privacy and civil and human rights, as well as to advance equity, fairness, and community.

--30--

-
Press Release
Statement Regarding the FTC 6(b) Study on Data Practices of Social Media and Video Streaming Services
“A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services”
Center for Digital Democracy
Washington, DC
Contact: Katharina Kopp, kkopp@democraticmedia.org

The following statement can be attributed to Katharina Kopp, Ph.D., Deputy Director, Center for Digital Democracy:

The Center for Digital Democracy welcomes the release of the FTC’s 6(b) study on social media and video streaming providers’ data practices and its evidence-based recommendations.

In 2019, Fairplay (then the Campaign for a Commercial-Free Childhood (CCFC)), the Center for Digital Democracy (CDD), 27 other organizations, and their attorneys at Georgetown Law’s Institute for Public Representation urged the Commission to use its 6(b) authority to better understand how tech companies collect and use data from children.

The report’s findings show that social media and video streaming providers’ business model produces an insatiable hunger for data about people. These companies create a vast surveillance apparatus, sweeping up personal data and creating an inescapable matrix of AI applications. These data practices lead to numerous well-documented harms, particularly for children and teens. These harms include manipulation and exploitation, loss of autonomy, discrimination, hate speech and disinformation, the undermining of democratic institutions, and, most importantly, the pervasive mental health crisis among youth.

The FTC’s call for comprehensive privacy legislation is crucial to curbing the harmful business model of Big Tech. We support the FTC’s recommendation to better protect teens but call, specifically, for a ban on targeted advertising to do so.
We strongly agree with the FTC that companies should be prohibited from exploiting young people’s personal information, weaponizing AI and algorithms against them, and using their data to foster addiction to streaming videos. That is why we urge this Congress to pass COPPA 2.0 and KOSA, which will compel Big Tech companies to acknowledge the presence of children and teenagers on their platforms and uphold accountability. The responsibility for rectifying the flaws in their data-driven business model rests with Big Tech, and we express our appreciation to the FTC for highlighting this important fact.

________________

The Center for Digital Democracy is a public interest research and advocacy organization, established in 2001, which works on behalf of citizens, consumers, communities, and youth to protect and expand privacy, digital rights, and data justice. CDD’s predecessor, the Center for Media Education, led the campaign for the passage of COPPA over 25 years ago, in 1998.

-
Press Release
Advocates call for FTC action to rein in Meta’s abusive practices targeting kids and teens
Letter from 31 organizations in tech advocacy, children’s rights, and health supports FTC action to halt Meta’s profiting off of young users’ sensitive data
Contact:
David Monahan, Fairplay: david@fairplayforkids.org
Katharina Kopp, Center for Digital Democracy: kkopp@democraticmedia.org

BOSTON/WASHINGTON, DC–June 13, 2023–A coalition of leading advocacy organizations is standing up today to support the Federal Trade Commission’s recent order reining in Meta’s abusive practices aimed at kids and teens. Thirty-one groups, led by the Center for Digital Democracy, the Electronic Privacy Information Center (EPIC), Fairplay, and U.S. PIRG, sent a letter to the FTC saying “Meta has violated the law and its consent decrees with the Commission repeatedly and flagrantly for over a decade, putting the privacy of all users at risk. In particular, we support the proposal to prohibit Meta from profiting from the data of children and teens under 18. This measure is justified by Meta’s repeated offenses involving the personal data of minors and by the unique and alarming risks its practices pose to children and teens.”

Comments from advocates:

Katharina Kopp, Director of Policy, Center for Digital Democracy: “The FTC is fully justified to propose the modifications of Meta’s consent decree and to require it to stop profiting from the data it gathers on children and teens. There are three key reasons why.
First, due to their developmental vulnerabilities, minors are uniquely harmed by Meta’s repeated failure to comply with its 2012 and 2020 settlements with the FTC, including its non-compliance with the federal children’s privacy law (COPPA); second, because Meta has failed for many years to comply even with the procedural safeguards required by the Commission, it is now time for structural remedies that will make it less likely that Meta can again disregard the terms of the consent decree; and third, the FTC must affirm its credibility and that of the rule of law and ensure that tech giants cannot evade regulation and meaningful accountability.”

John Davisson, Director of Litigation, Electronic Privacy Information Center (EPIC): “Meta has had two decades to clean up its privacy practices after many FTC warnings, but consistently chose not to. That’s not ‘tak[ing] the problem seriously,’ as Meta claims—that’s lawlessness. The FTC was right to take decisive action to protect Meta’s most vulnerable users and ban Meta from profiting off kids and teens. It’s no surprise to see Meta balk at the legal consequences of its many privacy violations, but this action is well within the Commission’s power to take.”

Haley Hinkle, Policy Counsel, Fairplay: “Meta has been under the FTC’s supervision in this case for over a decade now and has had countless opportunities to put user privacy over profit. The Commission’s message that you cannot monetize minors’ data if you can’t or won’t protect them is urgent and necessary in light of these repeated failures to follow the law. Kids and teens are uniquely vulnerable to the harms that result from Meta’s failure to run an effective privacy program, and they can’t wait for change any longer.”

R.J. Cross, Director of U.S. PIRG’s Don’t Sell My Data campaign: “The business model of social media is a recipe for unhappiness.
We’re all fed content about what we should like and how we should look, conveniently presented alongside products that will fix whatever problem with our lives the algorithm has just helped us discover. That’s a hard message to hear day in and day out, especially when you’re a teen. We’re damaging the self-confidence of some of our most impressionable citizens in the name of shopping. It’s absurd. It’s time to short-circuit the business model.”

###

-
The Honorable Joseph R. Biden
President of the United States
The White House
1600 Pennsylvania Avenue NW
Washington, DC 20500

May 23, 2023

Dear President Biden:

The undersigned civil rights, consumer protection, and other civil society organizations write to express concern about digital trade negotiations underway as part of the proposed Indo-Pacific Economic Framework (IPEF).

Civil society advocates and officials within your own administration have raised increasing concern about discrimination, racial disparities, and inequities that may be “baked into” the algorithms that make decisions about access to jobs and housing, health care, prison sentencing, educational opportunity, insurance rates and lending, deployment of police resources, and much more. To address these injustices, we have advocated for anti-discrimination protections and algorithmic transparency and fairness. We have been pleased that these concepts are incorporated into your recent Executive Order on racial equity,1 as well as the White House’s AI Bill of Rights2 and many other policy proposals. The DOJ, FTC, CFPB, and EEOC also recently released a joint statement underscoring their commitment to combating discrimination in automated systems.3 Any trade agreement must be consistent with, and not undermine, these policies and the values they are advancing.

Now, we have learned that the U.S. may be considering proposals for IPEF and other trade agreement negotiations that could sabotage efforts to prevent and remedy algorithmic discrimination, including provisions that could preempt executive and Congressional legal authority to advance these goals. Such provisions may make it harder or impossible for Congress or executive agencies to adopt appropriate policies while also respecting our international trade commitments.

For example, trade provisions that guarantee digital firms new secrecy rights over source code and algorithms could thwart potential algorithmic impact assessment and audit requirements, such as testing for racial bias or other violations of U.S. law and regulation. And because the trade negotiations are secret, we do not know how the exact language could affect pivotal civil rights protections. Including such industry-favored provisions in trade deals like IPEF would be a grievous error and undermine the Administration’s own policy goals.

We urge the administration not to submit any proposals that could undermine the ability to protect the civil rights of people in the United States, particularly with regard to digital trade. Moreover, there is a great need for transparency in these negotiations. Text already proposed should be made public so the civil rights community and relevant experts can challenge any provisions that could undermine administration goals regarding racial equity, transparency, and fairness.

We know that your administration shares our goals of advancing racial equity, including protecting the public from algorithmic discrimination. Thank you for your leadership in this area. For questions or further discussion, please contact Harlan Yu (harlan@upturn.org), David Brody (dbrody@lawyerscommittee.org), and Emily Peterson-Cassin (epetersoncassin@citizen.org).

Sincerely,

American Civil Liberties Union
Center for Democracy & Technology
Center for Digital Democracy
Data & Society Research Institute
Demand Progress Education Fund
Electronic Privacy Information Center (EPIC)
Fight for the Future
Lawyers’ Committee for Civil Rights Under Law
The Leadership Conference on Civil and Human Rights
NAACP
National Urban League
Public Citizen
Sikh American Legal Defense and Education Fund
Upturn

CC:
Secretary of Commerce Gina Raimondo
U.S. Trade Representative Katherine Tai
National Economic Council Director Lael Brainard
National Security Advisor Jake Sullivan
Domestic Policy Council Director Susan Rice
Incoming Domestic Policy Council Director Neera Tanden
Domestic Policy Council Deputy Director for Racial Justice and Equity Jenny Yang

1 Exec. Order No. 14091, 88 Fed. Reg. 10825, Feb. 16, 2023, available at https://www.federalregister.gov/documents/2023/02/22/2023-03779/further-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal.
2 The White House, Blueprint for an AI Bill of Rights, Oct. 22, 2022, available at https://www.whitehouse.gov/ostp/ai-bill-of-rights.
3 Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, CFPB, DOJ, EEOC, FTC, April 25, 2023, available at https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf.
-
Commercial Surveillance expands via the "Big" Screen in the Home

Televisions now view and analyze us—the programs we watch, what shows we click on to consider or save, and the content reflected on the “glass” of our screens. On “smart” or connected TVs, streaming TV applications have been engineered to fully deliver the forces of commercial surveillance. Operating stealthily inside digital television sets and streaming video devices is an array of sophisticated “adtech” software. These technologies enable programmers, advertisers and even TV set manufacturers to build profiles used to generate data-driven ads tailored to specific individuals or households. These developments raise important questions for anyone concerned about the transparency and regulation of political advertising in the United States.

Also known as “OTT” (“over-the-top,” since the video signal is delivered without relying on traditional set-top cable TV boxes), the streaming TV industry incorporates the same online advertising techniques employed by other digital marketers. This includes harvesting a cornucopia of information on viewers through alliances with leading data brokers. More than 80 percent of Americans now use some form of streaming or smart-TV-connected video service. Given such penetration, it is no surprise that streaming TV advertising is playing an important role in the upcoming midterm elections. And streaming TV will be an especially critical channel for campaigns vying for voters in 2024.

Unlike political advertising on broadcast television or much of cable TV, which is generally transmitted broadly to a defined geographic market area, “addressable” streaming video ads appear in programs advertisers know you actually watch (using technologies such as dynamic ad insertion). Messaging for these ads can also be fine-tuned as a campaign progresses, to make the message more relevant to the intended viewer.
For example, if you watch a political ad and then sign up to receive campaign literature, the next TV commercial from a candidate or PAC can be crafted to reflect that action. Or, if your data profile says you are concerned about the costs of healthcare, you may see a different pitch than your next-door neighbor who has other interests. Given the abundance of data available on households, including demographic details such as race and ethnicity, there will also be finely tuned pitches aimed at distinct subcultures, produced in multiple languages.

An estimated $1.4 billion will be spent on streaming political ads for the midterms (part of an overall $9 billion in ad expenditures). With more people “cutting the cord” by signing up for cheaper, ad-supported streaming services, advances in TV technologies enabling personalized, data-driven ad targeting, and the integration of streaming TV as a key component of the overall online marketing apparatus, it is evident that the TV business has changed. Even what’s considered traditional broadcasting has been transformed by digital ad technologies. That’s why it’s time to enact policy safeguards to ensure integrity, fairness, transparency and privacy for political advertising on streaming TV.

Today, streaming TV political ads already combine information from voter records with online and offline consumer profile data in order to generate highly targeted messages. By harvesting information related to a person’s race and ethnicity, finances, health concerns, behavior, geolocation, and overall digital media use, marketers can deliver ads tied to our needs and interests. In light of this unprecedented marketing power and precision, new regulations are needed to protect consumer privacy and civic discourse alike. In addition to ensuring voter privacy, so personal data can’t be as readily used as it is today, the messaging and construction of streaming political ads must also be accountable.
Merely requiring the disclosure of who is buying these ads is insufficient. The U.S. should enact a set of rules to ensure that the tens of thousands of one-to-one streaming TV ads don’t promote misleading or false claims, or engage in voter suppression and other forms of manipulation. Journalists and campaign watchdogs must have the ability to review and analyze ads, and political campaigns need to identify how they were constructed—including the information provided by data brokers and how a potential voter’s viewing behaviors were analyzed (such as with increasingly sophisticated machine learning and artificial intelligence algorithms). For example, data companies such as Acxiom, Experian, Ninth Decimal, Catalina and LiveRamp help fuel the digital video advertising surveillance apparatus.

Campaign-spending reform advocates should be concerned. Making targeted streaming TV advertising as effective as possible will likely require serious amounts of money—for the data, analytics, marketing and distribution. Increasingly, key gatekeepers control much of the streaming TV landscape, and purchasing the rights to target the most “desirable” people could face obstacles. For example, smart TV makers, such as LG, Roku, Vizio and Samsung, have developed their own exclusive streaming advertising marketplaces. Their smart TVs use what’s called ACR, or “automated content recognition,” to collect data that enables them to analyze what appears on our screens “second by second.” An “exclusive partnership to bring premium OTT inventory to political clients” was recently announced by LG and cable giant Altice’s ad division. This partnership will enable qualifying political campaigns to access 30 million households via smart TVs, as well as to reach millions of other screens in households known to Altice.
Connected TVs also provide online marketers with what is increasingly viewed as essential for contemporary digital advertising—access to a person’s actual identity information (called “first-party” data). Streaming TV companies hope to gain permission to use subscriber information in many other ways. This practice illustrates why the Federal Trade Commission’s (FTC) current initiative to regulate commercial surveillance, now in its initial stage, is so important. Many of the critical issues involving streaming political advertising could be addressed through strong rules on privacy and online consumer protection. For example, there is no reason why any marketer should be able to so easily obtain all the information used to target us, such as our ethnicity, income, purchase history, and education—to name only a few of the variables available for sale. Nor should the FTC allow online marketers to engage in unfair and largely stealth tactics when creating digital ads—including the use of neuroscience to test messages to ensure they respond directly to our subconscious. The Federal Communications Commission (FCC), which has largely failed to address 21st-century video issues, should conduct its own inquiry “in the public interest.” There is also a role here for the states, reflecting their laws on campaign advertising as well as their responsibility for ensuring the privacy of streaming TV viewers.

This is precisely the time for policies on streaming video, as the industry becomes much more reliant on advertising and data collection. Dozens of new ad-supported streaming TV networks are emerging—known as FAST channels (Free Ad-Supported TV)—which offer a slate of scheduled shows with commercials. Netflix and Disney+, as well as Amazon, have adopted or will soon adopt ad-supported viewing. There are also coordinated industry-wide efforts, involving advertisers, programmers and device companies, to perfect ways to more efficiently target and track streaming viewers. Without regulation, the U.S.
streaming TV system will be a “rerun” of what we historically experienced with cable TV—dashed expectations of a medium that could have been truly diverse, rather than a monopoly, and offered both programmers and viewers greater opportunities for creative expression and public service. Only those with the economic means will be able to afford to “opt out” of the advertising and some of the data surveillance on streaming networks. And political campaigns will be able to reach individual voters without worrying about privacy or the honesty of their messaging. The FTC and FCC, and Congress if it can muster the will, have an opportunity to make streaming TV a well-regulated, important channel for democracy. Now is the time for policymakers to tune in.

***

This essay was originally published by Tech Policy Press. Support for the Center for Digital Democracy’s review of the streaming video market is provided by the Rose Foundation for Communities and the Environment.

Jeff Chester
-
Time for the FTC to intervene as marketers create new ways to leverage our “identity” data as cookies “crumble”

For decades, the U.S. has allowed private actors to largely create the rules regarding how our data is gathered and used online. A key reason we do not have any real privacy for digital media is that online marketing interests have principally shaped how the devices, platforms and applications we use ensnare us in the commercial surveillance complex. The Interactive Advertising Bureau (IAB) has long played this role through an array of standards committees that address everything from mobile devices to big-data-driven targeting to ads harnessing virtual reality, to name a few.

As this blog has previously covered, U.S. commercial online advertising, spearheaded by Google, The Trade Desk and others, is engaged in a major transformation of how it processes and characterizes data used for targeted marketing. For various reasons, the traditional ways we are profiled and tracked through the use of “cookies” are being replaced by a variety of schemes that enable advertisers to know and take advantage of our identities, but which they believe will (somehow!) pass muster with any privacy regulations now in force or potentially enacted. Regardless of industry rhetoric that these approaches will empower a person’s privacy, at the end of the day they are designed to ensure that the comprehensive tracking and targeting system remains firmly in place.

As an industry trade organization, the IAB serves as a place to generate consensus, or agreed-upon formats, for digital advertising practices.
To help the industry maintain its surveillance-based business model, the IAB has created what’s called “Project Rearc” to “re-architect digital marketing.” The IAB explains that Project Rearc “is a global call-to-action for stakeholders across the digital supply chain to re-think and re-architect digital marketing to support core industry use cases, while balancing consumer privacy and personalization.” It has set up a number of industry-run working groups to advance various components of this “re-architecting,” including what’s called an “Accountability Working Group.” Its members include Experian, Facebook, Google, Axel Springer, Nielsen, Pandora, TikTok, Publicis, Group M, Amazon, IABs from the EU, Australia, and Canada, Disney, Microsoft, Adobe, News Corp., Roku and many more (including specialist companies, such as Neustar and LiveRamp, with their own “identity” approaches for digital marketing).

The IAB Rearc effort has put out for “public comment” a number of proposed approaches for addressing elements of the new ways to target us via identifiers, cloud processing, and machine learning. Earlier this year, for example, it released for comment proposed standards on a “Global Privacy Platform,” an “Accountability Platform,” “Best Practices for User-Enabled Identity Tokens,” and a “Taxonomy and Data Transparency Standards to Support Seller-Defined Audience and Context Signaling.”

Now it has released for public comment (due by November 12, 2021) a proposed method to “Increase Transparency Across Entire Advertising Supply Chain for New ID usage.” This proposal involves critical elements of the data collected about us and how it can be used.
It is designed to “provide a standard way for companies to declare which user identity sources they use” and “ease ad campaign execution between advertisers, publishers, and their chosen technology providers….” This helps online advertisers use “different identity solutions that will replace the role of the third-party cookie,” explains the IAB. While developed in part for a “transparent supply chain” and to help build “auditable data structures to ensure consumer privacy,” its ultimate function is to enable marketers to “activate addressable audiences.” In other words, it’s all about continuing to ensure that digital marketers are able to build and leverage numerous individual and group identifiers to empower their advertising activities, and to withstand potential regulatory challenges over privacy violations.
The IAB’s so-called public comment system is primarily designed for the special interests whose business model is the mass monetization of all our data and behaviors. We should not allow these actors to define how our everyday experiences with data operate, especially when privacy is involved. The longstanding role in which the IAB and online marketers have set many of the standards for our online lives should be challenged—by the FTC, Congress, state AGs and everyone else working on these issues.
We—the public—should be determining our “digital destiny”—not the same people who gave us surveillance marketing in the first place.
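Mechanically, the kind of per-request identity-source declaration the IAB proposal describes resembles the “extended identifiers” arrays already carried in OpenRTB bid requests. Here is a rough Python sketch; the field names follow the general shape of OpenRTB’s `user.ext.eids` object, but the values and the helper function are invented for illustration, not taken from the IAB’s actual proposed schema:

```python
# Hypothetical sketch of an OpenRTB-style "extended identifiers" (eids)
# declaration, in which each user identifier names the company that issued it.
# Values below are invented for illustration.
bid_request_user = {
    "ext": {
        "eids": [
            {
                "source": "uidapi.com",  # issuer of this identifier
                "uids": [{"id": "AbC123...", "atype": 3}],
            },
            {
                "source": "liveramp.com",
                "uids": [{"id": "XY98...", "atype": 3}],
            },
        ]
    }
}


def declared_id_sources(user: dict) -> list[str]:
    """List which identity providers a bid request declares, the kind of
    disclosure the IAB proposal would let downstream parties audit."""
    return [eid["source"] for eid in user.get("ext", {}).get("eids", [])]


print(declared_id_sources(bid_request_user))  # ['uidapi.com', 'liveramp.com']
```

The point of such a declaration is that every party in the supply chain can see, per request, which replacement-for-the-cookie identifier is in play.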
-
Blog
The Big Data Merger Gold Rush to Control Your “Identity” Information
Will the DoJ ensure that both competition and consumer protection in data markets are addressed?
There is a digital data “gold rush” fever sweeping the data and marketing industry, as the quest to find ways to use data to determine a person’s “identity” for online marketing becomes paramount. This is triggered, in part, by the moves made by Google and others to replace “cookies” and other online identifiers with new, allegedly pro-privacy data-profiling methods that achieve the same results. We’ve addressed this privacy charade in other posts. In order to better position themselves in a world where knowing who we are and what we do is a highly valuable global currency, an increasing number of companies in the digital marketing and advertising sector are merging or making acquisitions.
For example, last week data-broker giant TransUnion announced it is buying identity data company Neustar for $3.1 billion, to further expand its “powerful digital identity capabilities.” This is the latest in TransUnion’s buying spree to acquire data services companies that give it even more information on the U.S. public, including what we do on streaming media, via its 2020 takeovers of connected and streaming video data company Tru Optik and the data-management-focused Signal.
In reviewing some of the business practices touted by TransUnion and Neustar, it’s striking that so little has changed in the decades CDD has been sounding the alarm about the impacts data-driven online marketing services have on society. These include the ever-growing privacy threats, as well as the machine-driven sorting of people and the manipulation of our behaviors. So far, nothing has derailed the commercial Big Data marketing machine.
With this deal, TransUnion is obtaining a treasure trove of data assets and capabilities.
For Neustar, “identity is an actionable understanding of who or what is on the other end of every interaction and transaction.” Neustar’s “OneID system provides a single lens on the consumer across their dynamic omnichannel journey.” This involves: data management services featuring the collection, identification, tagging, tracking, analyzing, verification, correcting and sorting of business data pertaining to the identities, locations and personal information of and about consumers, including individuals, households, places, businesses, business entities, organizations, enterprises, schools, governments, points of interest, business practice characteristics, movements and behaviors of and about consumers via media devices, computers, mobile phones, tablets and internet-connected devices.
Neustar keeps close track of people, saying that it knows that “the average person has approximately 15 distinct identifiers with an average of 8 connected devices” (and notes that an average household has more than 45 such distinct identifiers). Neustar has an especially close business partnership with Facebook, which enables marketers to better analyze how their ads translate into sales made on and spurred by that platform. Its “Customer Scoring and Segmentation” system enables advertisers to identify and classify targets so they can “reach the right customer with the right message in the right markets.” Neustar has a robust data-driven ad-targeting system called AdAdvisor, which reaches 220 million adults in “virtually every household in the U.S.” AdAdvisor “uses past behavior to predict likelihood of future behavior” and involves “thousands of data points available for online targeting” (including the use of “2 billion records a month from authoritative offline sources”).
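Underlying a product like OneID is an identity graph: the scattered identifiers Neustar counts (emails, cookies, device IDs) get merged into one person- or household-level profile whenever they are observed together. Below is a toy Python sketch of that resolution step, using invented data and a simple union-find; it makes no claim to reflect Neustar’s actual methods:

```python
from collections import defaultdict


def resolve_identities(records: list[set[str]]) -> list[set[str]]:
    """Merge identifiers that ever appear together in one observed record
    into a single profile (a minimal union-find over co-occurring IDs)."""
    parent: dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    for rec in records:
        ids = list(rec)
        for x in ids:
            find(x)  # register every identifier, even singletons
        for other in ids[1:]:
            union(ids[0], other)

    profiles = defaultdict(set)
    for x in parent:
        profiles[find(x)].add(x)
    return list(profiles.values())


# Three observed events: the email and cookie co-occur, and the cookie and
# smart-TV device co-occur, so all three collapse into one profile.
events = [
    {"email:alice@example.com", "cookie:abc"},
    {"cookie:abc", "device:tv-123"},
    {"email:bob@example.com"},
]
merged = resolve_identities(events)
print(sorted(len(p) for p in merged))  # [1, 3]
```

The same transitive linking is what lets an “omnichannel” profile follow a person from a laptop cookie to a connected TV.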
Its “Propensity Audiences” service helps marketers predict the behaviors of people, incorporating such information as “customer-level purchase data for more than 230 million US consumers; weekly in-store transaction data from over 4,500 retailers; actual catalog purchases by more than 18 million households”; and “credit information and household-level demographics, used to build profiles of the buying power, disposable income and access to credit a given household has available.” Neustar offers its customers the ability to reach “propensity audiences” in order to target such product categories as alcohol, automotive, education, entertainment, grocery, life events, personal finance, and more. For example, companies can target people who have used their debit or credit cards, by the amount of insurance they have on their homes or cars, or by their “level of investable assets,” including whether they have a pension or other retirement funds. One can also discover people who buy a certain kitty litter or candy bar—the list of AdAdvisor possibilities is far-reaching.
Another AdAdvisor application, “ElementOne,” comprises 172 segments that can be “leveraged in real time for both online and offline audience targeting.” The targeting categories should be familiar to anyone who is concerned about how groups of people are characterized by data-brokers and others.
For example, one can select “Segment 058—high income rural younger renters with and without children”—or “Segment 115—middle income city older home owners without children”; or any Segment from 151-172 to reach “low income” Americans who are renters, homeowners, have or don’t have kids, live in rural or urban areas, and the like.
Marketers can also use AdAdvisor to determine the geolocation behaviors of their targets, through partnerships that provide Neustar with “10 billion daily location signals from 250+ million opted-in consumers.” In other words, Neustar knows whether you walked into that liquor store, grocery chain, hotel, entertainment venue, or shop. It also has data on what you view on TV, streaming video, and gaming. And it’s not just consumers whom Neustar tracks and targets. Companies can access its “HealthLink Dimensions Doctor Data” to target 1.7 million healthcare professionals who work in more than 400 specialties, including acute care, family practice, pediatrics, and cardiovascular surgery.
TransUnion is already a global data and digital marketing powerhouse, with operations in 30 countries and 8,000 clients, including 60 of the Fortune 100. What it calls its “TruAudience Marketing Solutions” is built on a foundation of “insight into 98% of U.S.
adults and more than 127 million homes, including 80 million connected homes.” Its “TruAudience Identity” product provides “a three-dimensional, omnichannel view of individuals, devices and households… [enabling] precise, scalable identity across offline, digital and streaming environments.” It offers marketers and others a method to secure what it terms an “identity resolution,” which is defined as “the process of matching identifiers across devices and touchpoints to a single profile [that] helps build a cohesive, omnichannel view of a consumer….”
TransUnion, known historically as one of the Big Three credit bureaus, has pivoted to become a key source of data and applications for digital marketing. It isn’t the only company expanding what is called an “ID Graph”—the web of ways all our data are gathered for profiling. However, given its already vast storehouse of information on Americans, it should not be allowed to devour another major data-focused marketing enterprise.
Since this merger is now before the U.S. Department of Justice—as opposed to the Federal Trade Commission—there is little likelihood that, in addition to examining the competitive implications of the deal, regulators will also focus on what it really means for people: the further loss of privacy and autonomy, and our vulnerability to manipulative and stealthy marketing applications that classify and segment us in a myriad of invisible ways. The use of such data systems to identify communities of color and other groups that confront historic and current obstacles to their well-being should also be analyzed by any competition regulator.
In July, the Biden Administration issued an Executive Order on competition that called for a more robust regime to deal with mergers such as TransUnion and Neustar.
According to that order, “It is also the policy of my Administration to enforce the antitrust laws to meet the challenges posed by new industries and technologies, including the rise of the dominant Internet platforms, especially as they stem from serial mergers, the acquisition of nascent competitors, the aggregation of data, unfair competition in attention markets, the surveillance of users, and the presence of network effects.”
We hope the DOJ will live up to this call and address mergers such as this one, along with the other data-driven deals that now happen with regularity. There should also be a way for the FTC—especially under the leadership of Chair Lina Khan—to play an important role evaluating this and similar transactions. There’s more at stake than competition in the data-broker or digital advertising markets. Who controls our information and how that information is used are the fundamental questions that will determine our freedom and our economic opportunities. As the Big Data marketplace undergoes a key transition, developing effective policies that protect both privacy and competition is precisely why this moment is so vitally important.
-
Blog
Surveillance Marketing Industry Claims Future of an “Open Internet” Requires Massive Data Gathering
New ways to take advantage of your “identity” raise privacy, consumer-protection and competition issues
The Trade Desk is a leading AdTech company, providing data-driven digital advertising services to major brands and agencies. It is also playing an outsized role responding to the initiative led by Google to create new, allegedly “privacy-friendly” approaches to ad targeting, which include ending the use of what are called “third-party” cookies. These cookies enable the identification and tracking of individuals, and have been an essential building block for surveillance advertising since the dawn of the commercial Internet. As we explained in a previous post about the so-called race to “end” the use of cookies, the online marketing industry is engaged in a full-throated effort to redefine how our privacy is conceptualized and privately governed. Pressure from regulators (such as the EU’s GDPR) and growing concerns about privacy from consumers are among the reasons why this is happening now. But the real motivation, in my view, is that the most powerful online ad companies and global brands (such as Google, Amazon and the Trade Desk) don’t need these antiquated cookies anymore. They have so much of our information that they collect directly, as well as data available from countless partners (such as global brands). Additionally, they now have many new ways to determine who we are—our “identity”—including through the use of AI, machine learning and data clouds. “Unified ID 2.0” is what The Trade Desk calls its approach to harvesting our identity information for advertising. Like Google, it claims to be respectful of data protection principles. Some of the most powerful companies in the U.S. are supporting the Unified ID standard, including Walmart, the Washington Post, P&G, Comcast, CBS, Home Depot, Oracle, and Nielsen.
But more than our privacy is at stake as data marketing giants fight over how best to reap the financial rewards of what is predicted eventually to become a trillion-dollar global ad marketplace. This debate is increasingly focused on the very future of the Internet itself, including how it is structured and governed. Only by ensuring that advertisers can continue to operate powerful data-gathering and ad-targeting systems, argues Trade Desk CEO Jeff Green, can the “Open Internet” be preserved. His argument, of course, is a digital déjà vu version of what media moguls have said in the U.S. dating back to commercial radio in the 1930s. Only with a full-blown, ad-supported (and regulation-free) electronic media system, whether it was broadcast radio, broadcast TV, or cable TV, could the U.S. be assured it would enjoy a democratic and robust communications environment. (I was in the room at the Department of Commerce back in the mid-1990s when advertisers were actually worried that the Internet would be largely ad-free; the representative from P&G leaned over to tell me that they would never let that happen—and he was right.) The Internet’s operations are heavily shaped to serve the needs of advertisers, who have reworked its architecture to ensure we are all commercially surveilled. For decades, the online ad industry has continually expanded ways to monetize our behaviors, emotions, location and much more.
Last week, The Trade Desk unveiled its latest iteration using Unified ID 2.0—called Solimar (see video here). Solimar uses “an artificial intelligence tool called Koa, which makes suggestions” to help ensure effective marketing campaigns.
Reflecting the serial partnerships that operate to provide marketers with a gold mine of information on any individual, The Trade Desk has a “Koa Identity Alliance,” a “cross-device graph that incorporates leading and emerging ID solutions such as LiveRamp Identity Link, Oracle Cross Device, Tapad Device Graph, and Adbrain Device Graph.” This system, the company says, creates an effective way for marketers to develop a data portrait of individual consumers.
It’s useful to hear what companies such as The Trade Desk say as we evaluate claims that “big data” consumer surveillance operations are essential for a democratically structured Internet. In its most recent Annual Report, the company explains that “Through our self-service, cloud-based platform, ad buyers can create, manage, and optimize more expressive data-driven digital advertising campaigns across ad formats and channels, including display, video, audio, in-app, native and social, on a multitude of devices, such as computers, mobile devices, and connected TV (‘CTV’)…. We use the massive data captured by our platform to build predictive models around user characteristics, such as demographic, purchase intent or interest data. Data from our platform is continually fed back into these models, which enables them to improve over time as the use of our platform increases.”
And here’s how The Trade Desk’s Koa process is described in the trade publication Campaign Asia: …clients can specify their target customer in the form of first-party or third-party data, which will serve as a seed audience that Koa will model from to provide recommendations. A data section provides multiple options for brands to upload first-party data including pixels, app data, and IP addresses directly into the platform, or import data from a third-party DMP or CDP.
If a client chooses to onboard CRM data in the form of email addresses, these will automatically be converted into UID2s. Once converted, the platform will scan the UID2s to evaluate how many are ‘active UID2s’, which refers to how many of these users have been active across the programmatic universe in the past week. If the client chooses to act on those UID2s, they will be passed into the programmatic ecosystem to match with the publisher side, building the UID2 ecosystem in tandem. For advertisers that don’t have first-party data… an audiences tab allows advertisers to tap into a marketplace of second- and third-party data so they can still use interest segments, purchase intent segments and demographics.
In other words, these systems have a ton of information about you. They can easily get even more data and engage in the kinds of surveillance advertising that regulators and consumer advocates around the world are demanding be stopped. There are now dozens of competing “identity solutions”—including those from Google, Amazon, data brokers, telephone companies, etc. (See visual at bottom of page here.) The stakes here are significant—how will the Internet evolve in terms of privacy, and will its core “DNA” be ever-growing forms of surveillance and manipulation? How do we decide the most privacy-protective ways to ensure meaningful monetization of online content—and must funding for such programming only be advertising-based? In what ways are some of these identity proposals a way for powerful platforms such as Google to further expand their monopolistic control of the ad market? These and other questions require a thoughtful regulator in the U.S. to help sort this out and make recommendations to ensure that the public truly benefits. That’s why it’s time for the U.S. Federal Trade Commission to step in.
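The email-to-UID2 conversion described in the Campaign Asia excerpt is, at its core, a normalize-and-hash step: a stable identifier is derived from a normalized email address, which is what lets an advertiser’s CRM list be matched against publisher login data. Here is a simplified Python sketch of that idea; real UID2 additionally involves salting, encryption and token rotation, which this ignores:

```python
import base64
import hashlib


def email_to_raw_id(email: str) -> str:
    """Normalize an email address and hash it into an opaque, stable
    identifier - a simplified stand-in for a raw UID2-style ID.
    (The real system adds salting, encryption and token rotation.)"""
    normalized = email.strip().lower()  # trim whitespace, lowercase
    digest = hashlib.sha256(normalized.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")


# The same address always yields the same identifier, regardless of how
# it was typed, so two parties can match lists without exchanging emails.
a = email_to_raw_id("  Alice@Example.com ")
b = email_to_raw_id("alice@example.com")
print(a == b)  # True
```

Because the derived ID is deterministic, any two companies holding the same email can independently compute the same identifier and link their records, which is exactly why such schemes preserve cross-site tracking even without cookies.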
The FTC should analyze these advertising-focused identity efforts, assess their risks and benefits, and address how to govern the collection and use of data where a person has supposedly given permission to a brand or store to use it (known as “first-party” data). A key question, given today’s technologies, is whether meaningful personal consent for data collection is even possible in a world driven by sophisticated, real-time AI systems that personalize content and ads. The commission should also investigate the role of data-mining clouds and other so-called “clean rooms,” where privacy is said to prevail despite their compilation of personal information for targeted advertising. The time for private, special-interest (and conflicted) actors to determine the future of the Internet, and how our privacy is to be treated, is over.
-
Press Release
Against surveillance-based advertising
CDD joins an international coalition of more than 50 NGOs and scholars in a call for a surveillance-based advertising ban in its Digital Services Act and for the U.S. to enact a federal digital privacy and civil rights law
International coalition calls for action against surveillance-based advertising
Every day, consumers are exposed to extensive commercial surveillance online. This leads to manipulation, fraud, discrimination and privacy violations. Information about what we like, our purchases, our mental and physical health, sexual orientation, location and political views is collected, combined and used under the guise of targeted advertising.
In a new report, the Norwegian Consumer Council (NCC) sheds light on the negative consequences that this commercial surveillance has on consumers and society. Together with [XXX] organizations and experts, the NCC is asking authorities on both sides of the Atlantic to consider a ban. In Europe, the upcoming Digital Services Act can lay the legal framework to do so. In the US, legislators should seize the opportunity to enact comprehensive privacy legislation that protects consumers.
- The collection and combination of information about us not only violates our right to privacy, but renders us vulnerable to manipulation, discrimination and fraud. This harms individuals and society as a whole, says the director of digital policy at the NCC, Finn Myrstad.
In a Norwegian population survey conducted by YouGov on behalf of the NCC, consumers clearly state that they do not want commercial surveillance. Only one out of ten respondents was positive about commercial actors collecting personal information about them online, while only one out of five thought that ads based on personal information are acceptable.
- Most of us do not want to be spied on online, or to receive ads based on tracking and profiling. These results mirror similar surveys from Europe and the United States, and should be a powerful signal to policymakers looking at how to better regulate the internet, Myrstad says.
Policymakers and civil society organisations on both sides of the Atlantic are increasingly standing up against these invasive practices.
For example, the European Parliament and the European Data Protection Supervisor (EDPS) have already called for phasing out and banning surveillance-based advertising. A coalition of consumer and civil rights organizations in the United States has called for a similar ban.
Significant consequences
The NCC report ‘Time to ban surveillance-based advertising’ exposes a variety of harmful consequences that surveillance-based advertising can have on individuals and on society:
1. Manipulation: Companies with comprehensive and intimate knowledge about us can shape their messages in attempts to reach us when we are susceptible, for example to influence elections or to advertise weight loss products, unhealthy food or gambling.
2. Discrimination: The opacity and automation of surveillance-based advertising systems increase the risk of discrimination, for example by excluding consumers based on income, gender, race, ethnicity, sexual orientation or location, or by making certain consumers pay more for products or services.
3. Misinformation: The lack of control over where ads are shown can promote and finance false or malicious content. This also poses significant challenges to publishers and advertisers regarding revenue, reputational damage, and opaque supply chains.
4. Undermining competition: The surveillance business model favours companies that collect and process information across different services and platforms. This makes it difficult for smaller actors to compete, and negatively impacts companies that respect consumers’ fundamental rights.
5. Security risks: When thousands of companies collect and process enormous amounts of personal data, the risk of identity theft, fraud and blackmail increases. NATO has described this data collection as a national security risk.
6. Privacy violations: The collection and use of personal data is happening with little or no control, both by large companies and by companies that are unknown to most consumers.
Consumers have no way to know what data is collected, who the information is shared with, and how it may be used.
- It is very difficult to justify the negative consequences of this system. A ban will contribute to a healthier marketplace that helps protect individuals and society, Myrstad comments.
Good alternatives
In the report, the NCC points to alternative digital advertising models that do not depend on the surveillance of consumers, and that give advertisers and publishers more oversight and control over where ads are displayed and which ads are being shown.
- It is possible to sell advertising space without basing it on intimate details about consumers. Solutions already exist to show ads in relevant contexts, or where consumers self-report what ads they want to see, Myrstad says.
- A ban on surveillance-based advertising would also pave the way for a more transparent advertising marketplace, diminishing the need to share large parts of ad revenue with third parties such as data brokers. A level playing field would contribute to giving advertisers and content providers more control, and would let them keep a larger share of the revenue.
The coordinated push behind the report and letter illustrates the growing determination of consumer, digital rights, human rights and other civil society groups to end the widespread business model of spying on the public.
-
The Center for Digital Democracy and 23 other leading civil society groups sent a letter to President Biden today asking his Administration to ensure that any new transatlantic data transfer deal is coupled with the enactment of U.S. laws that reform government surveillance practices and provide comprehensive privacy protections.