CDD

Newsroom

  • Contact: David Monahan, Fairplay: david@fairplayforkids.org

    Day of Action as Advocates for Youth Urge Hill: Pass the Kids Online Safety Act Now

    Momentum keeps growing: 217 organizations call on Congress to address the youth mental health crisis spurred by social media

    WASHINGTON, D.C. – Wednesday, November 8, 2023 – Today, a huge coalition of advocacy groups is conducting a day of action urging Congress to finally address the youth mental health crisis and pass the Kids Online Safety Act (KOSA, S. 1409). Momentum in support of the bill continues to build, and today 217 groups that advocate for children and teens across a myriad of areas–including mental health, privacy, suicide prevention, eating disorders, and child sexual abuse prevention–are sending a letter urging Senate Majority Leader Schumer and Senate Minority Leader McConnell to move KOSA to a floor vote by the end of this year.

    “After numerous hearings and abundant research findings,” the coalition writes, “the evidence is clear of the potential harms social media platforms can have on the brain development and mental health of our nation’s youth, including hazardous substance use, eating disorders, and self-harm.” “With this bipartisan legislation,” they write, “Congress has the potential to significantly improve young people’s wellbeing by transforming the digital environment for children and teens.”

    KOSA, authored by Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN), enjoys growing bipartisan support; it is endorsed by 48 US Senators–24 from each side of the aisle. Today’s day of action will see supporters of the 217 organizations calling senators to urge a floor vote and support for KOSA. The letter and day of action follow: a suit by 42 attorneys general against Meta for exploiting young users’ vulnerabilities on Instagram; persistent calls to pass KOSA from parents of children who died from social media harms; and Tuesday’s Senate Judiciary Committee hearing on Social Media and the Teen Mental Health Crisis.

    COMMENTS:

    Josh Golin, Executive Director of Fairplay: “Every day that Congress allows social media companies to self-regulate, children suffer, and even die, from preventable harms and abuses online. The Kids Online Safety Act would force companies like Meta, TikTok and Snap to design their platforms in ways that reduce risks for children, and create a safer and less addictive internet for young people.”

    James P. Steyer, Founder and CEO of Common Sense Media: "With a new whistleblower and 42 states filing suit against Meta for its deceptive practices and dangerous platform design, the growing support and urgent need for KOSA is now too strong to ignore. Common Sense will continue to work with lawmakers and advocates on both sides of this bill to once and for all begin to curb the harms that online platforms are causing for young people."

    Sacha Haworth, Executive Director of the Tech Oversight Project: “The disturbing revelations sadly add to a mountain of evidence proving that tech companies are willfully negligent and even openly hostile to protecting minors from the harms their products bring. As a mother, my heart breaks for the kids who experienced pain and harassment online because tech executives were willing to sacrifice their physical and emotional health in pursuit of profit. Lives are on the line, and we cannot sit on the sidelines.
    We need to pass KOSA to force companies like Meta to protect children and teens and treat them with the dignity they deserve.”

    Katharina Kopp, Director of Policy, Center for Digital Democracy: “The public health crisis that children and teens experience online requires an urgent intervention from policymakers. We need platform accountability and an end to the exploitation of young people. Their well-being is more important than the ‘bottom-line’ interests of platforms. KOSA will prevent companies from taking advantage of the developmental vulnerabilities of children and teens. We urge the U.S. Senate to bring KOSA to a floor vote by the end of the year.”

    ###
  • Blog

    Is So-called Contextual Advertising the Cure to Surveillance-based “Behavioral” Advertising?

    The harms of contextual advertising might soon rival or even surpass those of behavioral advertising unless policymakers intervene

    Contextual advertising is said to be privacy-safe because it eliminates the need for cookies, third-party trackers, and the processing of other personal data. Marketers and policymakers are placing much stock in the future of contextual advertising, viewing it as the solution to the privacy-invasive targeted advertising that heavily relies on personal data. However, the current state of contextual advertising does not look anything like our plain understanding of it in contrast to today's dominant mode of behavioral advertising: placing ads next to preferred content, based on keyword inclusion or exclusion. Instead, industry practices are moving towards incorporating advanced AI analysis of content and its classification, user-level data, and insights into the content preferences of online visitors, all while still referring to “contextual advertising.” It is crucial for policymakers to carefully examine this rapidly evolving space and establish a clear definition of what “contextual advertising” should entail. This will prevent toxic practices and outcomes, similar to what we have witnessed with surveillance-based behavioral marketing, from becoming the new normal.

    Let’s recall the reasons for the strong opposition to surveillance-based marketing practices so we can avoid those harms in contextual advertising. Simply put, the two main reasons are privacy harms and harms from manipulation. Behavioral advertising is deeply invasive when it comes to privacy, as it involves tracking users online and creating individual profiles based on their behavior over time, across different platforms, and across channels. These practices go beyond individual privacy violations and also harm groups of people, perpetuating or even exacerbating historical discrimination and social inequities.

    The second main reason why many oppose surveillance-based marketing practices is the manipulative nature of commercial messaging that aims to exploit users’ vulnerabilities. This becomes particularly concerning when vulnerable populations, like children, are targeted, as they may not have the ability to resist sophisticated influences on their decision-making. More generally, the behavioral advertising business heavily incentivizes companies to optimize their practices for monetizing attention and selling audiences to advertisers, leading to many associated harms.

    New and evolving practices in contextual advertising should raise questions for policymakers. They should consider whether the harms we sought to avoid with behavioral marketing may resurface in these new advertising practices as well.

    Today’s contextual advertising methods take advantage of the latest analytical technologies to interpret online content, so contextual ads will likely soon be able to manipulate us just as behavioral ads can. Artificial intelligence (AI), machine learning, natural language processing models for tone and sentiment analysis, computer vision, audio analysis, and more are being used to consider a multitude of factors and in this way “dramatically improve the effectiveness of contextual targeting.” GumGum’s Verity, for example, “scans text, image, audio and video to derive human-like understandings.” Attention measures – the new performance metric that advertisers crave – indicate that contextual ads are more effective than non-contextual ads.
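    To make the contrast concrete, here is a minimal, hypothetical sketch in Python. The first function captures the plain, keyword-based understanding of contextual matching; the second pair stands in for the AI-driven content profiling described above. All names, scoring dimensions, and weights are illustrative assumptions, not any vendor's actual system.

        # Hypothetical illustration only; not any vendor's actual code.
        # Plain contextual matching: a simple keyword inclusion/exclusion
        # test against the page itself, with no user data involved.
        def plain_contextual_match(page_text, include, exclude):
            words = set(page_text.lower().split())
            return bool(words & include) and not (words & exclude)

        # AI-style "contextual" targeting instead scores the page along many
        # machine-derived dimensions and picks the ad that best exploits that
        # profile. The constant scores below stand in for the outputs of
        # topic, sentiment, and attention models.
        def content_profile(page_text):
            return {
                "topic_dieting": 0.92,        # stand-in for a topic classifier
                "sentiment_distress": 0.71,   # stand-in for a sentiment model
                "predicted_attention": 0.88,  # stand-in for an attention model
            }

        def pick_ad(page_text, ads):
            profile = content_profile(page_text)
            # Highest dot product between the page profile and each ad's
            # weights: no cookies or cross-site tracking are involved, yet
            # the ad is still matched to the reader's likely state of mind.
            return max(ads, key=lambda ad: sum(profile.get(k, 0.0) * w
                                               for k, w in ad["weights"].items()))

    Run against a dieting article, a scorer like this could surface a weight-loss ad for every reader of that page, which is precisely the kind of outcome discussed below.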
    Moments.AI, a “real-time contextual targeting solution” by the Verve Group, for example, allows brands to move away from clicks and to “optimize towards consumer attention instead,” for “privacy-first” advertising solutions. Rather than analyzing one single URL or one article at a time, marketers can analyze a vast range of URLs and can “understand content clusters and topics that audiences are engaging with at that moment,” and so use contextual targeting at scale.

    The effectiveness and sophistication of contextual advertising allow marketers to use it not just for enhancing brand awareness, but also for targeting prospects. In fact, the field of “neuroprogrammatic” advertising “goes beyond topical content matching to target the subconscious feelings that lead consumers to make purchasing decisions,” according to one industry observer. Marketers can take advantage of how consumers “are feeling and thinking, and what actions they may or may not be in the mood to take, and therefore how likely they are to respond to an ad. Neuroprogrammatic targeting uses AI to cater to precisely what makes us human.”

    These sophisticated contextual targeting practices may have negative effects similar to those of behavioral advertising, however. For instance, contextual ads for weight loss programs can be placed alongside content related to dieting and eating disorders because of that content's semantic, emotional, and visual profile. This may have disastrous consequences similar to those of targeted behavioral ads aimed at teenagers with eating disorders. Therefore, it is important to question how different these practices are from individual user tracking and ad targeting. If content can be analyzed and profiled along very finely tuned classification schemes, advertisers don’t need to track users across the web. They simply need to track the content that will deliver the relevant audience and engage individuals based on their interests and feelings.

    Apart from the manipulative nature of contextual advertising, the use of personal data and the associated privacy violations are also concerning. Many contextual ad tech companies claim to engage in contextual targeting “without any user data.” But, in fact, so-called contextual ad tech companies often rely on session data such as browser and page-level data, device and app-level data, IP address, and “whatever other info they can get their hands on to model the potential user,” framing it as “contextual 2.0.” Until recently, this practice might have been more accurately referred to as device fingerprinting. The claim is that session data is not about tracking, but only about the active session and usage at one point in time. No doubt, however, the line between contextual and behavioral advertising becomes blurry when such data is involved.

    Location-based targeting is another aspect of contextual advertising that raises privacy concerns. Should location-based targeting be considered contextual? Uber’s “Journey Ads” lets advertisers target users based on their destination. A trip to a restaurant might trigger alcohol ads; a trip to the movie theater might result in ads for sugary beverages. According to AdExchanger, Uber claims that it is not “doing any individual user-based targeting” and suggests that it is a form of contextual advertising. Peer39 also includes location data in its ad-targeting capabilities and still refers to these practices as contextual advertising.
    The use of location data can reveal some of the most sensitive information about a person, including where she works, sleeps, socializes, worships, and seeks medical treatment. When combined with session data, the information obtained from sentiment, image, video, and location analysis can be used to create sophisticated inferences about individuals, and ads placed in this context can easily clash with consumer expectations of privacy.

    Furthermore, placing contextual ads next to user-generated content or within chat groups changes the parameters of contextual targeting. Instead of targeting the content itself, the ad becomes easily associated with an individual user. Reddit’s “contextual keyword targeting” allows advertisers to target by community and interest – communities discussing sensitive LGBTQ+ topics, for example. This is similar to the personalized nature of targeted behavioral advertising, and can thus raise privacy concerns.

    Cohort targeting, also referred to as “affinity targeting” or “content affinity targeting,” further blurs the line between behavioral and contextual advertising by combining content analytics with audience insights. “This bridges the gap between Custom Cohorts and your contextual signals, by taking learning from consented users to targeted content where a given Customer Cohort shows more engagement than the site average,” claims Permutive. Oracle uses various cohorts with demographic characteristics, including age, gender, and income, for example, as well as “lifestyle” and “retail” interests, to understand what content individuals are more likely to consume. While reputedly “designed for privacy from the ground up,” this approach allows Oracle to analyze what an audience cohort views and to “build a profile of the content types they’re most likely to engage with,” allowing advertisers to find their “target customers wherever they are online.” Playground XYZ enhances contextual data with eye-tracking data from opt-in panels, which measures attention and helps to optimize which content is most “eye-catching,” “without the need for cookies or other identifiers.”

    Although these practices may seem privacy neutral (relying on small samples of online users or “consented users”), they still allow advertisers to target and manipulate their desired audience. Message targeting based on the content preferences of fine-tuned demographic segments (household income under $20K or over $500K, for example) can lead to discriminatory practices and disparate impact that can deepen social inequities, just like the personalized targeting of online users.

    Hyper-contextual content analysis with a focus on measuring sentiment and attention, the use of session information, the placement of ads next to user-generated content and within interest-group chats, and the use of audience panels to profile content are emerging practices in contextual advertising that require critical examination. The touted privacy-first promise of contextual advertising is deceptive. It seems that contextual advertising is more manipulative, more invasive of privacy, and more likely to contribute to discrimination and perpetuate inequities among consumers than we all initially thought. What’s more, the convergence of highly sensitive content analytics with content profiling based on demographic characteristics (and potentially more) could result in even more potent digital marketing practices than those currently being deployed.
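    The cohort “over-indexing” logic that descriptions like Permutive’s suggest can be sketched in a few lines. This is a hypothetical reconstruction under stated assumptions (a consented panel of engagement events and a known cohort membership list), not any company’s actual implementation.

        # Hypothetical sketch of content-affinity ("over-indexing") targeting.
        from collections import defaultdict

        def over_indexed_segments(panel_events, cohort_ids):
            # panel_events: iterable of (user_id, content_segment) engagement
            # events from consented panel users; cohort_ids: users in the cohort.
            seg_total = defaultdict(int)   # engagements per segment, all users
            seg_cohort = defaultdict(int)  # engagements per segment, cohort only
            total = in_cohort = 0
            for user_id, segment in panel_events:
                seg_total[segment] += 1
                total += 1
                if user_id in cohort_ids:
                    seg_cohort[segment] += 1
                    in_cohort += 1
            if total == 0 or in_cohort == 0:
                return []
            site_average = in_cohort / total  # cohort's share of all engagement
            # Segments where the cohort's share of engagement beats the site
            # average: ads placed against these segments reach the cohort
            # without any per-user identifier at serving time.
            return [s for s in seg_total
                    if seg_cohort[s] / seg_total[s] > site_average]

    The point of the sketch is that once a panel has profiled the content, reaching a demographic cohort (a low-income household segment, say) requires no identifier at ad-serving time at all, which is why such practices can still produce the discriminatory outcomes described above.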
    By merging contextual data with behavioral data, marketers might gain a more comprehensive understanding of their target audience and develop more effective messaging. Additionally, we can only speculate about how changes to the incentive structure for delivering audiences to advertisers might impact content quality.

    In the absence of policy intervention, these developments may lead to a surveillance system that is even more formidable than the one we currently have. Contextual advertising will not serve as a solution to surveillance-based “behavioral” marketing and its manipulative and privacy-invasive nature, let alone the numerous other negative consequences associated with it, including the addictive nature of social media, the promotion of disinformation, and threats to public health.

    It is vital to formulate a comprehensive and up-to-date definition of contextual advertising that takes into consideration the adverse effects of surveillance advertising and strives to mitigate them. Industry self-regulation cannot be relied on, and legislative proposals do not adequately address the complexities of contextual advertising. The FTC’s 2009 definition of contextual advertising is also outdated in light of the advancements and practices described here. Regulatory bodies like the FTC must assess contemporary practices and provide guidelines to safeguard consumer privacy and ensure fair marketing practices. The FTC’s Children’s Online Privacy Protection Act rule update and its Commercial Surveillance and Data Security rulemaking provide an opportunity to get it right. Failure to intervene may ultimately result in the emergence of a surveillance system disguised as consumer-friendly marketing.

    This article was originally published by Tech Policy Press.
    Katharina Kopp
  •  September 18, 2023

    Comment on the 2023 Merger Guidelines
    Center for Digital Democracy
    FTC-2023-0043

    The Center for Digital Democracy (CDD) urges the U.S. Department of Justice (DoJ) and the Federal Trade Commission (FTC) to adopt the proposed merger guidelines. The guidelines are absolutely necessary to ensure the U.S. operates a 21st-century antitrust regime and doesn’t keep repeating the mistakes of the last several decades. The failure to understand and address contemporary practices, especially those related to data assets, has brought us further consolidation in key markets, including in digital media.

    Over the decades, CDD has been at the forefront of NGOs sounding the alarm on the consolidation of the digital marketing and advertising industry, including our opposition to such transactions as the Google/DoubleClick merger, Facebook/Instagram, Google/YouTube, Google/AdMob, and Oracle/BlueKai and Datalogix, among others. Regulatory approval for these deals has accelerated the consolidation of the online media marketplace, where a tiny handful of companies—Alphabet (Google), Meta and Amazon—dominate in terms of advertising revenues and online marketing applications. It has also helped deliver today’s vast commercial surveillance marketplace, with its unrelenting collection and use of information from consumers, small businesses and other potential competitors.

    The failure to address effectively the role that data assets and processing capabilities play in merger transactions has had unfortunate systemic consequences for the U.S. public. Privacy has been largely lost as a result, since by permitting these data-related deals both agencies signaled that policymakers approved unfettered data-driven commercial surveillance operations. It has also led the largest commercial entities and brands, across all market verticals, to adopt the “Big Data” and personalized digital marketing applications developed by Google, Meta and Amazon—furthering the commercial surveillance stranglehold and helping fuel platform dominance. It has also had a profound and unfortunate impact on the structure of contemporary media, which have embraced the data-driven commercial surveillance paradigm with all its manipulative and discriminatory effects. In this regard, the failure to ensure meaningful antitrust policies has had consequences for the health of our democracy as well.

    The proposed guidelines should help regulators better address specific transactions, their implications for specific markets, and the wider “network effects” that such digitally connected mergers trigger. An overall guideline for antitrust authorities should be an examination of the data assets assembled by each entity. Over the last half-decade or so, nearly every major company—regardless of the “vertical” market served—has become a “big data” company, using both internal and external assets to leverage a range of data and decision intelligence designed to gather, process and make “actionable” data insights. Such affordances are regularly used for product development, supply, and marketing, among other purposes. Artificial intelligence and machine learning applications are also “baked in” to these processes, extending the affordances across multiple operations.
    Antitrust regulators should inventory the data and digital assets of each proposed transaction entity, including data partnerships that extend capabilities; analyze them in terms of specific market capabilities and industry-wide standards; and review how a given combination might further anti-competitive effects (especially through leveraging data assets via cloud computing and other techniques). As markets further converge in the digital era, where, for example, data-driven marketing operations affect multiple sectors, we suggest that regulators will need to be both creative and flexible in addressing potential harms arising from cross-sectoral impacts. This point relates to Guideline 10 and “multi-sided” platforms.

    Regarding Guideline 3, we urge the agencies to review how both Alphabet/Google and Meta especially, as a result of prior merger approvals, have been able to determine how the broader online marketplace operates—creating a form of “coordination” problem. The advertising and data techniques developed by the two companies have had an inordinate influence over the development of online practices generally, in essence “dictating” formats, affordances, and market structures. By allowing Alphabet and Meta to grow unchecked, antitrust regulators have allowed the dog to wag the “long tail” of the digital marketplace.

    We also want to raise the issue of partnerships, since they are a very significant feature of the online market today. In addition to consolidating through acquisitions, companies have assembled a range of data and marketing partners who provide significant resources to these entities. This leveraging of the market through affiliates undermines competition (and compounds related problems involving privacy and consumer protection).

    The steady stream of acquisitions in rapidly evolving markets, such as “over-the-top” streaming video, further entrenches dominant players and creates new hurdles for potential competitors, raising the issue addressed in Guideline 8. In digitally connected markets (such as media), acquisitions that clearly further consolidation occur almost daily. Today they go unchecked, something we hope will be reversed under the paradigm proposed here.

    Each proposed guideline is essential, in our view, to ensure that relevant information gathering and analysis are conducted for each proposed transaction. We are at a critical period of transition for markets, as data, digital media and technological progress (AI especially) continue to challenge traditional perspectives on dominance and competition. Broader network effects regarding privacy, consumer protection and the impact on democratic institutions should also be addressed by regulators moving forward. The proposed DoJ and FTC merger guidelines will provide critical guidance for the antitrust work to come.
  • FOR IMMEDIATE RELEASE
    Thursday, September 14, 2023

    Contacts: David Monahan, Fairplay, david@fairplayforkids.org; Jeff Chester, CDD, jeff@democraticmedia.org

    Statement of Fairplay and the Center for Digital Democracy on FTC’s Announcement: Protecting Kids From Stealth Advertising in Digital Media

    BOSTON, MA, and WASHINGTON, DC—Today, the Federal Trade Commission released a new staff paper, “Protecting Kids from Stealth Advertising in Digital Media.” The paper’s first recommendation states: “Do not blur advertising. There should be a clear separation between kids’ entertainment/educational content and advertising, using formatting techniques and visual and verbal cues to signal to kids that they are about to see an ad.”

    This represents a major shift for the Commission. Prior guidance only encouraged marketers to disclose influencer and other stealth marketing to children. For years—including in filings last year and at last year’s FTC Workshop—Fairplay and the Center for Digital Democracy had argued that disclosures are inadequate for children and that stealth marketing to young people should be declared an unfair practice. Below are Fairplay’s and CDD’s comments on today’s FTC staff report:

    Josh Golin, Executive Director, Fairplay: “Today is an important first step towards ending an exploitative practice that is all too common on digital media for children. Influencers—and the brands that deploy them—have been put on notice: do not disguise your ads for kids as entertainment or education.”

    Katharina Kopp, Deputy Director and Director of Policy, Center for Digital Democracy: “Online marketing and advertising targeted at children and teens is pervasive, sophisticated and data-driven. Young people are regularly exposed to an integrated set of online marketing operations that are manipulative, unfair and invasive. These commercial tactics can be especially harmful to the mental and physical health of youth. We call on the FTC to build upon its new report to address how marketers use the latest cutting-edge marketing tactics to influence young people—including neuro-testing, immersive ad formats and ongoing data surveillance.”

    ###
  • Press Release

    Advocates demand Federal Trade Commission investigate Google for continued violations of children’s privacy law

    Following news of Google’s violations of COPPA and its 2019 settlement, four advocates ask FTC for investigation

    Contact: Josh Golin, Fairplay: josh@fairplayforkids.org; Jeff Chester, Center for Digital Democracy: jeff@democraticmedia.org

    Advocates demand Federal Trade Commission investigate Google for continued violations of children’s privacy law

    Following news of Google’s violations of COPPA and its 2019 settlement, four advocates ask FTC for investigation

    BOSTON and WASHINGTON, DC – WEDNESDAY, August 23, 2023 – The organizations that alerted the Federal Trade Commission (FTC) to Google’s violations of the Children’s Online Privacy Protection Act (COPPA) are urging the Commission to investigate whether Google and YouTube are once again violating COPPA, as well as the companies’ 2019 settlement agreement and the FTC Act. In a Request for Investigation filed today, Fairplay and the Center for Digital Democracy (CDD) detail new research from Adalytics, as well as Fairplay’s own research, indicating Google serves personalized ads on “made for kids” YouTube videos and tracks viewers of those videos, even though neither is permissible under COPPA. Common Sense Media and the Electronic Privacy Information Center (EPIC) joined Fairplay and CDD in calling on the Commission to investigate and sanction Google for its violations of children’s privacy. The advocates suggest that the FTC should seek penalties upwards of tens of billions of dollars.

    In 2018, Fairplay and the Center for Digital Democracy led a coalition asking the FTC to investigate YouTube for violating COPPA by collecting personal information from children on the platform without parental consent. As a result of the advocates’ complaint, Google and YouTube were required to pay a then-record $170 million fine in a 2019 settlement with the FTC and to comply with COPPA going forward. Rather than getting the required parental permission before collecting personally identifiable information from children on YouTube, Google claimed instead it would comply with COPPA by limiting data collection and eliminating personalized advertising on “made for kids” videos.

    But an explosive new report released by Adalytics last week called into question Google’s assertions and its compliance with federal privacy law. The report detailed how Google appeared to be surreptitiously using cookies and identifiers to track viewers of “made for kids” videos. The report also documented how YouTube and Google appear to be serving personalized ads on “made for kids” videos and transmitting data about viewers to data brokers and ad tech companies. In response to the report, Google told the New York Times that ads on children’s videos are based on webpage content, not targeted to user profiles.

    But follow-up research conducted independently by both Fairplay and ad buyers suggests the ads are, in fact, personalized, and that Google is both violating COPPA and making deceptive statements about its targeting of children. Both Fairplay and the ad buyers ran test ad campaigns on YouTube in which they selected a series of user attributes and affinities for ad targeting and instructed Google to run the ads only on “made for kids” channels. In theory, these test campaigns should have resulted in zero placements, because under Google and YouTube’s stated policy, no personalized ads are supposed to run on “made for kids” videos. Yet Fairplay’s targeted $10 ad campaign resulted in over 1,400 impressions on “made for kids” channels, and the ad buyers reported similar results.
    Additionally, the reporting Google provided to Fairplay and the ad buyers to demonstrate the efficacy of the ad buys would not be possible if the ads were contextual, as Google claims.

    “If Google’s representations to its advertisers are accurate, it is violating COPPA,” said Josh Golin, Executive Director of Fairplay. “The FTC must launch an immediate and comprehensive investigation and use its subpoena authority to better understand Google’s black box child-directed ad targeting. If Google and YouTube are violating COPPA and flouting their settlement agreement with the Commission, the FTC should seek the maximum fine for every single violation of COPPA and injunctive relief befitting a repeat offender.”

    The advocates’ letter urges the FTC to seek robust remedies for any violations, including but not limited to:

      • Civil penalties that demonstrate that continued violations of COPPA and Section 5 of the FTC Act are unacceptable. Under current law, online operators can be fined $50,120 per violation of COPPA. Given the immense popularity of many “made for kids” videos, it is likely millions of violations have occurred, suggesting the Commission should seek civil penalties upwards of tens of billions of dollars.
      • An injunction requiring relinquishment of all ill-gotten gains.
      • An injunction requiring disgorgement of all algorithms trained on impermissibly collected data.
      • A prohibition on the monetization of minors’ data.
      • An injunction requiring YouTube to move all “made for kids” videos to YouTube Kids and remove all such videos from the main YouTube platform. Given Google’s repeated failures to comply with COPPA on the main YouTube platform – even when operating under a consent decree – these videos should be cabined to a platform that has not been found to violate existing privacy law.
      • The appointment of an independent “special master” to oversee Google’s operations involving minors and provide the Commission, Congress, and the public semi-annual compliance reports for a period of at least five years.

    Katharina Kopp, Deputy Director of the Center for Digital Democracy, said: “The FTC must fully investigate what we believe are Google’s continuous violations of COPPA, its 2019 settlement with the FTC, and Section 5 of the FTC Act. These violations place many millions of young viewers at risk. Google and its executives must be effectively sanctioned to stop its ‘repeat offender’ behaviors—including a ban on monetizing the personal data of minors, other financial penalties, and algorithmic disgorgement. The Commission’s investigation should also review how Google enables advertisers, data brokers, and leading online publisher partners to surreptitiously surveil the online activities of young people. The FTC should set into place a series of ‘fail-safe’ safeguards to ensure that these irresponsible behaviors will never happen again.”

    Caitriona Fitzgerald, Deputy Director of the Electronic Privacy Information Center (EPIC), said: "Google committed in 2019 that it would stop serving personalized ads on 'made for kids' YouTube videos, but Adalytics’ research shows that this harmful practice is still happening. The FTC should investigate this issue and Google should be prohibited from monetizing minors’ data."

    Jim Steyer, President and CEO of Common Sense Media, said: "The Adalytics findings are troubling but in no way surprising given YouTube’s history of violating kids’ privacy.
    Google denies doing anything wrong and the advertisers point to Google, a blame game that makes children the ultimate losers. The hard truth is, companies — whether it’s Big Tech or their advertisers — basically care only about their profits, and they will not take responsibility for acting against kids’ best interests. We strongly encourage the FTC to take action here to protect kids by hitting tech companies where it really hurts: their bottom line."

    ###
  • In comments to the Federal Trade Commission, EPIC, the Center for Digital Democracy, and Fairplay urged the FTC to center privacy and data security risks as it evaluates Yoti Inc.'s proposed face-scanning tool for obtaining verifiable parental consent under the Children’s Online Privacy Protection Act (COPPA).

    In a supplementary filing, CDD urges the Federal Trade Commission (FTC) to reject the parent-consent method proposed by the applicants, the Entertainment Software Rating Board (ESRB) and Epic Games’ SuperAwesome division. Prior to any decision, the FTC must first engage in due diligence and investigate the contemporary issues involving the role and use of facial coding technology and its potential impact on children’s privacy. The commission must have a robust understanding of the data flows and insight generation produced by facial coding technologies, including the debate over their role as a key source of “attention” metrics, which are a core advertising measurement modality. Since this proposal is designed to deliver a significant expansion of children’s data collection—given the constellation of brands, advertisers and publishers involved with the applicants and their child-directed market focus—a digital “cautionary” principle is especially required for this consent method. Moreover, one of the applicants, as well as several key affiliates of the ESRB—Epic Games, Amazon, and Microsoft—have recently been sanctioned for violating COPPA, and any approval in the absence of thorough fact-finding here would be premature.
  • New research released today by Adalytics raises serious questions about whether Google is violating the Children's Online Privacy Protection Act (COPPA) by collecting data and serving personalized ads on child-directed videos on YouTube. In 2019, in response to a Request for Investigation by Fairplay and the Center for Digital Democracy, the Federal Trade Commission fined Google $170 million for violating COPPA on YouTube and required Google to change its data-collection and advertising practices on child-directed videos. As a result of that settlement, Google agreed to stop serving personalized ads and to limit data collection on child-directed videos. Today's report—and subsequent reporting by The New York Times—calls into question whether Google is complying with the settlement.

    STATEMENTS FROM FAIRPLAY AND CDD:

    Josh Golin, Executive Director, Fairplay: "This report should be a wake-up call to parents, regulators and lawmakers, and anyone who cares about children—or the rule of law, for that matter. Even after being caught red-handed in 2019 violating COPPA, Google continues to exploit young children and mislead parents and regulators about its data collection and advertising practices on YouTube. The FTC must launch an immediate and comprehensive investigation of Google and, if it confirms this report's explosive allegations, seek penalties and injunctive relief commensurate with the systematic disregard of the law by a repeat offender. Young children should be able to watch age-appropriate content on the world's biggest video platform with their right to privacy guaranteed, full stop."

    Jeff Chester, Executive Director, Center for Digital Democracy: "Google operates the leading online destination for kids’ video programming so it can reap enormous profits, including through commercial surveillance data and advertising tactics. It must be held accountable by the FTC for what appear to be violations of the Children’s Online Privacy Protection Act and of its own commitments. Leading advertisers, ad agencies, media companies and others partnering with Google appear to have been more interested in clicks than the safety of youth. There is a massive and systemic failure across the digital marketplace when it comes to protecting children’s privacy. Congress should finally stand up to the powerful “Big Data” ad lobby and enact long-overdue privacy legislation. Google’s operations must also be dealt with by antitrust regulators; it operates imperiously in the digital arena with no accountability. The Adalytics study should serve as a chilling reminder that our commercial surveillance system is running amok, placing even our most vulnerable at great risk."
  • CDD tells FTC to apply strong data privacy and security rules for health data

    Filing also focuses on the role commercial surveillance marketers play in targeting physicians and patients

    The Center for Digital Democracy (CDD) endorses the Federal Trade Commission’s (FTC) proposal to better protect consumer health and patient information in the digital era. CDD warned the commission in 2010, as well as in its 2022 commercial surveillance comments, that health data—including information regarding serious medical conditions—are routinely (and cynically) gathered and used for online marketing. This has placed Americans at risk—for loss of their privacy, health-decision autonomy, and personal financial security. The commercial surveillance health data digital marketing system also places major strains on the fiscal well-being of federal and private health insurance systems, creating demand for products and services that can be unnecessary and costly.

    The commission should “turn off the tap” of data flooding the commercial surveillance marketplace, including both direct and inferred health information. The commission can systemically address the multiple data flows—including those on Electronic Health Record (EHR) systems—that require a series of controls. EHR and personal health record systems have served as a digital “Achilles heel” of patient privacy, with numerous commercial entities seizing on those systems to influence physicians and other prescribers as well as to gain insights used for ongoing tracking.

    The commercialization of health-connected data is ubiquitous, harvested from mobile “apps,” online accounts, loyalty programs, social media posts, data brokers, marketing clouds and elsewhere. Given the commercial data-analytic capabilities operational today for generating insights and actions, information gathered for other purposes can be used to produce health-related data. Health information can be combined with numerous other datasets that can reveal ethnicity, location, media use, and more, to create a robust target marketing profile. As the programmatic advertising trade publication AdExchanger recently noted, “sensitive health data can be collected or revealed through dozens of noncovered entities, from location data providers to retail media companies. And these companies aren’t prevented from sharing data, unless the data was sourced from a covered entity.”

    The FTC’s Health Breach Notification Rule (HBNR) proposal comes at an especially crucial time for health privacy in the U.S. A recent report on “The State of Patient Privacy,” as noted by Insider Intelligence/eMarketer in July 2023, shows that a majority of Americans “distrust” the role that “Big Tech Companies” play with their health data. A majority of patients surveyed explained that “they are worried about security and privacy protections offered by vendors that handle their health data.” Ninety-five percent of the patients in the survey “expressed concern about the possibility of data breaches affecting their medical records.” These concerns, we suggest, reflect consumer unease regarding their reliance on online media to obtain health information. For example, “half of US consumers use at least one health monitoring tool,” and “healthcare journeys often start online,” according to the “Digital Healthcare Consumer 2023” report. There is also a generational shift in the U.S.
    underway, where at least half of young adults (so-called Generation Z) now “turn to social media platforms for health-related purposes either all the time or often…via searches, hashtags, QR codes…[and] have the highest rate of mobile health app usage.” The Covid-19 pandemic triggered greater use of health-related apps by consumers. So-called “telehealth” services generate additional data as well, including for online “lead generation.” The growing use of “digital pharmacies” is being attributed to the rising costs of medications—another point where consumer health data is gathered.

    The FTC should ensure the health data privacy of Americans who may be especially vulnerable—such as those confronting financial constraints or pre-existing or at-risk conditions, or who have long been subjected to predatory and discriminatory marketing practices—and who are especially in need of stronger protections. These should include addressing the health-data-related operations of the growing phalanx of retail, grocery, “dollar,” and drug store chains that are expanding their commercial surveillance marketing operations (so-called “retail media”) while providing direct-to-consumer health services.

    Electronic Health Record systems are a key part of the health and commercial surveillance infrastructure: EHRs have long served as “prime real estate for marketers…[via] data collection, which makes advanced targeting a built-in benefit of EHR marketing.” EHRs are used to influence doctors and other prescribers through what’s euphemistically called point-of-care marketing. Marketing services for pharmaceutical and other life science companies can be “contextually integrated into the EHR workflow [delivered] to the right provider at the right time within their EHR [using] awareness messaging targeted on de-identified real-time data specific to the patient encounter.” Such applications are claimed to operate as “ONC-certified and HIPAA-compliant” (ONC stands for the Office of the National Coordinator for Health Information Technology, HHS).

    The various, largely unaccountable, methods used to target and influence how physicians treat their patients by utilizing EHRs raise numerous privacy and consumer protection issues. For example, “EHR ads can appear in several places at all the stages along the point-of-care journey,” one company explained. Through an “E-Prescribing Screen,” pharma companies are able to offer “co-pay coupons, patient savings offers and relevant condition brand messaging.” Data used to target physicians through EHR systems, including prescription information derived from a consumer, help trigger the collection of more information from and about that health consumer (think of the subsequent role of drug stores, search engines and social media use, the gathering of data for coupons, etc.). This “non-virtuous” circle of health surveillance should be subjected to meaningful health data breach and security safeguards. Patient records on EHRs must be safeguarded, and the methods used to influence healthcare professionals require major privacy reforms.

    Contemporary health data systems reflect the structures that comprise the overall commercial surveillance apparatus, including data brokers, marketing clouds, and AI. The use of digital marketing to target U.S. health consumers has long been a key “vertical” for advertisers. For example, there are numerous health-focused subsidiaries run by the leading global advertising agencies, all of which have extensive data-gathering and targeting capabilities.
    These include Publicis Health: “Our proprietary data and analytics community, paired with the unsurpassed strengths of Sapient and Epsilon allow us to deliver unmatched deterministic, behavioral, and transactional data, powered by AI.” IPG Health uses “a proprietary…media, tech and data engine [to] deliver personalized omnichannel experiences across touchpoints.” Its “comprehensive data stack [is] powered by Acxiom.” Ogilvy Health recently identified some of the key social media strategies used by pharmaceutical firms to generate consumer engagement with their brands—helping generate invaluable data. They include, for example, a “mobile-first creative and design approach,” including the use of “stickers, reels, filters, and subtitles” on Instagram, as well as “A/B testing” on Facebook and the use of “influencers.”

    A broad range of consumer-data-collecting partners also operates in this market, providing information and marketing facilitation. Google, Meta, Salesforce, IQVIA, and Adobe are just a few of the companies integrated into health marketing services designed to “activate customer journeys (healthcare professionals and patients) across physical and digital channels [using] real-time, unified data.” Machine learning and AI are increasingly embedded in the health data surveillance market, helping to “transform sales and marketing outcomes,” for example. The use of social media, AI and machine learning, including for personalization, raises concerns that consent alone is insufficient for the release of patient and consumer health information.

    The commission should adopt its proposed rule, but it should also address the system-wide affordances of commercial surveillance to ensure health data is truly protected in terms of privacy and security. The commission should endorse a definition of patient health record information that reflects not only the range and type of data collected, but also the processes used to gather or generate it. The prompting and inducement of physicians, for example, to prescribe specific medications or treatments to a patient, based on the real-time “point-of-care” information transmitted through EHRs, ultimately generates identifiable information. So any interaction and iterative process used to do so should be covered under the rule, reflecting all the elements involved in that decision-making and treatment-determination process. By ensuring that all the entities involved in this system—including health care services and suppliers—must comply with data privacy and security rules, the commission will critically advance data protection in the health marketplace. This should include health apps, which increasingly play a key role in the commercial data-driven marketing complex. All partnering organizations involved in the sharing, delivery, creation and facilitation of health record information should also be held accountable.

    We applaud the FTC’s work in the health data privacy area, including its important GoodRx case and its highlighting of the role that “dark patterns” play in “manipulating or deceiving consumers.” Far too much of the U.S. health data landscape operates as such a “dark pattern.” The commission’s proposed HBNR rules will illuminate this sector and, in the process, help secure greater privacy and protection for Americans.
  • Blog

    Profits, Privacy and the Hollywood Strike

    Addressing commercial surveillance in streaming video is key to any deal for workers and viewers, says Jeff Chester, the executive director of the Center for Digital Democracy.

    Leading studios, networks and production companies in Hollywood—such as Disney, Paramount, Comcast/NBCU, Warner Bros. Discovery and Amazon—know where their dollars will come from in the future. As streaming video becomes the dominant form of TV in the U.S., the biggest players in the entertainment industry are harvesting the cornucopia of data increasingly gathered from viewers. While some studio chiefs publicly dismiss the demands of striking actors and writers as unrealistic, they know that their heavy investments in “adtech” will drive greater profitability. Streaming video data not only generates higher advertising and commerce revenues, but also serves as a valuable commodity for the precise online tracking and targeting of consumers.

    Streaming video is now a key part of what the Federal Trade Commission (FTC) calls the “commercial surveillance” marketplace. Data about our viewing behaviors, including any interactions with the content, is being gathered by connected and “smart” TVs, by streaming devices such as Roku, by in-house studio and network data mining operations, and by numerous targeting and measurement entities that now serve the industry. For example, Comcast’s NBCUniversal “One Platform” uses what it calls “NBCU ID”—a “first-party identifier [that] provides a persistent indicator of who a consumer is to us over time and across audiences.” Last year it rolled out “200 million unique person-level NBCU IDs mapped to 80 million households.” Disney’s Select advertising system uses a “proprietary Audience Graph” incorporating “100,000 attributes” to support its “1800 turnkey” targeting segments. There are 235 million device IDs available to reach, says Disney, 110 million households. It also operates the “Disney Real-time Ad Exchange” (DRAX), a data clean room, and what it calls “Yoda”—a “yield optimized delivery allocation” system powering its ad server.

    Warner Bros. Discovery recently launched “WBD Stream,” providing marketers with “seamless access… to popular and premium content.” It also announced partnerships with several data and research companies designed to help “marketers to push consumers further down the path to purchase.” One such alliance involves “605,” which helps WBD track how effective its ads are in delivering actual sales from local retailers, including through the use of set-top box data from Comcast as well as geolocation tracking information. Amazon has long supported its video streaming advertising sales, including with its “Freevee” network, through its portfolio of cutting-edge data tools. Among the ad categories targeted by Amazon’s streaming service are financial services, candy and beauty products. One advantage it touts is that streaming marketers can get help from “Amazon’s Ads data science team,” including an analysis of “signals in [the] Amazon Marketing Cloud.”

    Other major players in video streaming, including Roku, Paramount, and Samsung, have also supercharged their data technologies in order to target what are called “advanced audiences.” That’s the capability to have so much information available that a programmer can pinpoint a target for personalized marketing across a vast universe of media content. While subscription is a critical part of video revenues, programmers want to draw from multiple revenue streams, especially advertising. To advance the TV business’s access to more thorough datasets, leading TV, advertising and measurement companies have formed the “U.S. Joint Industry Committee” (JIC). Warner Bros.
    Discovery, Fox, NBCU, TelevisaUnivision, Paramount, and AMC are among the programmers involved with JIC. They are joined by a powerhouse composed of the largest ad agencies (data holders as well), including Omnicom, WPP and Publicis. One outcome of this alliance will be a set of standards to measure the impact of video and other ads on consumers, including through the use of “Big Data” and cross-platform measurement.

    Of course, today’s video and filmed entertainment business includes more than ad-supported services. There’s subscription revenue for streaming–said to pass $50 billion for the U.S. this year–as well as theatrical release. But it’s very evident that the U.S. (as well as the global) entertainment business is in a major transition, where the ability to identify, track and target an individual (or groups of people) online, and as much offline as possible, is considered essential. For example, Netflix is said to be exploring ways it can advance its own solution for personalized ad targeting, drawing its brief deal with Microsoft Advertising to a close. Leading retailers, including Walmart (NBCU) and Kroger (Disney), are also part of today’s streaming video advertising landscape. Making the connection between what we view on the screen and what we then buy at a store is a key selling point for today’s commercial surveillance-oriented streaming video apparatus. A growing part of the revenue from streaming will be commissions from the sale of a product after someone sees an ad and buys that product, including on the screen during a program. For example, as part of its plans to expand retail sales within its programming, NBCU’s “Checkout” service “identifies objects in video and makes them interactive and shoppable.”

    Another key issue for the Hollywood unions is the role of AI. With that technology already a core part of the advertising industry’s arsenal, its use will likely be integrated into video programming—something that should be addressed in the SAG-AFTRA and WGA negotiations.

    The unions deserve to capture a piece of the data-driven “pie” that will further drive industry profits. But there’s more at stake than a fair contract and protections for workers. Rather than unleashing the creativity of content providers as part of an environment promoting diversity, equity and the public interest, the new system will be highly commercialized, data-driven, and controlled by a handful of dominant entities. Consider the growing popularity of what are called “FAST” channels—“free ad supported streaming television.” Dozens of these channels, owned by Comcast/NBCU, Paramount, Fox, and Amazon, are now available, filled with relatively low-cost content that can reap the profits from data and ads.

    The same powerful forces that helped undermine broadcasting, cable TV, and the democratic potential of what once was called the “information superhighway”—the Internet—are now at work shaping the emerging online video landscape. Advertising and marketing, which are already the driving influence behind the structure and affordances of digital media, are fashioning video streaming to be another—and critically important—component fostering surveillance marketing.

    The FTC’s forthcoming proposed rulemaking on commercial surveillance must address the role of streaming video. And the FCC should open up its own proceeding on streaming, one designed to bring structural changes to the industry in terms of ownership of content and distribution.
    There’s also a role for antitrust regulators: examining the data partnerships emerging from the growing collaboration by networks and studios to pool data resources.

    The fight for a fairer deal for writers and actors deserves the backing of regulators and the public. But a successful outcome for the strike should be just “Act One” of a comprehensive digital media reform effort. While the transformation of the U.S. TV system is significantly underway, it’s not too late to try to program “democracy” into its foundation.

    Jeff Chester is the executive director of the Center for Digital Democracy, a DC-based NGO that works to ensure that digital technologies serve and strengthen democratic values and institutions. Its work on streaming video is supported, in part, by the Rose Foundation for Communities and the Environment. This op-ed was initially published by Tech Policy Press.
    Jeff Chester
  • CFPB Data Broker Filing – U.S. Public Interest Research Group (PIRG) and Center for Digital Democracy (CDD)

    In response to the Request for Information Regarding Data Brokers and Other Business Practices Involving the Collection and Sale of Consumer Information, Docket No. CFPB-2023-0020

  • Press Release

    Transatlantic Consumer Dialogue (TACD) Calling on White House and Administration to Take Immediate Action on Generative AI

    Transatlantic Consumer Dialogue (TACD), a coalition of the leading consumer organizations in North America and Europe, asks policymakers on both sides of the Atlantic to act

    The Honorable Joseph R. Biden
    President of the United States
    The White House
    1600 Pennsylvania Avenue NW
    Washington, DC 20500

    June 20, 2023

    Dear President Biden,

    We are writing on behalf of the Transatlantic Consumer Dialogue (TACD), a coalition of the leading consumer organizations in North America and Europe, to ask you and your administration to take immediate action regarding the rapid development of Generative Artificial Intelligence in a growing number of applications, such as text generators like ChatGPT, and the risks these entail for consumers. We are calling on policymakers and regulators on both sides of the Atlantic to use existing laws and regulations to address the problematic uses of Generative Artificial Intelligence; adopt a cautious approach to deploying Generative Artificial Intelligence in the public sector; and adopt new legislative measures to directly address Generative Artificial Intelligence harms. As companies rapidly develop and deploy this technology, outpacing legislative efforts, we cannot leave consumers unprotected in the meantime.

    Generative Artificial Intelligence systems are now already widely used by consumers in the U.S. and beyond. For example, chatbots are increasingly incorporated into products and services by businesses. Although these systems are presented as helpful, saving time, costs, and labor, we are worried about the serious downsides and harms they may bring about. Generative Artificial Intelligence systems are incentivized to suck up as much data as possible to train the AI models, leading to the inclusion of personal data that may be irremovable once the datasets have been established and the tools trained. Where training models include data that is biased or discriminatory, those biases become baked into the Generative Artificial Intelligence’s outputs, creating increasingly biased and discriminatory content that is then disseminated. The large companies making advances in this space are already establishing monopolistic market concentration. Running Generative Artificial Intelligence tools requires enormous amounts of water and electricity, leading to heightened carbon emissions. And the speed and volume of information creation with these technologies accelerates the generation and spread of misinformation and disinformation.

    Three of our members (Public Citizen, the Electronic Privacy Information Center, and the Norwegian Consumer Council) have already published reports setting forth the specific harms of Generative Artificial Intelligence and proposing steps to counter these harms – we would be happy to discuss these with you. In addition, TACD has adopted policy principles which we believe are key to safely deploying Generative Artificial Intelligence. Our goal is to provide policymakers, lawmakers, enforcement agencies, and other relevant entities with a robust starting point to ensure that Generative Artificial Intelligence does not come at the expense of consumer, civil, and human rights.

    If left unchecked, these harms will become permanently entrenched in the use and development of Generative Artificial Intelligence. We are calling for actions that insist upon transparency, accountability, and safety in these Generative Artificial Intelligence systems, including ensuring that discrimination, manipulation, and other serious harms are eliminated. Where uses of Generative AI are clearly harmful, or likely to be, they must be barred completely.
    To combat the harms of Generative Artificial Intelligence, your administration must ensure that existing laws are enforced wherever they apply. New regulations must be passed that specifically address the serious risks and gaps in protection identified in the reports mentioned above. Companies and other entities developing Generative Artificial Intelligence must adhere to transparent and reviewable obligations. Finally, once binding standards are in place, the Trade and Technology Council must not undermine them.

    We welcome the administration's efforts on AI to protect Americans' rights and safety, particularly your efforts to center civil rights via executive action. Furthermore, we are encouraged to see the leading enforcement agencies underscore their collective commitment to leverage their existing legal authorities to protect the American people. But more must be done, and soon, especially for those already disadvantaged and most vulnerable, including people of color and others who have been historically underserved and marginalized, as well as children and teenagers. We want to work with you to ensure that privacy and other consumer protections remain at the forefront of these discussions, even when new technology is involved.

    Sincerely,

    Finn Lützow-Holm Myrstad
    Director of Digital Policy, Norwegian Consumer Council
    European Co-Chair of TACD's Digital Policy

    Calli Schroeder
    Senior Counsel and Global Privacy Counsel, EPIC
    U.S. Co-Chair of TACD's Digital Policy

    Transatlantic Consumer Dialogue (TACD)
    Rue d'Arlon 80, B-1040 Brussels
    Tel. +32 (0)2 743 15 90
    www.tacd.org
    @TACD_Consumers
    EC register for interest representatives: identification number 534385811072-96
  • Press Release

    Advocates call for FTC action to rein in Meta’s abusive practices targeting kids and teens

    Letter from 31 organizations in tech advocacy, children’s rights, and health supports FTC action to halt Meta’s profiting off of young users’ sensitive data

    Contact:
    David Monahan, Fairplay: david@fairplayforkids.org
    Katharina Kopp, Center for Digital Democracy: kkopp@democraticmedia.org

    BOSTON / WASHINGTON, DC – June 13, 2023 – A coalition of leading advocacy organizations is standing up today to support the Federal Trade Commission's recent order reining in Meta's abusive practices aimed at kids and teens. Thirty-one groups, led by the Center for Digital Democracy, the Electronic Privacy Information Center (EPIC), Fairplay, and U.S. PIRG, sent a letter to the FTC saying: "Meta has violated the law and its consent decrees with the Commission repeatedly and flagrantly for over a decade, putting the privacy of all users at risk. In particular, we support the proposal to prohibit Meta from profiting from the data of children and teens under 18. This measure is justified by Meta's repeated offenses involving the personal data of minors and by the unique and alarming risks its practices pose to children and teens."

    Comments from advocates:

    Katharina Kopp, Director of Policy, Center for Digital Democracy: "The FTC is fully justified in proposing the modifications to Meta's consent decree and requiring it to stop profiting from the data it gathers on children and teens. There are three key reasons why. First, due to their developmental vulnerabilities, minors are uniquely harmed by Meta's repeated failure to comply with its 2012 and 2020 settlements with the FTC, including its non-compliance with the federal children's privacy law (COPPA). Second, because Meta has failed for many years to comply even with the procedural safeguards required by the Commission, it is now time for structural remedies that will make it less likely that Meta can again disregard the terms of the consent decree. And third, the FTC must affirm its credibility and that of the rule of law, and ensure that tech giants cannot evade regulation and meaningful accountability."

    John Davisson, Director of Litigation, Electronic Privacy Information Center (EPIC): "Meta has had two decades to clean up its privacy practices after many FTC warnings, but consistently chose not to. That's not 'tak[ing] the problem seriously,' as Meta claims—that's lawlessness. The FTC was right to take decisive action to protect Meta's most vulnerable users and ban Meta from profiting off kids and teens. It's no surprise to see Meta balk at the legal consequences of its many privacy violations, but this action is well within the Commission's power to take."

    Haley Hinkle, Policy Counsel, Fairplay: "Meta has been under the FTC's supervision in this case for over a decade now and has had countless opportunities to put user privacy over profit. The Commission's message that you cannot monetize minors' data if you can't or won't protect them is urgent and necessary in light of these repeated failures to follow the law. Kids and teens are uniquely vulnerable to the harms that result from Meta's failure to run an effective privacy program, and they can't wait for change any longer."

    R.J. Cross, Director of U.S. PIRG's Don't Sell My Data campaign: "The business model of social media is a recipe for unhappiness.
We're all fed content about what we should like and how we should look, conveniently presented alongside products that will fix whatever problem with our lives the algorithm has just helped us discover. That's a hard message to hear day in and day out, especially when you're a teen. We're damaging the self-confidence of some of our most impressionable citizens in the name of shopping. It's absurd. It's time to short-circuit the business model."

    ###
  • "By clarifying what types of data constitute personal data under COPPA, the FTC ensures that COPPA keeps pace with the 21st century and the increasingly sophisticated practices of marketers," said Katharina Kopp, Director of Policy at the Center for Digital Democracy. "As interactive technologies evolve rapidly, COPPA must be kept up to date and reflect changes in the way children use and access these new media, including virtual and augmented realities. The metaverse typically involves a convergence of physical and digital lives, where avatars are digital extensions of our physical selves. We agree with the FTC that an avatar's characteristics and behavior constitute personal information. And as virtual and augmented reality interfaces allow for the collection of extensive sets of personal data, including sensitive and biometric data, this data must be considered personal information under COPPA. Without proper protections, this highly coveted data would be exploited by marketers and used to further manipulate and harm children online."
  • Contact: Katharina Kopp, kkopp [at] democraticmedia.org

    "We welcome the FTC's action to address the rampant commercial surveillance of children via Internet of Things (IoT) devices, such as Amazon's Echo, and its enforcement of existing law," said Katharina Kopp, Director of Policy at the Center for Digital Democracy. "Children's data is taken from them illegally and surreptitiously on a massive scale via IoT devices, including their voice recordings and data gleaned from kids' viewing, reading, listening, and purchasing habits. These violations in turn lead to further exploitation and manipulation of children and teens: they violate children's privacy, manipulate them into being interested in harmful products, undermine their autonomy and hook them on digital media, and perpetuate discrimination and bias. As Commissioner Bedoya's separate statement points out, with this proposed order the FTC warns companies that they cannot take data from children and teens (and others) illegitimately to develop even more sophisticated methods of taking advantage of them. Both the FTC and the Department of Justice must hold Amazon accountable."
  • FACT SHEET
    Summary of the Kids Online Safety Act

    As Congressional hearings, media reports, academic research, whistleblower disclosures, and heartbreaking stories from youth and families have repeatedly shown, social media platforms have exacerbated the mental health crisis among children and teens: fostering body image issues, creating addiction-like use, promoting products that are dangerous for young audiences, and fueling destructive bullying. The Kids Online Safety Act (KOSA) provides children, adolescents, and parents with the tools, safeguards, and transparency they need to protect against threats to young people's health and wellbeing online. The design and operation of online platforms have a significant impact on these harms, from recommendation systems that send kids down rabbit holes of destructive content to weak protections against relentless bullying.

    KOSA would provide safeguards and accountability by:

    Creating a duty of care for social media platforms to prevent and mitigate specific dangers to minors in the design and operation of their products, including the promotion of suicidal behaviors, eating disorders, substance use, sexual exploitation, advertisements for tobacco and alcohol, and more.

    Requiring social media platforms to provide children and adolescents with options to protect their information, disable addictive product features, and opt out of algorithmic recommendations. Platforms are required to enable the strongest settings by default.

    Giving parents new tools to help support their children, and providing them (as well as schools) a dedicated reporting channel to raise issues (such as harassment or threats) with the platforms.

    How Online Harms Impact LGBTQ+ Communities

    Social media can be an important tool for self-discovery, expression, and community. However, online platforms have failed to take basic steps to protect their users from profound harm and have put profit ahead of safety. Companies have engineered their products to keep young users on their sites for as long as possible, even when the means of driving that use are harmful. In documents provided by a whistleblower, Facebook's own researchers described Instagram as a "perfect storm" that "exacerbates downward spirals" while producing hundreds of millions of dollars in revenue annually. Academic research and surveys show this "perfect storm" weighs most heavily on LGBTQ+ children and adolescents, who are more at risk of bullying, threats, and suicidal behaviors on social media. Some of these harms, and examples of the protections KOSA would provide, include:

    LGBTQ+ youth are more at risk of cyberbullying and harassment.
    LGBTQ+ high school students consistently report higher rates of cyberbullying than their heterosexual peers, and suffer more severe forms of harassment, such as stalking, non-consensual imagery, and violent threats. Surveys have found that 56% of LGBTQ+ students had been cyberbullied in their lifetime, compared to 32% of non-LGBTQ+ students. One in three young LGBTQ+ people say they have been sexually harassed online, four times as often as other young people.

    LGBTQ+ youth are more at risk for eating disorders and substance use.
    Young LGBTQ+ people experience significantly higher rates of eating disorders and substance use than their heterosexual and cisgender peers.
    Transgender and nonbinary youth are at even higher risk for eating disorders, and Black LGBTQ+ youth are diagnosed at half the rate of their white peers. Prolonged use of social media is linked with negative appearance comparison, which in turn increases the risk of eating disorder symptoms. Engagement-based algorithms feed extreme eating disorders by recommending more eating disorder content to vulnerable users (every click or view sends more destructive content to a user). For example, TikTok began recommending eating disorder content within 8 minutes of a new account being created, and Instagram was found to deluge a new user with eating disorder recommendations within one day.

    How KOSA Will Help:
    KOSA would require that platforms give users the ability to turn off engagement-based algorithms, or options to influence the recommendations they receive. A user would be able to stop recommendation systems that are sending them toxic content. KOSA's duty of care requires platforms to prevent and mitigate cyberbullying. It also requires that platforms give users options to restrict messages from other users and to make their profiles private. It would require platforms to provide a point of contact for users to report harassment, and it mandates that platforms respond to these reports within a designated time frame.

    LGBTQ+ youth are more at risk of suicide and suicidal behaviors.
    Exposure to hateful messaging online, in tandem with self-harm material on social media, increases the risk of suicidal behaviors and/or suicide. These risks are exacerbated when platform recommendation systems amplify hateful content and self-harm content. For example, after a new teen account was created on TikTok, suicide content was recommended within three minutes. Surveys have found that 42% of LGBTQ+ youth have seriously considered attempting suicide, including more than half of transgender and nonbinary youth. Moreover, eating disorders, depression, bullying, substance use, and other mental health harms that fall harder on LGBTQ+ communities further increase the risks of self-harm and suicide.

    How KOSA Will Help:
    In addition to the core safeguards and options provided to kids, such as controls and transparency over algorithmic recommendation systems, KOSA's duty of care would require platforms to consider and address the ways in which their recommendation systems promote suicide and suicidal behaviors, creating incentives for platforms to provide self-help resources, uplift information about recovery, and prevent their algorithms from pushing users down rabbit holes of harmful and deadly content.

    Protections for LGBTQ+ Communities
    The reintroduction of the Kids Online Safety Act takes into account recommended edits from a diverse group of organizations, researchers, youth, and families. The result, shaped by experts in the field and those with lived experience, is a thoughtful and tailored bill designed to be a strong step in advancing a core set of accountability provisions that give children, adolescents, and families a safer online experience. Below is a summary comparing concerns about the previous bill text with the changes made for reintroduction.

    Concern with previous draft: The "duty of care" is too vague, creating liability for broad and undefined harms to children and teens.
    How the current draft protects LGBTQ+ youth: The duty of care is now limited to a set of specific harms that have been shown to be exacerbated by online platforms' product designs and algorithms. The specific harms are focused on serious threats to the wellbeing of young users, such as eating disorders, substance use, depression, anxiety, suicidal behaviors, physical violence, sexual exploitation, and the marketing of narcotics, tobacco, gambling, and alcohol. The terms used to describe those harms are linked to clinical or legal definitions where there is a perceived risk of misuse. In addition, the duty of care includes a limitation to ensure it is not construed to require platforms to block access to content that a young user specifically requests, or to evidence-informed medical information and support resources.

    Concern with previous draft: The inclusion of "grooming" in the duty of care could be weaponized against entities providing information about gender-affirming care.
    How the current draft protects LGBTQ+ youth: "Grooming" was cut from the bill. Sexual exploitation and abuse are now defined using existing federal criminal statutes to prevent politicization or distortion of terms.

    Concern with previous draft: The duty of care to prevent and mitigate "self-harm" or "physical harm" could be weaponized against trans youth and those who provide information about gender-affirming care.
    How the current draft protects LGBTQ+ youth: The specific reference to "self-harm" has been removed from the duty of care. "Physical harm" has been changed to "physical violence" to enhance clarity. Other covered harms related to "self-harm" are addressed using terminology anchored in medical definitions.

    Concern with previous draft: KOSA will allow non-supportive parents to surveil LGBTQ+ youth online.
    How the current draft protects LGBTQ+ youth: The legislation clarifies the tools available to protect kids and accounts for the developmental differences between children and young teens. KOSA has always required that children and adolescents be notified if parental controls are turned on, and that kids know before parents are informed about the creation of a new account. For teens, the bill requires platforms to give parents the ability to restrict purchases, view metrics on how much time a minor is spending on a platform, and view (but not change) account settings. It does not require the disclosure of a minor's browsing behavior, search history, messages, or other content or metadata of their communications.

    Concern with previous draft: KOSA will lead to privacy-invasive age verification across the internet.
    How the current draft protects LGBTQ+ youth: KOSA never required age verification or age gating, nor did it create liability for companies if kids lie about their age. The bill explicitly states that companies are not required to age-gate or collect additional data to determine a user's age. Additionally, a knowledge standard is applied more consistently across the bill to clarify that companies are not liable if they have no knowledge of whether a user is a child or adolescent.

    Concern with previous draft: KOSA will affect access to sexual health information, schools, or nonprofit services.
    How the current draft protects LGBTQ+ youth: KOSA's requirements apply only to commercial online platforms, such as social media and games, which have been the largest source of issues for kids online. Nonprofits, schools, and broadband services are exempt from KOSA, and a previous reference to "educational services" was removed from the bill's "covered platform" definition. KOSA does not apply to health sites or other information resources.
  • The Honorable Joseph R. Biden
    President of the United States
    The White House
    1600 Pennsylvania Avenue NW
    Washington, DC 20500

    May 23, 2023

    Dear President Biden:

    The undersigned civil rights, consumer protection, and other civil society organizations write to express concern about the digital trade negotiations underway as part of the proposed Indo-Pacific Economic Framework (IPEF).

    Civil society advocates and officials within your own administration have raised increasing concern about discrimination, racial disparities, and inequities that may be "baked into" the algorithms that make decisions about access to jobs and housing, health care, prison sentencing, educational opportunity, insurance rates and lending, deployment of police resources, and much more. To address these injustices, we have advocated for anti-discrimination protections and algorithmic transparency and fairness. We have been pleased that these concepts are incorporated into your recent Executive Order on racial equity,1 as well as the White House's AI Bill of Rights2 and many other policy proposals. The DOJ, FTC, CFPB, and EEOC also recently released a joint statement underscoring their commitment to combating discrimination in automated systems.3 Any trade agreement must be consistent with, and not undermine, these policies and the values they are advancing.

    Now, we have learned that the U.S. may be considering proposals for IPEF and other trade agreement negotiations that could sabotage efforts to prevent and remedy algorithmic discrimination, including provisions that could potentially preempt executive and Congressional legal authority to advance these goals. Such provisions may make it harder or impossible for Congress or executive agencies to adopt appropriate policies while also respecting our international trade commitments. For example, trade provisions that guarantee digital firms new secrecy rights over source code and algorithms could thwart potential algorithmic impact assessment and audit requirements, such as testing for racial bias or other violations of U.S. law and regulation. And because the trade negotiations are secret, we do not know how the exact language could affect pivotal civil rights protections. Including such industry-favored provisions in trade deals like IPEF would be a grievous error and would undermine the Administration's own policy goals.

    We urge the administration not to submit any proposals that could undermine the ability to protect the civil rights of people in the United States, particularly with regard to digital trade. Moreover, there is a great need for transparency in these negotiations. Text already proposed should be made public so the civil rights community and relevant experts can challenge any provisions that could undermine administration goals regarding racial equity, transparency, and fairness.

    We know that your administration shares our goals of advancing racial equity, including protecting the public from algorithmic discrimination. Thank you for your leadership in this area.
    For questions or further discussion, please contact Harlan Yu (harlan@upturn.org), David Brody (dbrody@lawyerscommittee.org), and Emily Peterson-Cassin (epetersoncassin@citizen.org).

    Sincerely,

    American Civil Liberties Union
    Center for Democracy & Technology
    Center for Digital Democracy
    Data & Society Research Institute
    Demand Progress Education Fund
    Electronic Privacy Information Center (EPIC)
    Fight for the Future
    Lawyers' Committee for Civil Rights Under Law
    The Leadership Conference on Civil and Human Rights
    NAACP
    National Urban League
    Public Citizen
    Sikh American Legal Defense and Education Fund
    Upturn

    CC:
    Secretary of Commerce Gina Raimondo
    U.S. Trade Representative Katherine Tai
    National Economic Council Director Lael Brainard
    National Security Advisor Jake Sullivan
    Domestic Policy Council Director Susan Rice
    Incoming Domestic Policy Council Director Neera Tanden
    Domestic Policy Council Deputy Director for Racial Justice and Equity Jenny Yang

    1 Exec. Order No. 14091, 88 Fed. Reg. 10825, Feb. 16, 2023, available at https://www.federalregister.gov/documents/2023/02/22/2023-03779/further-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal.
    2 The White House, Blueprint for an AI Bill of Rights, Oct. 22, 2022, available at https://www.whitehouse.gov/ostp/ai-bill-of-rights.
    3 Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, CFPB, DOJ, EEOC, FTC, April 25, 2023, available at https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf.
  • CDD urges Congress to adopt stronger online safeguards for kids and teens

    Contact: Katharina Kopp, kkopp [at] democraticmedia.org

    The Children's Online Privacy Protection Act (COPPA 2.0), introduced by Senators Markey and Cassidy, will provide urgently needed online safeguards for children and teens. It will enact real platform accountability, limit the economic and psychological exploitation of children and teens online, and thus address the public health crisis they are experiencing. By banning targeted ads to young people under 16, it will significantly reduce the endless streams of data collected by online companies to profile and track them, and curtail the ability of digital marketers and platforms to manipulate, discriminate against, and exploit children and teens. COPPA 2.0 will also extend the original COPPA law's protections for youth from age 12 to age 16. The proposed law provides the ability to delete children's and teens' data with the click of an "eraser button." And with the creation of a new FTC "Youth Marketing and Privacy Division," COPPA 2.0 will ensure young people's privacy rights are enforced.
  • Meta's Virtual Reality-based Marketing Apparatus Poses Risks to Teens and Others

    Whether it's called Facebook or Meta, or known by its Instagram, WhatsApp, Messenger or Reels services, the company has always seen children and teens as a key target. The recent announcement opening up the Horizon Worlds metaverse to teens, despite calls to first ensure it will be a safe and healthy experience, is lifted out of Facebook's well-worn political playbook: make whatever promises necessary to temporarily quell any political opposition to its monetization plans. Meta's priorities are inextricably linked to its quarterly shareholder revenue reports. Selling our "real" and "virtual" selves to marketers is its only real source of revenue, a higher priority than any self-regulatory scheme Meta offers claiming to protect children and teens.

    Meta's focus on creating more immersive, AI/VR, metaverse-connected experiences for advertisers should serve as a wake-up call for regulators. Meta has unleashed a digital environment designed to trigger the "engagement" of young people with marketing, data collection and commercially driven manipulation. Action is required to ensure that young people are treated fairly, and not exposed to data surveillance, threats to their health, and other harms.

    Here are a few recent developments that should be part of any regulatory review of Meta and young people:

    Expansion of "immersive" video and advertising-embedded applications: Meta tells marketers it provides "seamless video experiences that are immersive and fueled by discovery," including the "exciting opportunity for advertisers" with its short-video "Reels" system. Through virtual reality (VR) and augmented reality (AR) technologies, we are exposed to advertising content designed to have a greater impact by influencing our subconscious and emotional processes. With AR ads, Meta tells marketers, they can "create immersive experiences, encourage people to virtually try out your products and inspire people to interact with your brand," including encouraging "people who interact with your ad… [to] take photos or videos to share their experience on Facebook Feed, on Facebook and Instagram Stories or in a message on Instagram." Meta has also been researching uses of AR and VR that will make its ad and marketing messaging even more compelling.

    Expanded integration of ads throughout Meta applications: Meta allows advertisers to "turn organic image and video posts into ads in Ads Manager on Facebook Reels," including adding a "call-to-action" feature.
    It permits marketers to "boost their Reels within the Instagram app to turn them into ads…." It enables marketers "to add a 'Send Message' button to their Facebook Reels ads [that] give[s] people an option to start a conversation in WhatsApp right from the ad." This follows last year's "Boosted Reels" product release, which allowed Instagram Reels to be turned into ads as well.

    "Ads Manager" "optimization goals" that are inappropriate when used for targeting young people: These include "impressions, reach, daily unique reach, link clicks and offsite conversions." "Ad placements" to target teens are available for the "Facebook Marketplace, Facebook Feed, … Facebook Stories, Facebook in-stream video (mobile), Instagram Feed, Instagram Explore, Instagram Stories, Facebook Reels and Instagram Reels."

    The use of metrics for delivering and measuring the impact of augmented reality ads: As Meta explains, it uses:

    Instant Experience View Time: The average total time in seconds that people spent viewing an Instant Experience. An Instant Experience can include videos, images, products from a catalog, an augmented reality effect and more. For an augmented reality ad, this metric counts the average time people spent viewing your augmented reality effect after they tapped your ad.

    Instant Experience Clicks to Open: The number of clicks on your ad that open an Instant Experience. For an augmented reality ad, this metric counts the number of times people tapped your ad to open your augmented reality effect.

    Instant Experience Outbound Clicks: The number of clicks on links in an Instant Experience that take people off Meta technologies. For an augmented reality ad, this metric counts the number of times people tapped the call-to-action button in your augmented reality effect.

    Effect Share: The number of times someone shared an image or video that used an augmented reality effect from your ad. Shares can be to Facebook or Instagram Stories, to Facebook Feed or as a message on Instagram.

    These ad effects can be designed and tested through Meta's "Spark Hub" and ad manager. Such VR and other measurement systems require regulators to analyze their role and impact on youth.

    Expanded use of machine learning/AI to promote shopping via Advantage+: Last year, Meta rolled out "Advantage+ shopping campaigns, Meta's machine-learning capabilities [that] save advertisers time and effort while creating and managing campaigns. For example, advertisers can set up a single Advantage+ shopping campaign, and the machine learning-powered automation automatically combines prospecting and retargeting audiences, selects numerous ad creative and messaging variations, and then optimizes for the best-performing ads." While Meta says that Advantage+ isn't used to target teens, it deploys it for "Gen Z" audiences.
    How Meta uses machine learning/AI to target families should also be on the regulatory agenda.

    Immersive advertising will shape the near-term evolution of marketing, where brands will be "world agnostic and transcend the limitations of the current physical and digital space." The Advertising Research Foundation (ARF) predicts that "in the next decade, AR and VR hardware and software will reach ubiquitous status." One estimate is that by 2030, the metaverse will "generate up to $5 trillion in value."

    In the meantime, Meta's playbook in response to calls from regulators and advocates is to promise some safeguards, often focused on encouraging the use of what it calls "safety tools." But these tools do not ensure that teens aren't reached and influenced by AI- and VR-driven marketing technologies and applications. Meta also knows that today, ad-targeting is less important than so-called "discovery," where its purposeful melding of video content, AR effects, social interactions and influencer marketing will snare young people into its marketing "conversion" net.

    Last week, Mark Zuckerberg told investors his vision of bringing "AI agents to billions of people," as well as into his "metaverse," which will be populated by "avatars, objects, worlds, and codes to tie" online and offline together. There will be, as previously reported, an AI-driven "discovery engine" that will "increase the amount of suggested content to users."

    These developments reflect just a few of the AI- and VR-marketing-driven changes to the Meta system. They illustrate why responsible regulators and advocates must be at the forefront of holding this company accountable, especially with regard to its youth-targeting apparatus.

    Please also read Fairplay for Kids' account of Meta's long history of failing to protect children online.

    metateensaivr0523fin.pdf
    Jeff Chester
  • Reining In Meta's Digital 'Wild West' as FTC protects young people's safety, health and privacy

    Contacts:
    Jeff Chester, CDD, 202-494-7100
    David Monahan, Fairplay, 781-315-2586

    Children's advocates Fairplay and the Center for Digital Democracy respond to today's announcement that the FTC proposes action to address Facebook's privacy violations in practices impacting children and teens. See also the important new information compiled by Fairplay and CDD, linked below.

    Josh Golin, executive director, Fairplay: "The action taken by the Federal Trade Commission against Meta is long overdue. For years, Meta has flouted the law and exploited millions of children and teens in its efforts to maximize profits, with little care for the harms faced by young users on its platforms. The FTC has rightly recognized that Meta simply cannot be trusted with young people's sensitive data and has proposed a remedy in line with Meta's long history of abuse of children. We applaud the Commission for its efforts to hold Meta accountable and for taking a huge step toward creating the safe online ecosystem every young American deserves."

    Jeff Chester, executive director, Center for Digital Democracy: "Today's action by the Federal Trade Commission (FTC) is a long-overdue intervention into what has become a huge national crisis for young people. Meta and its platforms are at the center of a powerful commercialized social media system that has spiraled out of control, threatening the mental health and wellbeing of children and adolescents. The company has not done enough to address the problems caused by its unaccountable data-driven commercial platforms. Amid a continuing rise in shocking incidents of suicide, self-harm and online abuse, as well as exposés from industry whistleblowers, Meta is unleashing even more powerful data gathering and targeting tactics fueled by immersive content, virtual reality and artificial intelligence, while pushing youth further into the metaverse with no meaningful safeguards. Parents and children urgently need the government to institute protections for the 'digital generation' before it is too late. Today's action by the FTC limiting how Meta can use the data it gathers will bring critical protections to both children and teens. It will require Meta/Facebook to engage in a proper due diligence process when launching new products targeting young people, rather than its current 'release first and address problems later' approach. The FTC deserves the thanks of U.S. parents and others concerned about the privacy and welfare of our 'digital generation.'"

    NEW REPORTS:
    META HAS A LONG HISTORY OF FAILING TO PROTECT CHILDREN ONLINE (from Fairplay)
    META'S VIRTUAL REALITY-BASED MARKETING APPARATUS POSES RISKS TO TEENS AND OTHERS (from CDD)
  • Advocates Fairplay, Eating Disorders Coalition, Center for Digital Democracy, and others announce support of the newly reintroduced Kids Online Safety Act

    Contact: David Monahan, Fairplay (david@fairplayforkids.org)

    Advocates pledge support for landmark bill requiring online platforms to protect kids and teens with a "safety by design" approach

    BOSTON, MA and WASHINGTON, DC — May 2, 2023 — Today, a coalition of leading advocates for children's rights, health, and privacy lauded the introduction of the Kids Online Safety Act (KOSA), a landmark bill that would create robust protections for children and teens online. Among the advocates pledging support for KOSA are Fairplay, the Eating Disorders Coalition, the American Academy of Pediatrics, the American Psychological Association, and Common Sense.

    KOSA, a bipartisan bill from Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN), would make online platforms and digital providers abide by a "duty of care" requiring them to eliminate or mitigate the impact of harmful content on their platforms. The bill would also require platforms to default to the most protective settings for minors and enable independent researchers to access "black box" algorithms to assist in research on algorithmic harms to children and teens.

    The reintroduction of the Kids Online Safety Act coincides with a rising tide of bipartisan support for action to protect children and teens online amidst a growing youth mental health crisis. A February report from the CDC showed that teen girls and LGBTQ+ youth are facing record levels of sadness and despair, and another report, from Amnesty International, indicated that 74% of youth check social media more than they'd like.

    Fairplay Executive Director, Josh Golin: "For far too long, Big Tech have been allowed to play by their own rules in a relentless pursuit of profit, with little regard for the damage done to the children and teens left in their wake. Companies like Meta and TikTok have made billions from hooking kids on their products by any means necessary, even promoting dangerous challenges, pro-eating disorder content, violence, drugs, and bigotry to the kids on their platforms. The Kids Online Safety Act stands to change all that. Today marks an exciting step toward the internet every young person needs and deserves, where children and teens can explore, socialize and learn without being caught in Big Tech crossfire."

    National Alliance for Eating Disorders CEO and EDC Board Member, Johanna Kandel: "The Kids Online Safety Act is an integral first step in making social media platforms a safer place for our children. We need to hold these platforms accountable for their role in exposing our kids to harmful content, which is leading to declining mental health, higher rates of suicide, and eating disorders. As both the CEO of an eating disorders nonprofit and a mom of a young child, I believe these new laws would go a long way in safeguarding the experiences our children have online."

    Center for Digital Democracy Deputy Director, Katharina Kopp: "The Kids Online Safety Act (KOSA), co-sponsored by Senators Blumenthal and Blackburn, will hold social media companies accountable for their role in the public health crisis that children and teens experience today. It will require platforms to make better design choices that ensure the well-being of young people.
KOSA is urgently needed to stop online companies from operating in ways that encourage self-harm, suicide, eating disorders, substance use, sexual exploitation, patterns of addiction-like behaviors, and other mental and physical threats. It also provides safeguards to address unfair digital marketing tactics. Children and teens deserve an online environment that is safe. KOSA will significantly reduce the harms that children, teens, and their families experience online every day."

    Children and Screens: Institute of Digital Media and Child Development Executive Director, Kris Perry: "We appreciate the Senators' efforts to protect children in this increasingly complicated digital world. KOSA will allow access to critical datasets from online platforms for academic and research organizations. This data will facilitate scientific research to better understand the overarching impact social media has on child development."

    ###

    kosa_reintro_pr.pdf