AI Forensics

AI Forensics is a European non-profit that investigates influential and opaque algorithms to defend digital rights.

About us

AI Forensics (previously known as Tracking Exposed) is a European non-profit that investigates influential and opaque algorithms. We hold major technology platforms accountable by conducting independent, high-profile technical investigations to uncover and expose the harms caused by their algorithms. Our data-driven methodology provides journalists, researchers, and policymakers with timely evidence of systematic violations of users' interests and digital rights, particularly for minority groups and communities that are often overlooked in the design of technology. We develop our own auditing tools, which we release as free software to empower the research community and strengthen the AI audit ecosystem. We believe that consistent and coordinated scrutiny is the path to restoring the balance of power between big tech platforms and their users.

We have six years of experience building free software to investigate influential platforms and algorithms through user data donations. Our infrastructure has supported dozens of research projects and audits across multiple platforms (YouTube, TikTok, PornHub, Amazon, Glovo, Deliveroo…), leading to tangible impact in tech regulation and strategic litigation.

Industry
Civic and Social Organizations
Company size
11-50 employees
Type
Nonprofit
Founded
2021
Specialties
Algorithmic Auditing, AI, Social Media, and Responsible AI

Updates

  • 🚨 NEW REPORT: Meta's Failing Ad Moderation Puts European Users at Risk

    Our latest AI Forensics investigation reveals alarming findings about Meta's ad ecosystem:
    📊 46,000+ fraudulent health advertisements approved by Meta
    👁️ 292+ million impressions delivered to European users
    🔍 Violations of at least 15 of Meta's own advertising policies
    ⚠️ Potential breaches of multiple DSA provisions

    Since August 2023, Meta has approved thousands of ads featuring celebrity deepfakes, impersonation of medical professionals, and misleading health claims targeting vulnerable populations with unapproved "cures" for weight loss, joint pain, diabetes, and more. Most concerning? This activity continues well into 2025, despite DSA obligations to mitigate such risks.

    Read the full report here: https://lnkd.in/gDf-Br8P

    #DigitalServicesAct #PlatformAccountability #MetaAdvertising #ConsumerProtection #HealthScams

  • Reflecting on this pivotal year for AI Forensics, we have much to celebrate! Over the past year, we have reached a new stage of organizational maturity, solidifying our position as a leading expert in algorithmic auditing and platform accountability. Our unique evidence collection pipeline and methodologies have enabled us to drive tangible impact: shaping platform behavior, strengthening regulation enforcement, and informing critical policy debates. Yet our accomplishments unfold against a rapidly shifting and deeply concerning techno-political landscape. As Marc Faddoul, our Co-Founder and Director, puts it: "Technology companies, previously motivated primarily by profit, are now increasingly colluding with state actors to serve political interests. At the same time, they are promoting the unchecked deployment of AI systems with far-reaching societal repercussions." In the face of these challenges, our role has never been more critical. We remain committed to exposing algorithmic harms and providing the technical expertise and evidence necessary for a technological future that upholds fundamental rights and democratic values.

  • 📢 Latest Report on Doppelgänger in The New York Times 📰 In collaboration with Checkfirst & Reset, our latest report exposes how the Kremlin-linked 'Social Design Agency' bypassed restrictions to post over 8,000 political ads on Facebook, reaching hundreds of thousands of users across Europe. Despite sanctions in the U.S. and E.U., this covert influence operation spent an estimated $338,000 on ads, raising critical questions about compliance with global regulations. The implications are clear: social media platforms must take stronger, proactive measures to combat disinformation, and regulatory bodies must enforce compliance. 📖 Read the New York Times article here: https://lnkd.in/e4JpMrsZ

  • AI Forensics reposted this

    The latest investigation from AI Forensics uncovers a glaring double standard in how Meta enforces its content moderation rules. Here's what they found: https://lnkd.in/dGJGybum #ICYMI Last year, the EU Commission opened formal proceedings against Meta over suspected breaches related to political advertising, citing AI Forensics' "No Embargo In Sight" report. Learn about the impact of this research at https://lnkd.in/dFydhdZP or in the carousel below ⤵️

  • 🚨 "Billionaire-proofing" Social Media: Free our Feeds Campaign 🚨 Social media should work for us—not for billionaires. The Free Our Feeds campaign is on a mission to build a new, independent foundation for an entire ecosystem of interconnected apps and different companies that have people’s interests at heart. By supporting the development of the AT Protocol, we can: - Empower people to create innovative, interoperable apps. - Promote transparency and accountability in digital platforms. - Give users more control over their online experiences. Help us raise $30 million to fund this independent infrastructure & share our mission to build a public-focused social media future! It’s time for social media to evolve into something better. Learn more and join the movement at freeourfeeds.com.

  • "How did several thousand ads featuring excerpts from particularly explicit pornographic films end up on mainstream social media platforms Facebook and Instagram, where sexual content is strictly forbidden?" This is the question we ask in our report "Pay-to-Play: Meta’s Community (Double) Standards on Pornographic Ads". We found thousands of pornographic ads on Meta Ad Library. When we uploaded the same pornographic visuals from a user account, they get immediately removed by Meta for violating their content policy. Meta's algorithms can identify sexually explicit content and remove it when posted from a user account, yet the same content is approved and actively distributed to millions of users when it goes through their advertisement system. See our findings on Le Monde: https://lnkd.in/dHGjWJcu

  • "The report, published Wednesday by the European nonprofit research group AI Forensics, found that Meta has run at least 3,316 unique advertisements featuring explicit adult images and videos over the past year in Europe, reaching as many as 8 million users. The ads, many of which feature graphic sexual acts, were shown mostly to men over the age of 44, thanks to the company’s ad-targeting systems." Read our findings covered by The Washington Post here: https://lnkd.in/dyKjT8rP

  • 🚨 New Report: Pay-to-Play: Meta's Community (Double) Standards on Pornographic Ads 🚨

    Our latest investigation reveals a striking double standard in Meta's content moderation practices. Despite the platform's strict Community Standards, over 3,000 pornographic advertisements featuring explicit adult content have been approved and distributed through the Meta Ad Library in the past year.

    Key findings:
    - Pornographic visuals removed when uploaded by regular users were approved as advertisements and shown to millions.
    - Many ads featured AI-generated media, including celebrity deepfakes like that of French actor Vincent Cassel.
    - Ads promoting dubious products and websites reached over 8 million impressions in the EU, primarily targeting men aged 44+.

    This isn't a technical glitch; it's a systemic issue that prioritizes ad revenue over user safety. Meta has the technology to detect explicit content but chooses not to apply it uniformly to ads.

    Read the full report here: https://lnkd.in/dGJGybum

  • AI Forensics reposted this

    🔍 A detailed report by AI Forensics reveals systemic flaws in the moderation of OpenAI's new ChatGPT Search (which can pull results from the web). The study, conducted over two crucial periods (6 and 11 November 2024), tested three critical dimensions: political disinformation, electoral propaganda, and the circulation of content from media outlets sanctioned in the EU (Russian outlets in particular). https://lnkd.in/eWdx_SEF

    Their experiments highlighted highly inconsistent moderation along three axes:
    - On electoral content: ChatGPT Search shows reasonably acceptable accuracy on factual results, but it is above all the moderation that fluctuates unpredictably. On 6 November, the system refused to answer in 30% of cases, showing a warning message. By 11 November, these guardrails had disappeared entirely.
    - On propaganda generation: The system showed some robustness by refusing to generate disinformation or explicit electoral propaganda. However, AI Forensics notes that malicious actors could potentially bypass these protections with "jailbreak" techniques.
    - On sanctioned media: This is where the flaws are most glaring. Not only does the system surface content from banned outlets (RT, Sputnik, TV Centre International, RTR Planeta, Russia 24), it also creates dangerous confusion by sometimes attributing legitimate Reuters or AP articles to these state media.

    This study is part of a series of warnings from AI Forensics about the risks of chatbots coupled with search engines. While Google and Microsoft have gradually strengthened their systems following previous studies, it is concerning to see OpenAI deploy a consumer-facing tool that reproduces known vulnerabilities, particularly during a major election period. Marc Faddoul Natalia Stanusch Miazia Schüler Raziye Buse Çetin

  • We recently co-signed the Centre for Democracy & Technology Europe's civil society statement, which calls for enhanced and meaningful transparency around VLOP and VLOSE risk assessments as a mandatory facet of the Digital Services Act. Our shared statement first demands that VLOPs and VLOSEs publish detailed methodologies for their risk assessments, outlining precisely how they were conducted, their scope, and the definitions they adopt for understanding systemic risks. We also call for a wider range of considerations, including: publishing the full list of risks identified through assessment; publishing the full list of measures taken to address identified risks; and detailing, with full transparency, the involvement of internal departments and external stakeholders in the completion of all risk assessments. The full statement and its co-signatories can be found here: https://lnkd.in/eC52Vewq
