AI-Enabled Crimes: Policing and Designated Groups (Sciences Po)
Following our recent session at Sciences Po dedicated to the geopolitics of AI and security, we joined a series of discussions in cooperation with law enforcement and policing departments. As AI systems grow more sophisticated and pervasive, they are increasingly being weaponized to target vulnerable populations, including people with disabilities, elderly individuals, and children.
The intersection of artificial intelligence and criminal activity presents urgent and complex challenges for communities that rely on digital systems for essential services, communication, and independence, particularly in civil and high-risk contexts. While AI technologies hold significant promise for accessibility and inclusion, that potential is frequently undermined by both malicious misuse and structural design flaws. These include recognition errors, such as failures to process facial differences or atypical body features: studies have found that facial analysis algorithms exhibit error rates of up to 34% for individuals with paralysis or craniofacial conditions, and word error rates of up to 50% when interpreting dysarthric speech. Additional issues include cue-identification errors, which affect users with visual or auditory impairments, and errors associated with assistive aids, where assistive devices disrupt system recognition.
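To make the word-error-rate figure concrete, here is a minimal sketch of how WER is typically computed: the word-level edit distance between a reference transcript and the system's hypothesis, divided by the length of the reference. The function and the example utterances are illustrative only, not drawn from any cited study.

```python
# Minimal word error rate (WER) sketch: WER = (S + D + I) / N,
# where S, D, I are word substitutions, deletions, and insertions
# needed to turn the reference into the hypothesis, and N is the
# number of words in the reference transcript.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Illustrative utterances: two of four reference words are misrecognized,
# so the WER is 0.5, i.e. the 50% level reported for dysarthric speech.
print(word_error_rate("call my daughter now", "fall my doctor now"))  # -> 0.5
```

A 50% WER means that, on average, every second word of the reference has to be corrected before the transcript is usable, which is why such systems can fail entirely as an access channel.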
Elder fraud alone reached $4.8 billion in losses in 2024, a 14% increase over the previous year. Voice cloning scams now affect one in four adults, with more than 75% of victims reporting financial loss. As deepfake technologies become increasingly accessible to low-skilled actors, fraudulent activity has surged, and AI-generated identity fraud has already exceeded $30 billion globally. These threats are further compounded by the hijacking of AI-driven security systems, assistive technologies, and mobility platforms, broadening the attack surface across both public and private infrastructures.
Related
Sovereign AI: Assistive Technologies And Critical Digital Capacities
AI Pact and the Next Steps for Implementing the AI Act: Access And Public Systems
OECD: AI For Assistive Technology And Labour (Report and Repository)
NIST: Generative AI Profile (AI 600-1) and AI Risk Management Framework (AI RMF)
Access Board and Other Hearings: Multimodal Models and Accessibility
AI-Aided Scams and Impersonation
Voice cloning and deepfake technologies have democratized sophisticated impersonation attacks, making them accessible to criminals with limited technical expertise. These systems analyze up to 25,000 micro-expressions and voice cues, creating convincing synthetic content that can deceive vulnerable targets. Fraudulent identity verification attempts using deepfakes have surged by 3,000% over the past year, one in four adults report having experienced an AI voice scam, and 77% of those victims lost money.
Elderly individuals face particular risk from AI-generated voice clones of family members in emergency scams, where criminals use synthesized voices to request urgent financial assistance. The emotional manipulation combined with technological sophistication creates attacks that bypass traditional skepticism.
Document and Border Fraud: AI-generated synthetic identities now enable large-scale document fraud, with particular implications for immigration systems where vulnerable populations may face increased scrutiny due to algorithmic bias. These systems can create convincing false documentation that exploits weaknesses in verification processes.
Financial Targeting: AI-powered scams specifically target individuals with cognitive disabilities or limited digital literacy, using personalized approaches that exploit known vulnerabilities. These attacks often leverage publicly available accessibility information to craft more convincing deceptions.
Identity Theft Through Synthetic Identities
Synthetic identity fraud is one of the fastest-growing financial crimes, with AI enabling the creation of entirely fabricated personas that can pass verification systems. Estimated costs have exploded from $8 billion in 2020: some projections put annual losses at $23 billion by 2030, while broader global estimates already exceed $30 billion. By some estimates, synthetic identities now account for 85-95% of fraud losses. Criminal networks also exploit structural gaps in verification systems: only 2% of transport data includes disability-specific information, and 84% of smart city designs overlook sensory and cognitive disability needs.
Biometric Spoofing: Advanced AI systems can now generate synthetic biometric data that bypasses security systems, with particular implications for individuals who rely on assistive technologies that may interact unpredictably with authentication systems. Over 35% of facial recognition systems tested in independent audits showed reduced accuracy for people with facial anomalies or assistive devices.
Hijacking of Assistive Technologies
The interconnected nature of modern assistive technologies creates new attack vectors that criminals exploit to target users with disabilities. For example, AI-driven dashboards for children with autism may expose multiple interfaces of varying structure and complexity, and over 68% of cognitive-assistive technology trials involved multimodal ecosystems integrating at least two devices, each of which widens the attack surface.
Companion Robot Manipulation: Social robots and AI companions designed for elderly care or autism support can be hijacked to collect sensitive personal information or manipulate users into harmful behaviors. The trust relationship between vulnerable users and their assistive technologies makes these attacks particularly devastating.
Infrastructure Vulnerabilities: Smart home systems designed for accessibility can be compromised to monitor, harass, or control vulnerable residents. Criminals may exploit the dependency relationship between users and their assistive technologies to maintain ongoing access and control.
Hijacking of AI-Driven Security Systems
Automated policing and security systems create new opportunities for criminal exploitation, particularly targeting communities that face disproportionate surveillance. These attacks can involve:
Drone and Tracking System Manipulation: Criminals can exploit vulnerabilities in automated surveillance systems to avoid detection while targeting vulnerable populations in their homes or communities.
False Flag Operations: AI systems can be manipulated to generate false alerts or evidence that disproportionately targets marginalized communities, exploiting existing biases in law enforcement algorithms.
Virtual Kidnapping and Psychological Manipulation
AI-generated content enables sophisticated psychological manipulation attacks that exploit the specific vulnerabilities of targeted populations. Nearly 40% of adults with cognitive disabilities report difficulties in distinguishing misinformation online, making them particularly susceptible to these tactics. Elder fraud has reached crisis levels: seniors lost $4.8 billion to scammers in 2024 alone, and complaints filed with the FBI by elderly victims rose 14% over the previous year.
Virtual kidnapping schemes use AI-generated audio or video to convince family members that loved ones are in danger, exploiting emotional bonds and creating artificial urgency that bypasses rational decision-making processes.
Addictive and Manipulative Design
AI systems specifically designed to exploit cognitive vulnerabilities represent a form of technological abuse that disproportionately affects vulnerable populations. The DSA describes "dark patterns" as practices that "materially distort or impair" users' ability to make "autonomous and informed choices," providing language to advocate against designs that particularly affect vulnerable users.
Predatory Algorithmic Targeting: Machine learning systems can identify and exploit psychological vulnerabilities, creating personalized manipulation strategies that target individuals with mental health conditions, cognitive disabilities, or substance abuse histories.
Systemic Recognition Errors as Weaponized Exclusion
Criminals exploit known biases and errors in AI recognition systems to systematically exclude or harm vulnerable populations. The discriminatory impact is stark: facial recognition error rates are as low as 0.8% for light-skinned men but reach 34.7% for darker-skinned women. As noted above, facial analysis algorithms show error rates of up to 34% for people with paralysis or craniofacial conditions, and speech systems reach word error rates of up to 50% on dysarthric speech.
These systematic failures can be weaponized to:
Deny access to essential services
Create false documentation of non-compliance
Generate pretexts for discriminatory enforcement actions
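To see how unequal error rates translate into systematic exclusion, a minimal calculation using the figures cited above (and assuming, purely for illustration, that each verification attempt fails independently at the published per-attempt error rate) shows how quickly the risk of at least one false rejection diverges between groups.

```python
# Illustrative only: assumes each verification attempt fails independently
# at the published per-attempt error rate, which is a simplification.
def p_at_least_one_false_rejection(error_rate: float, attempts: int) -> float:
    return 1 - (1 - error_rate) ** attempts

for label, rate in [("light-skinned men (0.8%)", 0.008),
                    ("darker-skinned women (34.7%)", 0.347)]:
    p = p_at_least_one_false_rejection(rate, attempts=5)
    print(f"{label}: {p:.1%} chance of at least one false rejection in 5 checks")
# Roughly 3.9% versus 88% over five checks: the same system,
# a very different risk of being locked out or flagged.
```

Under these illustrative assumptions, a person in the high-error group is almost certain to be falsely rejected at some point during routine, repeated checks, which is exactly the kind of systematic failure that can be weaponized as a pretext for denial of service or enforcement action.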
Healthcare and Education System Exploitation
Model Poisoning in Healthcare: Criminals can introduce biased or harmful data into medical AI systems that disproportionately affect vulnerable populations, leading to misdiagnosis or inappropriate treatment recommendations.
Educational AI Manipulation: AI tutoring and assessment systems can be compromised to provide false information or create discriminatory educational outcomes that particularly harm students with disabilities or from marginalized communities.
Unauthorized Data Scraping and Privacy Violations
Some 74% of people with disabilities are concerned about their data being misused in AI systems without proper consent; children are especially vulnerable because their data may be collected or inferred without their awareness.
Criminal networks specifically target disability-related data for:
Insurance fraud and discrimination
Benefits fraud schemes
Medical identity theft
Targeted scam development
AI-Driven Misinformation Networks
Sophisticated misinformation campaigns use AI to generate content specifically designed to exploit the information-processing differences of neurodivergent individuals or people with cognitive disabilities. Common techniques include:
Echo Chamber Creation: AI algorithms create personalized misinformation environments that exploit confirmation bias and limited media literacy among vulnerable populations.
Synthetic Media Campaigns: Deepfake videos and synthetic audio target specific communities with misinformation designed to promote harmful behaviors or create social division.
AI in Judicial Decisions and Assessments
The use of AI in legal and administrative decision-making creates new opportunities for systemic discrimination and abuse. Recital 58 of the AI Act notes that "natural persons applying for or receiving essential public assistance benefits are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities."
Algorithmic Bias Exploitation: Criminals and corrupt officials can exploit known biases in judicial AI systems to achieve discriminatory outcomes against vulnerable populations.
Assessment Manipulation: AI systems used for disability assessments, child custody decisions, or benefit determinations can be manipulated to produce systematically biased results.
Legal and Regulatory Framework
The EU AI Act provides several relevant protections for vulnerable populations. Article 5(1)(b) prohibits AI that exploits vulnerabilities, banning "the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation."
The Digital Services Act, through Article 34, requires platforms to assess systemic risks, including "any actual or foreseeable negative effects" on vulnerable groups.
Mitigation and Protection Strategies
Enhanced Authentication: Multi-modal verification systems that account for assistive technology use and disability-related variations in biometric data (a minimal sketch follows this list).
Vulnerability-Aware Design: AI systems must be designed with explicit consideration of how they might be exploited to harm vulnerable populations.
Community-Based Monitoring: Collaborative approaches that involve disability advocacy organizations and community groups in identifying and responding to AI-enabled crimes.
Regulatory Sandboxes: Testing protocols for safety, transparency, and bias mitigation tailored to assistive contexts, including adversarial testing, edge case evaluation, and long-term reliability assessment.
Legal Protections: Strengthened enforcement of existing disability rights laws in digital contexts, with specific provisions for AI-enabled crimes.
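As a rough illustration of the "Enhanced Authentication" strategy above, the sketch below combines whatever verification signals are available and routes low-confidence cases, for example when an assistive device disrupts face capture, to step-up or human review rather than automatic rejection. All signal names, weights, and thresholds are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical multi-modal verification sketch: the weights, thresholds,
# and signal names are illustrative, not a production policy.
@dataclass
class VerificationSignals:
    face_match: float | None   # None if capture failed (e.g. assistive device, facial difference)
    voice_match: float | None  # None if speech could not be processed
    document_check: float
    device_trust: float

def decide(signals: VerificationSignals) -> str:
    weighted, total_weight = 0.0, 0.0
    for score, weight in [(signals.face_match, 0.35),
                          (signals.voice_match, 0.25),
                          (signals.document_check, 0.25),
                          (signals.device_trust, 0.15)]:
        if score is not None:          # skip modalities that could not be captured
            weighted += score * weight
            total_weight += weight
    confidence = weighted / total_weight if total_weight else 0.0

    if confidence >= 0.85:
        return "approve"
    if confidence >= 0.60:
        # Low-confidence biometrics (common with assistive devices) trigger
        # step-up verification instead of an automatic rejection.
        return "step_up_review"
    return "manual_review"

# Example: face capture failed, but other signals still support the claim.
print(decide(VerificationSignals(face_match=None, voice_match=0.9,
                                 document_check=0.95, device_trust=0.8)))
```

The design point is that missing or degraded biometric signals, which are disproportionately common for users of assistive technologies, should reduce confidence and escalate review rather than produce an outright denial that criminals or biased systems could exploit as documented "non-compliance."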
Implementation Timeline and Recommendations
Immediate Actions (0-6 months):
Establish specialized cybercrime units trained in disability-related AI crimes
Develop reporting mechanisms accessible to vulnerable populations
Create rapid response protocols for AI-enabled attacks on assistive technologies
Medium-term Goals (6-24 months):
Implement mandatory security standards for assistive AI systems
Establish vulnerability disclosure programs for disability-focused technologies
Develop AI literacy programs tailored for vulnerable populations
Long-term Objectives (2-5 years):
Create comprehensive legal frameworks addressing AI crimes against vulnerable populations
Establish international cooperation mechanisms for cross-border AI crimes
Develop AI systems specifically designed to detect and prevent exploitation of vulnerable users
The intersection of AI technology and criminal activity creates new forms of vulnerability that disproportionately affect already marginalized populations. Addressing these challenges requires coordinated action across technical, legal, and social domains, with vulnerable communities at the center of solution development rather than as afterthoughts in security planning.
References
¹ European Union. "Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)." Official Journal of the European Union. June 13, 2024.
² European Union. "Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services (Digital Services Act)." Official Journal of the European Union. October 19, 2022.
³ Federal Bureau of Investigation. "2023 Internet Crime Report: Elder Fraud and Synthetic Identity Schemes." IC3 Annual Report. 2024.
⁴ National Institute of Standards and Technology. "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (AI 600-1)." NIST AI 600-1. July 26, 2024.
⁵ United Nations Office on Drugs and Crime. "Cybercrime and AI: New Challenges for Vulnerable Populations." Global Report on Cybercrime. 2024.
⁶ Cybersecurity and Infrastructure Security Agency. "Artificial Intelligence Crimes Against Critical Infrastructure." CISA Advisory. 2024.
⁷ World Health Organization. "AI-Related Crimes in Healthcare Settings: Protecting Vulnerable Patients." WHO Technical Report. 2024.
⁸ European Commission. "Protecting Vulnerable Groups from AI-Enabled Crimes: Policy Guidelines." European Commission Staff Working Document. 2024.