Generative AI and Accessibility: U.S. Access Board and Other Hearings
Echoing our previous input on the use of generative AI to enable accessibility and assistive technologies, including contributions to NIST and PCAST open calls, we were invited to contribute to the hearings organized by the U.S. Access Board and the Center for Democracy and Technology. The hearings support the Access Board’s work under the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: engaging with disability community members and AI practitioners to learn about the risks and benefits of AI.
Updated: Our perspective was also cited by the Consumer Technology Association in hearings before the Federal Communications Commission addressing the Twenty-First Century Communications and Video Accessibility Act of 2010 (CVAA).
Related
NIST: Generative AI Profile (AI 600-1) and AI Risk Management Framework (AI RMF)
WEF: Generative AI’s Potential for Accessibility (1, 2, 3, 4, 5, 6)
Technical Applications and Implementation
Large Language Models for Adaptive Communication
Transformer-based language models use attention mechanisms to enable adaptive, personalized dialogue, which is crucial for users with cognitive impairments, autism, or speech difficulties. These models, ranging from compact (7 billion parameters) to large-scale (175 billion or more), can be fine-tuned on disability-specific data using parameter-efficient techniques such as LoRA and QLoRA, which allow personalization without retraining the entire model. Newer architectures with longer context windows also support extended conversations, retaining a user’s history and preferences across long interactions.
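As a minimal sketch of this kind of parameter-efficient personalization, the example below assumes the Hugging Face transformers and peft libraries; the base model, target modules, and hyperparameters are illustrative placeholders rather than recommendations.

```python
# Sketch: attaching LoRA adapters so a compact language model can be personalized
# on disability-specific dialogue data without retraining the full network.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "facebook/opt-350m"  # illustrative compact model; any causal LM could stand in
tokenizer = AutoTokenizer.from_pretrained(base_model_name)  # used to prepare dialogue data (not shown)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Low-rank adapters on the attention projections keep the trainable parameter
# count small enough for per-user or per-community fine-tuning.
lora_config = LoraConfig(
    r=8,                                   # adapter rank
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (architecture-dependent)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model's parameters
```

QLoRA follows the same pattern but loads the base model in quantized 4-bit precision first, which is what makes low-cost or on-device personalization plausible.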
Vision-Language Models for Environmental Understanding
VLMs including CLIP, BLIP-2, and LLaVA demonstrate capabilities for real-time visual scene understanding and narration. These models process interleaved vision and text data through cross-attention mechanisms, and may enable sub-100ms latency inference pipelines on edge devices. Applications include instantaneous scene description for blind users, object recognition systems, and situational awareness tools that incorporate spatial reasoning via semantic scene graphs.
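As one hedged illustration of such a pipeline, the sketch below uses a publicly available BLIP-2 checkpoint via the transformers library; the image file, prompt wording, and generation settings are assumptions for the example, and real-time edge deployment would additionally require quantization and hardware-specific optimization.

```python
# Sketch: producing a short scene description from a camera frame with BLIP-2.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b").to(device)

image = Image.open("street_crossing.jpg")  # hypothetical camera frame
prompt = "Question: Describe this scene for a blind pedestrian. Answer:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)
generated_ids = model.generate(**inputs, max_new_tokens=60)
description = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(description)  # text can then be routed to a screen reader or TTS engine
```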
Multimodal Generation for Alternative Format Creation
Text-to-speech models like Bark, VALL-E, and Tortoise TTS could enable dynamic content conversion for users with visual impairments or reading disabilities. These neural audio synthesis systems support voice cloning, emotional tone adaptation, and multilingual output. Integration with document processing pipelines allows automatic conversion of academic papers, legal documents, and web content into accessible audio formats with preserved semantic structure.
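As a hedged sketch of that conversion step, the example below uses the suno/bark-small checkpoint through the transformers library; the input text, voice preset, and output filename are illustrative, and the document-parsing stage is not shown.

```python
# Sketch: converting an extracted document passage into a WAV file with Bark.
import scipy.io.wavfile
from transformers import AutoProcessor, BarkModel

processor = AutoProcessor.from_pretrained("suno/bark-small")
model = BarkModel.from_pretrained("suno/bark-small")

text = "Section 2: Eligibility requirements. You may qualify if you meet the criteria below."
inputs = processor(text, voice_preset="v2/en_speaker_6")  # preset controls speaker identity and tone

audio_array = model.generate(**inputs).cpu().numpy().squeeze()
sample_rate = model.generation_config.sample_rate
scipy.io.wavfile.write("passage.wav", rate=sample_rate, data=audio_array)
```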
Computer Vision for Sign Language Processing
Specialized models including SignBERT and Sign Language Transformer architectures process American Sign Language (ASL) and other sign languages through pose estimation and temporal sequence modeling. These systems employ MediaPipe for hand tracking combined with transformer-based sequence-to-text translation, potentially enabling real-time interpretation services and educational applications for deaf and hard-of-hearing users.
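The front end of such a pipeline can be sketched with MediaPipe’s hand-tracking solution, as below; the webcam source is a placeholder, and the downstream transformer that maps landmark sequences to text is deliberately omitted.

```python
# Sketch: extracting per-frame hand landmarks as features for a sign-to-text sequence model.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(
    static_image_mode=False,       # video mode: track hands across frames
    max_num_hands=2,
    min_detection_confidence=0.5,
)

cap = cv2.VideoCapture(0)  # hypothetical webcam stream
frame_features = []        # one landmark vector per frame, later fed to a sequence model

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Flatten the 21 (x, y, z) landmarks of each detected hand into one vector.
        vector = [
            coord
            for hand in results.multi_hand_landmarks
            for landmark in hand.landmark
            for coord in (landmark.x, landmark.y, landmark.z)
        ]
        frame_features.append(vector)

cap.release()
hands.close()
```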
Generative Models for Cognitive Assistance
GPT-based models fine-tuned for cognitive support applications show potential to assist users with memory impairments, ADHD, and executive function disorders. Technical implementation involves prompt engineering frameworks that break complex tasks into manageable steps, memory-augmented architectures for context persistence, and integration with calendar and reminder systems through standardized APIs.
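A minimal, provider-agnostic sketch of the task-decomposition pattern follows; call_llm is a hypothetical stand-in for whatever chat-completion client is used, and the prompt wording is illustrative.

```python
# Sketch: prompt pattern that breaks a complex task into short, manageable steps.
import json
from typing import Callable

SYSTEM_PROMPT = (
    "You are a cognitive-support assistant. Break the user's task into short, "
    "numbered steps of one action each, using plain language. "
    'Return JSON of the form {"steps": ["..."]}.'
)

def decompose_task(task: str, call_llm: Callable[[str, str], str]) -> list[str]:
    """Ask the model for a step-by-step plan and parse it into a checklist."""
    raw = call_llm(SYSTEM_PROMPT, f"Task: {task}")
    try:
        return json.loads(raw)["steps"]
    except (json.JSONDecodeError, KeyError, TypeError):
        # Fall back to line splitting if the model did not return valid JSON.
        return [line.strip() for line in raw.splitlines() if line.strip()]

# Example: steps = decompose_task("Renew my driver's license", call_llm=my_client)
# Each step can then be pushed to calendar or reminder services through their standard APIs.
```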
Adaptive Interface Generation
Models like LayoutLM and UI-focused language models could enable automatic generation of accessible user interfaces. These systems analyze user interaction patterns and accessibility needs to dynamically adjust font sizes, color contrasts, navigation structures, and input modalities. Implementation involves real-time DOM manipulation and CSS generation based on WCAG 2.1 AA compliance requirements.
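The compliance check itself is well defined: WCAG 2.1 specifies relative-luminance and contrast-ratio formulas, with a 4.5:1 threshold for normal text at level AA (3:1 for large text). The sketch below implements that check so dynamically generated color choices can be validated or rejected; it covers only color contrast, one of several AA requirements.

```python
# Sketch: WCAG 2.1 contrast check for validating dynamically generated CSS colors.
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG 2.1 relative luminance of an sRGB color (components 0-255)."""
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

def meets_aa(fg: tuple[int, int, int], bg: tuple[int, int, int], large_text: bool = False) -> bool:
    """Level AA requires a 4.5:1 ratio for normal text and 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

# Example: mid-gray on white narrowly fails AA for normal text,
# so a generated theme would be darkened before the CSS is emitted.
assert not meets_aa((119, 119, 119), (255, 255, 255))
```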
Safety and Training Environments
Deployment of generative AI in accessibility contexts requires comprehensive safety frameworks encompassing both controlled training environments and regulatory oversight. Simulators including Habitat, Isaac Sim, and specialized accessibility-focused platforms may enable safe, scalable training of generative AI assistive agents through physics-accurate simulations with photorealistic rendering. These high-fidelity environments support testing across varied disability scenarios, incorporating assistive device interactions, wheelchair navigation dynamics, and adaptive interface behaviors through advanced physics engines including PhysX and Bullet.
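The interface between such simulators and the agents trained in them can be illustrated with a small gymnasium-style environment; this is a hypothetical stand-in, not the Habitat or Isaac Sim API, and the observation, action, and reward definitions are placeholder assumptions.

```python
# Sketch: a toy wheelchair-navigation scenario framed as a gymnasium-style environment.
# A real simulator would supply physics, photorealistic observations, and device models.
import gymnasium as gym
import numpy as np
from gymnasium import spaces

class WheelchairNavEnv(gym.Env):
    """Toy scenario: reach a goal position on a bounded 2D plane."""

    def __init__(self):
        # Observation: agent (x, y) plus goal (x, y); action: velocity command (dx, dy).
        self.observation_space = spaces.Box(low=-10.0, high=10.0, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = np.zeros(2, dtype=np.float32)
        self.goal = self.np_random.uniform(-5.0, 5.0, size=2).astype(np.float32)
        return np.concatenate([self.pos, self.goal]), {}

    def step(self, action):
        self.pos = np.clip(self.pos + 0.5 * np.asarray(action, dtype=np.float32), -10.0, 10.0)
        distance = float(np.linalg.norm(self.goal - self.pos))
        terminated = distance < 0.5            # goal reached
        reward = -distance                     # dense shaping toward the goal
        return np.concatenate([self.pos, self.goal]), reward, terminated, False, {}
```

Standardizing on such an interface lets the same assistive agent be evaluated across many disability scenarios by swapping environments rather than rewriting training code.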
Robust datasets capturing disability-specific interactions and communication patterns are essential for training systems that serve accessibility needs effectively. Current datasets like Common Voice and OpenAssistant require significant augmentation with disability-focused annotations encompassing multimodal accessibility scenarios, speech pattern variations, and alternative communication methods. Annotation standards must include accessibility features, communication preferences, and interaction success metrics across diverse disability types and intersectional identities to ensure inclusive model development.
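As a hedged sketch of what a single annotation record could capture, the schema below uses illustrative field names; it is not an existing standard, and any real schema would be developed with the disability community.

```python
# Sketch: one possible annotation record for accessibility-focused interaction data.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AccessibilityAnnotation:
    sample_id: str
    modality: str                        # e.g. "speech", "aac_symbols", "sign_video", "text"
    communication_method: str            # e.g. "dysarthric speech", "switch scanning"
    assistive_tech: list[str] = field(default_factory=list)   # e.g. ["screen reader"]
    interaction_successful: bool = True  # did the system meet the user's intent?
    notes: str = ""                      # free-text context from the annotator

record = AccessibilityAnnotation(
    sample_id="cv-000123",
    modality="speech",
    communication_method="dysarthric speech",
    assistive_tech=["eye-gaze keyboard"],
    interaction_successful=False,
    notes="ASR transcript dropped the final clause.",
)
print(json.dumps(asdict(record), indent=2))
```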
Access Board oversight should ensure that regulatory sandboxes encompass bias detection systems specifically calibrated for disability-related datasets, preventing the perpetuation of historical exclusion patterns. The regulatory framework should integrate multiple compliance dimensions, including Section 508, ADA requirements, and emerging AI safety standards, while incorporating medical device regulations for health-related applications. Data protection frameworks must address sensitive health information through differential privacy and federated learning approaches, ensuring user safety and privacy throughout the development and deployment lifecycle.
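As one minimal illustration of the differential-privacy piece, the sketch below applies the standard Gaussian mechanism to release an aggregate usage statistic; the parameters are placeholders, and a production system would rely on an audited library such as Opacus or Google’s differential-privacy tooling rather than hand-rolled noise.

```python
# Sketch: (epsilon, delta)-differentially private release of a mean via the Gaussian mechanism.
import numpy as np

def dp_mean(values: np.ndarray, lower: float, upper: float,
            epsilon: float = 1.0, delta: float = 1e-5, rng=None) -> float:
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, lower, upper)         # bound each individual's contribution
    sensitivity = (upper - lower) / len(clipped)    # sensitivity of the mean to one record
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon  # Gaussian mechanism noise scale
    return float(clipped.mean() + rng.normal(0.0, sigma))

# Example: privately report an average task-completion time from health-adjacent usage logs.
times = np.array([41.0, 55.5, 38.2, 60.0])
print(dp_mean(times, lower=0.0, upper=120.0))
```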
Input:
Establish accessibility-focused testbeds modeled around disability-specific use cases (e.g., cognitive load assessment, prosthetic device integration, emergency communication protocols) with standardized APIs for assistive technology developers
Support comprehensive dataset curation capturing real-world accessibility interactions with privacy-preserving annotation standards for augmentative communication patterns and cognitive assistance scenarios
Launch specialized regulatory testbeds for accessibility-focused generative AI applications, providing controlled environments for testing Section 508 compliance, bias mitigation, and emergency communication effectiveness
These efforts reflect a broader commitment to ensuring that generative AI serves every citizen. By advancing technical standards, regulatory alignment, and dataset curation with the disability community at the center, we can help shape a future where assistive technologies are not an afterthought, but a driver of digital equity. Continued multi-stakeholder engagement will be essential to realizing this vision across public, private, and civic sectors.
• • •
References
¹ U.S. Access Board. "Artificial Intelligence." Official U.S. Access Board AI page. 2024.
² The White House. "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." Executive Order 14110. October 30, 2023.
³ National Institute of Standards and Technology. "Generative AI Profile for the NIST AI Risk Management Framework." NIST AI 600-1. April 2024.
⁴ U.S. Congress. "Twenty-First Century Communications and Video Accessibility Act of 2010." Public Law 111-260. 2010.
⁵ U.S. Access Board. "Section 508 Standards." 36 CFR Part 1194.
⁶ U.S. Department of Justice. "Americans with Disabilities Act of 1990." Public Law 101-336. 1990.
⁷ U.S. Access Board. "U.S. Access Board Holds Public Hearings as Part of Developing Artificial Intelligence (AI) Equity, Access & Inclusion for All Series." August 28, 2024.
⁸ U.S. Access Board. "U.S. Access Board Presents Preliminary Findings on Artificial Intelligence (AI) for Disability Community and AI Practitioners." January 14, 2025.