What’s New in Google Accessibility | Episode 9

As someone deeply passionate about making technology accessible for everyone, I’m excited to share the latest updates from Google’s ongoing commitment to accessibility. These innovations are designed to empower people with disabilities, making digital experiences more inclusive and easier to navigate. In this article, I’ll walk you through several groundbreaking advancements across Google’s ecosystem, from sign language understanding to AI-powered tools for people who are blind or have low vision, enhanced screen reader capabilities, and new features that support people with limited dexterity. These updates span Google’s products, including Gemini, Android, Pixel, Chrome, Chromebook, and Workspace.
Let’s dive into the remarkable progress Google has made, which not only highlights their dedication to accessibility but also opens new doors for community-driven innovations and user empowerment.
🤟 Introducing SignGemma: Revolutionizing Sign Language Understanding
One of the most inspiring announcements from Google I/O this year is the upcoming release of SignGemma. This is an open model designed to understand and translate sign language into spoken language text. While the primary focus is on American Sign Language (ASL) and English, SignGemma is built to be adaptable to other sign languages worldwide.
What sets SignGemma apart is its openness. By making the core technology publicly available, Google invites experts and communities from around the globe to fine-tune the model for their own languages and cultural contexts. This approach fosters a collaborative environment where the deaf and hard of hearing community can actively participate in shaping accessible technology that meets their unique needs.
SignGemma belongs to Gemma, Google’s family of open AI models built from the same research that powers Gemini, which means it can handle complex sign language interpretation with steadily improving accuracy. Importantly, while SignGemma is a powerful tool, it’s designed not to replace human interpreters but to bridge gaps when live interpretation isn’t available.
Imagine a world where people who rely on sign language can communicate more seamlessly in everyday situations—whether at work, in education, or social settings—thanks to this technology. SignGemma is a significant step toward that vision, empowering users with real-time translation and fostering greater inclusion.
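Because SignGemma will ship as an open model, it should be loadable with standard open-model tooling once checkpoints are published. The sketch below is purely illustrative: the model ID and the loading flow are my assumptions, since release details weren’t public at the time of writing.

```python
# Hypothetical sketch: loading an open SignGemma checkpoint with Hugging Face
# transformers. The model ID below is an assumption (no checkpoint had been
# published at the time of writing), and the real processor/model API may
# differ once Google releases it.
from transformers import AutoModel, AutoProcessor

MODEL_ID = "google/signgemma"  # hypothetical identifier

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# A community fine-tune for another sign language would start from this base
# checkpoint and continue training on local sign-video/transcript pairs.
```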
📱 Enhancing Android Accessibility with Gemini and TalkBack
Android users will soon enjoy deeper integration between Gemini’s AI capabilities and TalkBack, Android’s built-in screen reader. The integration now goes beyond describing individual images: it can describe the entire screen and let users hold a back-and-forth conversation with their device about what’s on it.
For example, if you’re shopping online and looking at a variety of outfits, you can ask TalkBack to describe all the options visible on your screen, then follow up with questions like, “Which of these outfits are on sale?” or ask it to point out a particular style. This conversational AI interaction makes browsing more intuitive and accessible for people who are blind or have low vision.
This feature marks a meaningful advancement in how AI can provide context and understanding beyond static descriptions, enabling users to interact with content in a dynamic and personalized way.
🎙️ Expressive Captions on Android: Capturing Nuance and Emotion
Last year, Google introduced Expressive Captions as a feature of Live Caption on Android, and this year it’s gotten even better. Expressive Captions automatically capture the intensity and emotional nuance of speech in most audio and video content encountered on your phone.
What’s new? A duration feature that signals when someone is dragging out a word for emphasis. Think of a sports announcer stretching out an “amaaazing shot!”, or the difference a drawn-out “nooooo” makes compared with a firm, flat “no.”
Moreover, the updated Expressive Captions recognize a broader range of sounds, including whispering, yawning, throat clearing, and more. This feature works with live or streaming content that doesn’t have preloaded captions, making it incredibly useful for real-time accessibility.
With these enhancements, people who are deaf or hard of hearing can enjoy a richer, more immersive experience, picking up on subtle vocal cues that convey emotion and intention—elements that are often missing in traditional captions.
🔍 Pixel’s Magnifier App Gets Smarter with Live Search
For Pixel users who are blind or have low vision, the Magnifier app just became an even more powerful tool. The app now features live search, which allows users to type in what they’re looking for instead of having to take a picture first.
Imagine being at an airport and wanting to find your gate number, or scanning a menu at a restaurant to locate a specific dish. You simply type your search term, and as you move your phone around, the Magnifier highlights matches on the screen. The phone even vibrates to notify you when it detects what you’re searching for. This real-time feedback speeds up the process of finding important information in your environment.
This feature is a game-changer for independent navigation and accessibility in everyday situations, reducing reliance on others and increasing confidence for users.
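Under the hood, live search of this kind amounts to running text recognition on each camera frame and matching the query against the recognized words. Here is a minimal sketch of that idea using the open-source Tesseract engine via pytesseract; it illustrates the technique only and is not Pixel’s actual implementation.

```python
# Illustrative sketch of "live search" over a camera frame: OCR the frame,
# then report where the query text appears. This uses open-source Tesseract
# via pytesseract; it is NOT the Magnifier app's actual pipeline.
from PIL import Image
import pytesseract

def find_text_in_frame(frame: Image.Image, query: str) -> list[tuple[int, int, int, int]]:
    """Return bounding boxes (x, y, w, h) of recognized words matching the query."""
    data = pytesseract.image_to_data(frame, output_type=pytesseract.Output.DICT)
    matches = []
    for i, word in enumerate(data["text"]):
        if word and query.lower() in word.lower():
            box = (data["left"][i], data["top"][i], data["width"][i], data["height"][i])
            matches.append(box)
    return matches

frame = Image.open("menu_photo.png")          # stand-in for a live camera frame
boxes = find_text_in_frame(frame, "lasagna")  # highlight these, then vibrate
print(f"Found {len(boxes)} match(es)")
```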
👁️ Project Astra: AI-Powered Visual Interpreter for the Blind and Low Vision Community
Originally unveiled last year, Project Astra is a research prototype exploring the potential of a universal AI assistant. This year, Google announced the launch of the Project Astra visual interpreter, developed in collaboration with the professional visual interpreting service Aira.
This exciting new tool is designed specifically for people who are blind or have low vision. It lets users point their phone camera at their surroundings and ask questions like, “What do you see in this room?” or “Tell me when you see a backpack.” The AI responds in real time, describing the environment, locating items, or reading signs.
What makes Project Astra especially reliable is that every session is supervised by a live Aira agent to ensure safety and quality. This blend of AI and human oversight offers a powerful, trustworthy experience for users.
While still experimental, Project Astra represents a significant leap forward in assistive technology, bringing us closer to an AI that truly understands and supports the needs of the blind and low vision community. For those interested, there’s an opportunity to sign up as trusted testers through the Project Astra microsite.
📄 Chrome Accessibility: Optical Character Recognition for Scanned PDFs
Interacting with scanned PDFs has long been a challenge for screen reader users because these documents were often treated as images, making it impossible to highlight, copy, or search text. That’s changing.
Chrome now includes built-in optical character recognition (OCR) that automatically detects scanned PDFs and converts them into readable and interactive text. This means you can use your screen reader to read these documents naturally, just like any other web page, and perform actions like highlighting or searching for specific words.
This update significantly improves accessibility for users who rely on screen readers and frequently encounter scanned documents online, such as academic papers, official forms, or digitized books.
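Chrome’s OCR runs inside the browser, but the underlying idea is easy to demonstrate with open-source tools: rasterize each PDF page, recognize the text, and keep the result so it can be searched or read aloud. The sketch below uses pdf2image and pytesseract as stand-ins; it is not Chrome’s pipeline.

```python
# Illustrative sketch of OCR on a scanned PDF: rasterize each page, run text
# recognition, and join the text so it becomes searchable and readable by a
# screen reader. Uses open-source tools, not Chrome's built-in engine.
import pytesseract
from pdf2image import convert_from_path  # requires the poppler utilities

def ocr_scanned_pdf(path: str) -> str:
    pages = convert_from_path(path, dpi=300)  # one PIL image per page
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

text = ocr_scanned_pdf("scanned_form.pdf")
print("contains 'deadline':", "deadline" in text.lower())
```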
🖱️ Chromebook Accessibility: New Features for Ease of Use
Chromebook users with accessibility needs will appreciate several new features designed to enhance usability:
- Touchpad Off: This feature allows users to disable the touchpad, which is particularly useful for those using screen readers who might accidentally trigger unwanted clicks.
- Flash Notifications: When a new notification arrives, the screen flashes to alert users. This is especially helpful for people who are hard of hearing or those using screen magnification who might miss traditional notification alerts.
- Bounce Keys: This feature ignores repeated keystrokes within a short time frame, preventing accidental multiple inputs caused by tremors or unsteady hands (the sketch after this list shows the idea in code).
- Slow Keys: Keys must be held down for a set duration before they register, filtering out keys that are brushed by accident.
- Mouse Keys: For users who experience difficulty or pain using a traditional mouse, this feature allows control of the mouse pointer through the keyboard’s numeric keypad.
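The filtering logic behind a feature like Bounce Keys fits in a few lines. Here is a generic illustration of the idea, not ChromeOS source code: drop any repeat of the same key that arrives within a configurable threshold.

```python
# Generic illustration of the Bounce Keys idea (not ChromeOS source code):
# ignore a repeat of the same key arriving within `threshold_s` seconds.
class BounceKeysFilter:
    def __init__(self, threshold_s: float = 0.5):
        self.threshold_s = threshold_s
        self.last_key = None
        self.last_time = float("-inf")

    def accept(self, key: str, timestamp_s: float) -> bool:
        """Return True if this keystroke should register."""
        is_bounce = (key == self.last_key
                     and timestamp_s - self.last_time < self.threshold_s)
        self.last_key, self.last_time = key, timestamp_s
        return not is_bounce

f = BounceKeysFilter(threshold_s=0.5)
events = [("a", 0.00), ("a", 0.12), ("a", 0.80), ("b", 0.85)]
print([k for k, t in events if f.accept(k, t)])  # ['a', 'a', 'b']: the 0.12 s repeat is dropped
```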
These thoughtful additions show Google’s commitment to making computing accessible to people with a wide range of physical abilities.
📅 Workspace Update: Accessible Embedded Google Calendars
Google Workspace users can now embed interactive Google Calendars into websites with a host of accessibility improvements. These embedded calendars are fully screen reader compatible, making them accessible to users who rely on assistive technology.
Additional enhancements include:
- Improved spacing to make text easier to read
- A responsive layout that adapts smoothly to different screen sizes, ensuring usability on both desktop and mobile devices
- Keyboard shortcuts to enable quick and easy navigation through the calendar without relying on a mouse
These updates ensure that everyone, regardless of their abilities, can access scheduling information and manage events seamlessly, whether on personal websites, organizational portals, or community pages.
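For site owners, the embed itself is an iframe pointing at a calendar-specific URL. As a small illustration, here is a Python helper that assembles such an embed snippet; the calendar ID is a placeholder, and the query parameters shown (src, ctz, mode) reflect commonly used embed options, best verified against the embed code Google Calendar generates for your own calendar.

```python
# Small helper that builds a Google Calendar embed snippet for a web page.
# The calendar ID and timezone are placeholders; verify parameter names
# against the embed code Google Calendar generates for your calendar.
from urllib.parse import urlencode

def calendar_embed_html(calendar_id: str, timezone: str = "UTC", mode: str = "MONTH") -> str:
    params = urlencode({"src": calendar_id, "ctz": timezone, "mode": mode})
    url = f"https://calendar.google.com/calendar/embed?{params}"
    # The title attribute helps screen reader users identify the frame.
    return (f'<iframe src="{url}" title="Schedule calendar" '
            'style="border: 0" width="800" height="600"></iframe>')

print(calendar_embed_html("your.calendar.id@group.calendar.google.com", "America/New_York"))
```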
🌟 Conclusion: Google’s Ongoing Commitment to Accessibility
Google’s latest accessibility updates highlight a clear and inspiring vision: to create digital experiences that are inclusive, intuitive, and empowering for all users. From the SignGemma model, built to adapt to sign languages around the world, to AI-powered tools like the Project Astra visual interpreter and live search in the Pixel Magnifier app, these innovations are transforming how people with disabilities interact with technology.
Enhancements to TalkBack on Android, Expressive Captions, and new Chrome and Chromebook accessibility features further demonstrate Google’s holistic approach to accessibility, addressing the needs of users with visual, hearing, and dexterity impairments.
And with improvements in Google Workspace, accessibility extends into productivity and collaboration, ensuring that no one is left behind in professional or personal environments.
I encourage everyone to explore these new features and consider how they might enhance your own or your community’s digital experiences. Accessibility is not just a feature—it’s a fundamental part of building technology that works for everyone.
For more detailed information and to stay updated on future releases, you can sign up for Google’s accessibility newsletter or visit their dedicated accessibility resources online.
Thank you for joining me on this journey through the latest in Google Accessibility. Together, we can foster a more inclusive digital world.