What’s New in Google Accessibility: Exciting Advances in Sign Language, Android, Pixel, Chrome, and Workspace


Hello, friends! I’m Eve Anderson, and I’m thrilled to share the latest updates from Google Accessibility that I recently discussed in our newest episode. Google continues to innovate and push the boundaries of technology to make digital experiences more inclusive for everyone. In this article, I’ll take you through some fantastic new features and products launching across various Google platforms, including Gemini, Android, Pixel, Chrome, Chromebook, and Workspace.

Whether you’re someone with a disability, a developer passionate about accessible tech, or simply curious about how AI and assistive tools are evolving, you’ll find plenty of exciting news here. Let’s dive in!

🤟 Introducing SignGemma: A Breakthrough in Sign Language Understanding

One of the most groundbreaking announcements this year at Google I/O is the upcoming release of SignGemma, an open model in Google’s Gemma family (built on the same research behind Gemini) designed specifically for sign language understanding. What makes SignGemma truly special is that it focuses primarily on American Sign Language (ASL) and English, yet it is built with the flexibility to support other sign languages as well.

Because SignGemma is open rather than a closed-off AI model, experts and communities around the world can fine-tune it for their own sign languages. This community-driven approach opens up endless possibilities for accessible technology tailored by and for the Deaf and hard of hearing community.
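SignGemma hasn’t shipped yet, so there is no public interface I can show you. But to give a feel for what community fine-tuning typically looks like, here is a deliberately generic Python/PyTorch sketch. Everything in it is hypothetical: the model, the clip features, and the label set are stand-ins, and none of the names come from Google. It only illustrates the pattern of keeping a pretrained encoder and training a new head on community-labeled sign data.

  # Hypothetical sketch of a community fine-tuning loop. Nothing here is a
  # real SignGemma API; the model, data shapes, and labels are stand-ins.
  import torch
  from torch import nn, optim

  class TinySignModel(nn.Module):
      """Stand-in for a pretrained sign-understanding model."""
      def __init__(self, feature_dim: int = 128, num_signs: int = 10):
          super().__init__()
          self.encoder = nn.Linear(feature_dim, 64)  # pretend this part is pretrained
          self.head = nn.Linear(64, num_signs)       # new head for a local sign language

      def forward(self, clip_features: torch.Tensor) -> torch.Tensor:
          return self.head(torch.relu(self.encoder(clip_features)))

  # Placeholder "dataset": 32 clips, each summarized as a 128-dim feature vector,
  # labeled with one of 10 signs from a community-collected vocabulary.
  features = torch.randn(32, 128)
  labels = torch.randint(0, 10, (32,))

  model = TinySignModel()
  optimizer = optim.AdamW(model.parameters(), lr=1e-4)
  loss_fn = nn.CrossEntropyLoss()

  for epoch in range(3):
      optimizer.zero_grad()
      loss = loss_fn(model(features), labels)
      loss.backward()
      optimizer.step()
      print(f"epoch {epoch}: loss {loss.item():.3f}")

A real model would take video rather than precomputed features, of course, but the shape of the workflow, a shared pretrained backbone plus locally collected labels, is what makes the open-model approach so promising for smaller sign language communities.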

Imagine a tool that can translate sign language into spoken language text in real time, breaking communication barriers in everyday situations. While SignGemma is not intended to replace human interpreters, it can be a valuable resource for people who don’t have immediate access to professional sign language interpretation.

As someone deeply invested in accessibility, I find this development incredibly promising. It empowers communities to take ownership of their communication tools and fosters innovation that respects cultural and linguistic diversity within sign languages globally.

📱 Enhancing Android Accessibility with Gemini and TalkBack

Moving on to Android, we’ve integrated Gemini’s AI capabilities more deeply into the TalkBack screen reader, a vital tool for blind and low vision users. Now, TalkBack doesn’t just describe individual images—it can provide a comprehensive description of the entire screen’s content. That’s a huge leap forward in accessibility.

What does this mean in real life? Picture yourself shopping online for clothes. Instead of painstakingly navigating through each product, you can ask TalkBack to describe all the outfits displayed on the screen. You can even follow up with specific questions like, “Which outfits are on sale?” or “Show me items in a certain style.” This conversational interaction makes digital navigation more natural, efficient, and enjoyable.

Another Android feature that’s been enhanced is Expressive Captions, part of Live Caption. Introduced last year, Expressive Captions automatically captures the intensity and nuance of speech in audio and video content on your phone. Now, with a new duration feature, it can recognize when someone drags out a sound for emphasis, so a sports announcer’s dramatic “Noooo!” reads differently from a short, flat “no.”

This update also broadens the range of sounds it detects, including whispers, yawns, and throat-clearing. What’s more, it works seamlessly with live or streaming content that doesn’t have preloaded captions, making real-time accessibility more robust than ever.

🔍 Pixel’s Magnifier App Gets Live Search for Easier Navigation

For Pixel users, the Magnifier app just became more powerful. The new Live Search feature helps blind and low vision users find objects or information in their environment more quickly and intuitively.

Instead of snapping a picture and then searching, you can type what you’re looking for—like a gate number at the airport or a specific dish on a menu—and the app will highlight matching items in real time as you move your phone around. Additionally, your phone will vibrate when it detects a match, giving you immediate feedback without needing to constantly look at the screen.

This feature significantly reduces the effort and time required to locate essential information in unfamiliar or complex environments, empowering users to navigate with greater confidence and independence.

🦮 Project Astra: Visual Interpreter for the Blind and Low Vision Community

Last year, I introduced you to Project Astra, a research prototype exploring the possibilities of a universal AI assistant. The response from the blind and low vision community was overwhelmingly positive, and many expressed interest in becoming trusted testers. This year, I’m excited to share the next step: the Project Astra Visual Interpreter.

Developed in collaboration with Aira, a professional visual interpreting firm, this prototype allows users to scan their surroundings using their phone camera and ask questions like “What do you see in this room?” or “Tell me when you see a backpack.” Project Astra responds in real time, describing the environment, locating items, or reading signs to the user.

Every session is supervised by a live Aira agent to ensure safety and quality, which is crucial given the experimental nature of this technology. This collaboration brings us closer to an AI assistant that genuinely understands and supports the needs of the blind and low vision community.

If you’re interested in participating as a trusted tester, you can sign up through the Project Astra microsite. Your feedback will help shape the future of AI-powered accessibility tools designed to improve everyday life.

📄 Making PDFs Accessible with Optical Character Recognition in Chrome

Now, let’s talk about Chrome’s accessibility improvements. Previously, scanned PDFs opened in Chrome were treated as images, which meant screen readers couldn’t interact with the text inside them. This posed a significant barrier for users relying on assistive technology.

With the introduction of built-in Optical Character Recognition (OCR), Chrome can now automatically recognize the text in scanned PDFs. This means you can highlight, copy, and search for text just like you would on any other web page. Screen readers can also read the text aloud without mistakenly identifying the content as images.
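Chrome handles all of this automatically, so there is nothing you need to install or call. But if you’re curious about the underlying idea, here is a minimal sketch using the open-source pdf2image and pytesseract packages (it assumes the Poppler and Tesseract binaries are installed locally): rasterize each page of the scanned PDF, then let the OCR engine recognize whatever text it can.

  # Conceptual sketch only; Chrome's built-in OCR is automatic and separate
  # from these tools. Requires the pdf2image and pytesseract packages plus
  # the Poppler and Tesseract binaries installed on your system.
  from pdf2image import convert_from_path
  import pytesseract

  def extract_text_from_scanned_pdf(path: str) -> str:
      """Rasterize each page of a scanned PDF and run OCR over it."""
      pages = convert_from_path(path, dpi=300)  # one PIL image per page
      return "\n\n".join(pytesseract.image_to_string(page) for page in pages)

  if __name__ == "__main__":
      text = extract_text_from_scanned_pdf("scanned_report.pdf")  # placeholder file name
      print(text[:500])  # show the first few hundred recognized characters

Once recognized text exists, everything else (highlighting, copying, searching, screen reader output) is ordinary text handling, which is exactly why this Chrome update matters.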

This update dramatically improves the usability of scanned documents for people who use screen readers, opening up access to a vast amount of previously inaccessible content.

🖱️ Chromebook Accessibility: Touchpad Off, Flash Notifications, and More

Chromebook users have several new accessibility features designed to improve comfort and usability. One much-requested feature is the ability to turn off the touchpad. This is especially helpful for users who rely on screen readers and want to avoid accidental clicks caused by unintended touchpad interactions.

Another new feature is Flash Notifications. Whenever a new notification arrives, the screen flashes to alert the user. This is a fantastic addition for people who are hard of hearing or who use screen magnification and might miss audio cues or subtle visual notifications when zoomed in.

Additionally, several features cater to users with limited dexterity or tremors:

  • Bounce Keys: ignores repeated presses of the same key within a short time interval, preventing a single intended press from registering multiple times.
  • Slow Keys: requires a key to be held down for a set amount of time before the keystroke registers, reducing unintended characters.
  • Mouse Keys: lets users control the mouse pointer with the keyboard’s numeric keypad, which is useful for anyone who finds a traditional mouse painful or difficult to use.

These thoughtful features make Chromebook a more accessible and comfortable device for a wider range of users.
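These settings are built into ChromeOS, so there is nothing to program, but the logic behind a feature like Bounce Keys is simple enough to sketch. The hypothetical Python filter below drops repeated presses of the same key that arrive within a configurable interval, which is essentially what the feature does for someone whose tremor causes one intended press to register several times.

  # Hypothetical sketch of Bounce Keys-style filtering: ignore repeats of the
  # same key that arrive too soon after an accepted press. ChromeOS does this
  # at the operating-system level; this only illustrates the idea.
  class BounceKeyFilter:
      def __init__(self, interval_s: float = 0.5):
          self.interval_s = interval_s
          self.last_accepted = {}  # key -> timestamp of the last accepted press

      def accept(self, key: str, timestamp_s: float) -> bool:
          """Return True if this keypress should be registered."""
          last = self.last_accepted.get(key)
          if last is not None and timestamp_s - last < self.interval_s:
              return False  # too soon after the previous press of the same key
          self.last_accepted[key] = timestamp_s
          return True

  # A tremor produces extra "a" presses in quick succession; only the first
  # (and the one arriving a full second later) registers.
  f = BounceKeyFilter(interval_s=0.5)
  print([f.accept("a", t) for t in (0.00, 0.10, 0.25, 1.00)])  # [True, False, False, True]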

📅 Workspace Update: Embedding Interactive, Accessible Google Calendars

Finally, a quick but significant update from Google Workspace. Users can now embed interactive Google calendars directly into websites. These embedded calendars are designed with accessibility in mind:

  • They are fully compatible with screen readers, ensuring users with visual impairments can access calendar information easily.
  • Improved spacing enhances readability, making the text easier on the eyes.
  • The layout is responsive, adapting smoothly to different screen sizes, from desktops to mobile devices.
  • Keyboard shortcuts allow quick and efficient navigation through the calendar, supporting users who rely on keyboard interaction.

Embedding accessible calendars on websites can benefit organizations, educators, event planners, and anyone looking to share scheduling information inclusively.
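Google Calendar’s own settings can generate the exact embed snippet for a given calendar, so you rarely need to write it by hand. Purely for illustration, here is a small, hypothetical Python helper that assembles the long-standing iframe embed URL from a placeholder calendar ID and time zone, and adds a title attribute so screen reader users hear a meaningful name for the frame.

  # Hypothetical helper for building a Google Calendar embed snippet.
  # The calendar ID and time zone are placeholders; Google Calendar's own
  # settings generate the authoritative snippet for your calendar.
  from urllib.parse import urlencode

  def calendar_embed_html(calendar_id: str, timezone: str, title: str) -> str:
      query = urlencode({"src": calendar_id, "ctz": timezone})
      src = f"https://calendar.google.com/calendar/embed?{query}"
      # A descriptive title attribute gives screen reader users a meaningful
      # name for the embedded frame.
      return (
          f'<iframe title="{title}" src="{src}" '
          'style="border: 0; width: 100%; height: 600px"></iframe>'
      )

  print(calendar_embed_html(
      calendar_id="your-calendar-id@group.calendar.google.com",  # placeholder
      timezone="America/New_York",
      title="Community events calendar",
  ))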

🌟 Looking Ahead: The Future of Accessibility at Google

These updates highlight Google’s ongoing commitment to building technology that empowers everyone, regardless of ability. From AI-driven sign language translation to enhanced screen readers and accessible digital workspaces, the future looks bright.

Accessibility is not a feature but a fundamental part of how technology should work. By opening up models like SignGemma to the community, collaborating with users in research projects like Project Astra, and continuously improving existing tools, we’re moving toward a more inclusive digital world.

If you want to stay updated on these developments and more, I encourage you to sign up for Google’s accessibility newsletter at https://g.co/a11y/news. You can also explore the full playlist of accessibility updates at https://g.co/a11y/playlist.

Thank you for joining me in this deep dive into what’s new in Google Accessibility. Together, we can build a future where technology truly serves everyone.