AI Can Now Taste and Feel, and It’s Freaking People Out
Artificial intelligence has entered a new era of capability that’s reshaping our understanding of what machines can do. From transforming your personal photos into cinematic videos to simulating atomic interactions at unprecedented speed, AI is no longer confined to chatbots and static image recognition. The pace of innovation is accelerating so rapidly that it feels like AI has shifted into overdrive. In this article, inspired by the latest insights from AI Revolution, I’ll take you through some of the most groundbreaking advancements happening right now: Google’s AI-powered Google Photos overhaul, Meta’s massive atomic-level simulations, and even a graphene-based AI tongue that can identify flavors with near-human accuracy. Plus, we’ll peek behind the curtain at Mark Zuckerberg’s colossal AI supercomputers that might just power the future.
Let’s dive into these fascinating developments and explore what they mean for the future of AI and our everyday lives.
📸 Google Photos Gets a Major AI Upgrade: Snapshots Turn Into Full Videos
Google Photos has long been a favorite tool for organizing and reliving memories, but if you’ve ever tried creating collages or animations within the app, you might have noticed it can feel a little clunky—like hunting for hidden levels in an old video game. Well, Google is about to change that with a significant overhaul that puts AI front and center in how we interact with our photos.
At the heart of this upgrade is a new full-screen “create” panel that consolidates all those scattered creative tools into one easy-to-access place. The feature was spotted in version 7.36 of the Android app and isn’t widely available yet, but the rollout seems imminent, likely within weeks once Google ensures the new AI features won’t overwhelm users’ storage.
When this panel goes live, users will enjoy one-tap options for collages, quick animations, and cinematic photos that add fake depth and motion to still images. More excitingly, Google is testing two AI-first features: photo to video and remix. These aren’t your average filters or effects. Instead, they use Google’s advanced image models to generate full video sequences from your photos or creatively reshuffle your media, incorporating style cues that give your memories a fresh, dynamic look.
Interestingly, the album builder tool hasn’t yet been integrated into this new panel, so for now, you’d still need to build albums the old-fashioned way. But this is just the beginning of what Google envisions as a more immersive, AI-driven experience where your photos don’t just sit still—they come to life.
This change reflects a broader trend where AI is enhancing everyday consumer tech by making it more intuitive and expressive. Imagine turning your holiday photos into mini-movies with just a tap, or remixing your gallery to tell new stories without lifting a finger. The potential for creativity and ease of use here is immense.
⚙️ DeepMind’s Gen AI Processors: A Toolkit for Real-Time AI Systems
While Google Photos focuses on consumer-facing AI, DeepMind is pushing the envelope on the developer side with a powerful new toolkit called Gen AI Processors. DeepMind recently open-sourced this under an Apache 2.0 license, meaning anyone can download, build on, and innovate with it.
The core idea behind Gen AI Processors is to streamline how data flows through AI pipelines in real time. Whether it’s text, sound, images, or even bits of JSON, each piece of data is wrapped into a tiny “processor part” that moves through a single, orderly stream powered by Python’s asyncio. This design allows every step in the pipeline to begin processing as soon as the first part arrives, significantly reducing the time it takes to get the first output (what engineers call the time to first token).
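To make the “processor part” idea concrete, here’s a minimal sketch in plain Python asyncio. It is not the genai-processors API (the `Part` class and the processor names are invented for illustration), but it shows the pattern the toolkit is built around: chunks flow through a chain of async stages, and every stage starts working the moment the first chunk arrives, which is exactly what keeps time to first token low.

```python
import asyncio
from dataclasses import dataclass


# Hypothetical stand-in for a "processor part": one typed chunk of data
# moving through the pipeline. Not the genai-processors API.
@dataclass
class Part:
    mimetype: str
    data: str


async def source():
    """Emit parts one at a time, as if tokens were streaming in."""
    for word in ["hello", "streaming", "world"]:
        await asyncio.sleep(0.1)  # simulate network latency
        yield Part("text/plain", word)


async def uppercase(parts):
    """A processor stage: transform each part as soon as it arrives."""
    async for part in parts:
        yield Part(part.mimetype, part.data.upper())


async def printer(parts):
    """A sink: act on each part immediately instead of waiting for the end."""
    async for part in parts:
        print("got:", part.data)


async def main():
    # Chaining async generators gives a single orderly stream: each stage
    # starts as soon as the first Part reaches it, which is what keeps
    # time-to-first-output low.
    await printer(uppercase(source()))


asyncio.run(main())
```

The real toolkit layers typed parts for text, audio, images, and JSON, two-way streams, and built-in Gemini connectors on top of this same principle: start working on chunk one instead of buffering the whole input.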
This approach is particularly exciting because Gen AI Processors integrates seamlessly with Google’s Gemini AI models. It supports both the standard request-reply interface and Gemini’s live streaming feed, which enables the model to start generating answers even while a user is still typing. This real-time responsiveness could revolutionize how interactive AI systems function, making conversations and data processing feel instant and natural.
To help developers get started, DeepMind includes three demonstration notebooks:
- Transforming raw match data into live sports commentary
- Gathering web information to produce quick summaries
- Listening to microphone input and responding out loud
These demos showcase the versatility of the toolkit and how it can be applied across different domains. It’s important to note that while the Google Gen AI client and Vertex AI handle the heavy lifting of running the models, Gen AI Processors focuses on orchestrating the flow of information, optimizing how data moves to and from those models.
Compared to other orchestration libraries like LangChain or NVIDIA’s NeMo, which were primarily designed for single-direction text chains or large neural graphs, Gen AI Processors was built from day one to support two-way real-time streams. This specialized focus opens new possibilities for interactive AI applications that need to process and respond to data dynamically and immediately.
The community around Gen AI Processors is already active, contributing extras such as content filtering and PDF slicing capabilities. This rapid expansion of features means the toolkit will only become more powerful and adaptable over time.
🧪 Meta’s UMA: Revolutionizing Chemistry with Universal Atomic Models
Switching gears to scientific research, Meta’s FAIR lab is making waves with a groundbreaking AI model family called UMA (Universal Models for Atoms). This initiative is transforming how researchers simulate chemistry and materials science by overcoming the limitations of traditional methods.
Historically, density functional theory (DFT) has been the gold standard for simulating atomic interactions. DFT is highly precise but computationally expensive: in conventional implementations its cost grows roughly with the cube of the system size, so runtimes balloon as the number of atoms grows. This bottleneck makes large-scale studies slow, limiting researchers’ ability to explore complex materials quickly.
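A quick back-of-the-envelope calculation shows why that scaling hurts (treating the exponent as exactly cubic purely for illustration; the real exponent depends on the method and implementation):

```python
# If conventional DFT cost grows roughly with the cube of system size,
# scaling a 1-hour, 100-atom calculation up to 10,000 atoms multiplies
# the work by (10_000 / 100) ** 3 = 1,000,000x.
base_atoms, base_hours, target_atoms = 100, 1.0, 10_000
scaled_hours = base_hours * (target_atoms / base_atoms) ** 3
print(f"{scaled_hours:,.0f} hours")  # 1,000,000 hours, i.e. over a century
```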
UMA offers a more modern, scalable alternative. By training a massive neural network on a dataset of approximately 500 million atomic structures, UMA learns to predict atomic behavior almost as accurately as DFT but in a fraction of the time. The model architecture builds on a graph network design known as eSEN, enhanced with extra inputs that account for total electric charge, magnetic spin, and other parameters traditionally dialed into DFT simulations.
The training process unfolds in two stages:
- The model first learns to predict atomic forces quickly.
- It is then fine-tuned so that its predictions conserve energy, a fundamental physical law (see the sketch just after this list).
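To see what that second stage is really about: in a physically consistent model, forces are the negative gradient of the energy, so deriving forces by differentiating a predicted energy guarantees they conserve energy along any trajectory. Here is a minimal PyTorch sketch of that idea; the tiny network is a toy stand-in, not UMA’s graph architecture.

```python
import torch

# Toy energy model: maps 3D atomic positions to a scalar energy contribution.
# A stand-in for illustration only, not UMA's graph-network architecture.
energy_net = torch.nn.Sequential(
    torch.nn.Linear(3, 32),
    torch.nn.SiLU(),
    torch.nn.Linear(32, 1),
)


def energy_and_forces(positions):
    """Predict total energy, then obtain forces as its negative gradient.

    Deriving forces from the energy (rather than predicting them with a
    separate output head) is what makes them conservative: the work done
    around any closed loop of positions is zero by construction.
    """
    positions = positions.requires_grad_(True)
    energy = energy_net(positions).sum()  # scalar total energy
    (grad,) = torch.autograd.grad(energy, positions, create_graph=True)
    forces = -grad  # F = -dE/dx
    return energy, forces


# Example: 1,000 atoms with random 3D coordinates.
pos = torch.randn(1000, 3)
E, F = energy_and_forces(pos)
print(E.item(), F.shape)  # scalar energy, forces of shape (1000, 3)
```

Predicting forces with a separate output head is faster, which suits the first training stage, while deriving them from an energy as above is what makes the fine-tuned model’s predictions conserve energy.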
Even the smallest public version, UMA-S, can simulate about 1,000 atoms at roughly 16 simulation steps per second on a single 80GB GPU. Impressively, UMA continues to work well on test cases containing up to 100,000 atoms, showcasing its scalability.
When plotting computational effort versus error, the team found a neat linear pattern: increasing model capacity steadily improves accuracy. UMA also employs a “mixture of experts” approach, using multiple specialized subnetworks within the larger model. Increasing the number of experts from one to eight sharply reduced errors, while adding more than 32 experts yielded only modest gains. Most published versions of UMA sit near this sweet spot for balancing performance and efficiency.
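For readers unfamiliar with the mixture-of-experts idea, here’s a compact, hypothetical sketch of the routing mechanism in plain NumPy: a small gating function picks which specialized subnetworks handle each input, so total capacity grows without every expert running on every sample. This illustrates the general technique, not Meta’s specific implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_in, d_out = 8, 16, 4

# Each "expert" is a small independent weight matrix, standing in for a
# specialized subnetwork inside the larger model.
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d_in, n_experts))  # routing weights


def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def moe_forward(x, top_k=1):
    """Route each input row to its top-k experts and mix their outputs."""
    scores = softmax(x @ gate_w)                   # (batch, n_experts)
    top = np.argsort(-scores, axis=-1)[:, :top_k]  # chosen expert indices
    out = np.zeros((x.shape[0], d_out))
    for i, row in enumerate(x):
        for e in top[i]:
            out[i] += scores[i, e] * (row @ experts[e])
    return out


x = rng.normal(size=(5, d_in))          # a batch of 5 inputs
print(moe_forward(x, top_k=2).shape)    # (5, 4)
```

The tradeoff the UMA team observed, sharp gains from one expert to eight and only modest gains past 32, is a direct consequence of this structure: more experts add capacity, but each input still activates only a few of them.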
On widely recognized benchmarks such as AdsorbML and Matbench Discovery, UMA matches or outperforms models designed for specific tasks, demonstrating its versatility.
That said, UMA isn’t without limitations. It struggles with atomic interactions extending beyond six angstroms, and its fixed charge and spin categories mean it can’t yet handle values outside its training data. Meta’s roadmap includes plans to address these by enabling flexible interaction ranges and continuous charge embeddings, pushing toward a truly universal atomic model.
This progress could have massive implications for chemistry, materials science, and even drug discovery, where faster and more accurate simulations can accelerate innovation.
👅 The Graphene AI Tongue: Machines That Can Taste
One of the most fascinating breakthroughs I’ve come across recently is the development of a graphene-based AI tongue—an artificial taste sensor that can detect flavors with nearly human-level accuracy. This innovation bridges the gap between machine sensing and human perception in a way that feels almost like science fiction.
The hardware is a marvel of nanotechnology. Researchers layered sheets of graphene oxide—carbon lattices just one atom thick, dotted with oxygen groups—inside a tiny nanofluidic channel. This channel guides a minuscule stream of liquid across the graphene stack.
When molecules in the liquid interact with the graphene, they alter its electrical conductivity in unique ways—much like how pressing piano keys causes specific strings to vibrate. The resulting conductivity patterns serve as “signatures” for different flavors.
To translate these patterns into recognizable tastes, the team ran 160 different reference chemicals through the device, covering the classic sweet, salty, bitter, and sour spectrum. The data from these tests fed into a machine learning model that learned to identify flavor profiles. Importantly, the training also included complex mixtures like dark roast coffee concentrates and cola syrups, enabling the AI tongue to recognize blended flavor “chords” rather than just single-note molecules.
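To make the “signature to flavor” step concrete, here is a minimal, hypothetical sketch of the kind of classifier that could map conductivity-response vectors to taste labels. It uses scikit-learn and synthetic random data purely for illustration; it is not the researchers’ actual pipeline or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in data: each sample is a vector of conductivity readings
# from the graphene-oxide channel, and each label is a basic taste class.
n_samples, n_readings = 160, 64
X = rng.normal(size=(n_samples, n_readings))  # fake "signatures"
y = rng.integers(0, 4, size=n_samples)        # 0=sweet 1=salty 2=bitter 3=sour

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Any standard classifier can learn signature -> taste mappings once real
# measurements replace the random vectors above. With random data the
# held-out accuracy hovers near chance (~0.25); the point is the shape of
# the pipeline, not the number.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

With real conductivity signatures in place of the random vectors, the same kind of pipeline could in principle handle blended samples like the coffee and cola concentrates mentioned above, since those are simply additional labeled signatures.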
A key innovation here is that both the sensing layer and the tiny computer interpreting the signals live on the same chip. This integrated design eliminates the delay between detection and classification that plagued older electronic tongues, slashing latency and improving real-time performance.
Currently, the graphene tongue is a lab bench prototype—bulky and power-hungry compared to what a mobile device could handle. The next engineering challenge is miniaturization and power reduction, essential steps toward practical applications.
The potential use cases are exciting and varied:
- Quick taste loss screening for patients recovering from strokes or viral infections
- Food safety checks that detect spoilage before products reach shelves
- Robot kitchen assistants that adjust seasoning on the fly
The findings were published in the prestigious Proceedings of the National Academy of Sciences, underscoring their scientific credibility. The researchers emphasize, however, that a truly universal flavor sensor will require training on thousands of compounds—far beyond the current 160—before it can rival human taste comprehensively.
Still, this graphene AI tongue is a promising proof of concept that pushes machine sensing closer to human sensory experiences. It’s a glimpse into a future where AI could help us understand and manipulate flavor in ways never before possible.
⚡ Zuckerberg’s AI Supercomputers: Power Plants for the Age of AI
While researchers are pushing the boundaries of AI’s capabilities, Meta is doubling down on the raw compute power necessary to sustain these breakthroughs. Mark Zuckerberg recently revealed that Meta’s first AI data center supercluster, code-named Prometheus, is on track to come online in 2026 with a staggering capacity exceeding one gigawatt.
To put that in perspective, one gigawatt can power roughly 750,000 homes. Meta’s ambitions don’t stop there—the next cluster, Hyperion, is projected to reach five gigawatts over several years. These colossal facilities will be dedicated almost entirely to GPU power, the workhorses of modern AI training and inference.
Zuckerberg has openly stated that Meta is prepared to spend hundreds of billions of dollars in pursuit of superintelligence. Capital expenditures for 2025 alone are expected to range between $64 billion and $72 billion, highlighting the scale of investment needed to compete at the highest levels of AI research.
To complement this massive infrastructure buildout, Meta is aggressively recruiting top talent. Reports indicate Zuckerberg has dangled a $200 million package at a leading generative AI expert from Apple and has already secured high-profile executives such as former GitHub CEO Nat Friedman and former Scale AI CEO Alexandr Wang.
Despite some internal dissatisfaction over the release of Llama 4—which reportedly didn’t represent a significant leap over Llama 3—Meta’s supercluster strategy aims to close that gap by providing the computational horsepower to train ever-larger and more sophisticated models.
Investor confidence remains strong, with Meta’s stock climbing roughly 25% year to date and closing just under $721 recently. This financial backing and strategic focus suggest Meta is positioning itself to be a dominant force in the next generation of AI.
🤔 What’s Next? The Future of AI Beyond Imagination
As I reflect on these developments—from Google’s AI-enhanced photos and DeepMind’s real-time AI toolkits to Meta’s atomic simulations and graphene AI tongues—it’s clear that AI is no longer limited to the digital realm. It’s starting to taste, feel, and simulate the physical world with ever-increasing fidelity.
Mark Zuckerberg’s monumental investments in AI supercomputing infrastructure underscore how seriously the tech giants are taking this revolution. The race is no longer just about making chatbots smarter or automating mundane tasks. It’s about creating machines that can sense, understand, and interact with the world in ways that rival or even surpass human capabilities.
So, how long before AI starts doing things we haven’t even imagined yet? That question feels both thrilling and a little unnerving. The pace of change suggests it won’t be long before we see applications that blur the lines between science fiction and reality.
What do you think? Are we on the cusp of an AI revolution that will redefine human experience, or are there limits technology won’t cross? I’d love to hear your thoughts.
If you enjoyed this deep dive into the latest AI breakthroughs, don’t forget to explore more from AI Revolution and stay tuned for what’s next. The future is happening faster than ever, and it’s an incredible journey to witness.