
DeepMind Can Now Generate Video from an Image


The technology could speed development, research, and virtual environment rendering – potentially shortening development and reducing the cost to deploy such virtual environment tools.

Nov 2, 2022

DeepMind has gained a new ability: generating a video of up to 30 seconds from a single input image. The model behind the feature, called Transframer, could let developers render generated video far faster than traditional methods and lower the barriers they face when building 3D environments.

Essentially, Transframer analyzes key points in an image and predicts how the scene would move in a 3D video environment. Without explicit geometric information, it can build a coherent 30-second video by identifying the picture's framing and contextual clues that indicate how the image might look if a viewer changed the angle or moved through the space.
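To make the idea concrete, the sketch below mimics the *interface* of such a system: context frames carry viewpoint annotations, new frames are predicted for unseen viewpoints, and each predicted frame is fed back in as context, so a whole video unrolls from a single image. This is a toy illustration, not DeepMind's model; `predict_frame` here just blends the nearest annotated frames, whereas the real Transframer is a learned generative model.

```python
import numpy as np

def predict_frame(context_frames, context_angles, target_angle):
    """Toy stand-in for a Transframer-style predictor: given context
    frames annotated with camera angles, produce the frame at a new
    angle by linearly interpolating/extrapolating the two nearest
    context frames. (The real model learns this mapping.)"""
    angles = np.asarray(context_angles, dtype=float)
    order = np.argsort(np.abs(angles - target_angle))
    a = order[0]
    b = order[1] if len(order) > 1 else order[0]
    if a == b or angles[a] == angles[b]:
        return np.array(context_frames[a], copy=True)
    # Weight on the nearer frame; >1 means extrapolation past it.
    w = abs(angles[b] - target_angle) / abs(angles[b] - angles[a])
    return w * context_frames[a] + (1 - w) * context_frames[b]

def generate_video(start_frame, start_angle, end_angle, n_frames):
    """Unroll a video from one image: each generated frame joins the
    context set, mimicking autoregressive video generation."""
    frames = [np.asarray(start_frame, dtype=float)]
    angles = [float(start_angle)]
    for t in np.linspace(start_angle, end_angle, n_frames)[1:]:
        frames.append(predict_frame(frames, angles, t))
        angles.append(float(t))
    return frames
```

The design point the sketch captures is that nothing geometric is supplied: the "camera angle" is just an annotation on each frame, and the predictor must infer what the scene looks like from viewpoints it has never seen.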

An immediate application could be video game environments built through the predictive power of image analysis rather than the time-consuming rendering that game artists and developers rely on today. The same approach could accelerate research and virtual environment work in other industries, cutting both development time and the cost of deploying such tools.

See also: MIT, Toyota Share Self-Driving Video Dataset

Imagining an image from different perspectives

The new model shows early promise in benchmarking and has many developers excited about the possibilities. The excitement should extend well beyond gaming, because the technology could lower the barriers to building richer AR/VR experiences. DeepMind's developers already envision applications in scientific and industrial research, and the capability should expand further as the team continues to improve and benchmark the model.

The proposed model also yielded promising results on eight other tasks, including semantic segmentation and image classification. Google will continue to push the boundaries of what its machines can accomplish, and the recently published paper describing the video generation work, along with further commentary, is available to read.


Elizabeth Wallace is a Nashville-based freelance writer with a soft spot for data science and AI and a background in linguistics. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain - clearly - what it is they do.

