At the Augmented World Expo on Tuesday, Snap teased an early version of a real-time, on-device image diffusion model that can generate vivid augmented reality experiences. The company also unveiled generative AI tools for augmented reality creators.
The model is small enough to run on a smartphone and fast enough to re-render frames in real time, guided by a text prompt, Snap co-founder and CTO Bobby Murphy said on stage.
Murphy said that while generative image diffusion models have been exciting, these models need to be much faster to be impactful in augmented reality, which is why his teams have been working on accelerating machine learning models.
Snapchat users will start seeing Lenses built with this generative model in the coming months, and Snap plans to bring it to creators by the end of the year.
“These real-time, on-device generative machine learning models speak to an exciting new direction for AR, and give us the space to completely rethink how we imagine rendering and creating AR experiences,” Murphy said.
Murphy also announced that Lens Studio 5.0 is launching today for developers, who will have access to new generative AI tools that help them create AR effects much faster than is currently possible, saving them weeks or even months.
AR creators can build selfie Lenses with highly realistic face effects powered by machine learning. They can also generate custom stylization effects that apply a realistic transformation to a user’s face, body, and surroundings in real time, and produce a 3D asset in minutes to insert into their Lenses.
Additionally, AR creators can generate characters such as aliens or wizards from a text prompt or an image using the company’s Face Mesh technology. They can also create face masks, textures, and materials in minutes.
The latest version of Lens Studio also includes an AI assistant that can answer questions AR creators might have.