Closer

Apple Vision Pro AI-powered voice communicator
In short
We have developed an immersive, AI-powered Apple Vision Pro voice communicator app that is tailor-made for the spatial user interface of Apple visionOS.
What we did
Concept, Design, Development

Client
Vision by LePB

Finished on
August 28, 2023

Our aspiration was to forge an intensely personal and intimate connection between the sender and the receiver. Here's how we brought this vision to life:

Received messages, while fundamentally recorded audio, are analysed by AI and transcribed into text. The AI then generates a spherical image that encapsulates the message's content.

This visual representation is then transformed into a three-dimensional bubble, enabling the recipient to gain an immediate 'sense' of the message's essence.
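
To make the bubble step concrete, here is a minimal RealityKit sketch of how such a textured bubble could be built, assuming the AI-generated spherical image arrives as a CGImage. The function name `makeMessageBubble` and the chosen radius are illustrative, not Closer's actual code:

```swift
import CoreGraphics
import RealityKit

// Hypothetical helper: wraps an AI-generated spherical image in a
// tappable 3D "bubble" entity. The name and radius are illustrative.
func makeMessageBubble(from image: CGImage) throws -> ModelEntity {
    // Turn the generated image into a RealityKit texture.
    let texture = try TextureResource.generate(from: image,
                                               options: .init(semantic: .color))

    // An unlit material keeps the bubble readable regardless of the
    // lighting in the scene around the user.
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))

    // A small sphere acts as the tangible representation of the message.
    let bubble = ModelEntity(mesh: .generateSphere(radius: 0.06),
                             materials: [material])

    // Collision + input target make the bubble respond to gaze and tap
    // on visionOS, so the recipient can select it to play the message.
    bubble.generateCollisionShapes(recursive: false)
    bubble.components.set(InputTargetComponent())
    return bubble
}
```

An unlit material is just one plausible choice; the production bubble uses custom shader-graph materials, as described under "Technology" below.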

visionOS Immersion mode

When a message is played back, we immerse the user in an AI-generated representation of the message's content. For instance, below, Agnes shares a dream she had, vividly describing her experience of walking through a bustling street in India. The other image shows Immersion mode for another message, this time from Mike, in which he announces his intention to visit for dinner with his wife.
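
Conceptually, this playback flow maps onto a visionOS immersive space. Below is a hedged SwiftUI sketch of that structure; the identifiers ("messageImmersion", "generatedScene", MessagesView) are assumptions for illustration, not Closer's actual code:

```swift
import SwiftUI
import RealityKit

@main
struct CloserSketchApp: App {
    var body: some Scene {
        // The regular windowed UI where messages are listed.
        WindowGroup { MessagesView() }

        // The immersive space that wraps the listener in the
        // AI-generated 360-degree scene during playback.
        ImmersiveSpace(id: "messageImmersion") {
            RealityView { content in
                guard let texture = try? TextureResource.load(named: "generatedScene")
                else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))

                // A large sphere, flipped inside out with a negative X
                // scale, acts as the skybox surrounding the user.
                let sky = ModelEntity(mesh: .generateSphere(radius: 50),
                                      materials: [material])
                sky.scale = .init(x: -1, y: 1, z: 1)
                content.add(sky)
            }
        }
    }
}

struct MessagesView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        // Playing a message opens the immersive space.
        Button("Play message") {
            Task { await openImmersiveSpace(id: "messageImmersion") }
        }
    }
}
```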

Attention to detail

While implementing Closer, we dedicated significant attention to micro-animations, enriching the user experience.

Spatial app icon

We devised a spatial icon in accordance with the visionOS guidelines, so that its layers separate and come forward when the user gazes upon it.

The icon's visuals encompass several concepts. The white shapes can be interpreted as the letters 'C', aligning with the app's name, 'Closer.' They are also mirrored, symbolizing two users situated on opposite sides of the app. When combined, the shape at the intersection of the 'C's resembles an eye. Lastly, the 'C' shapes signify the immersion mode, encircling the user in a span of 180 or even 360 degrees.

Technology

We developed the application using Xcode Beta, Swift, and SwiftUI.

We designed and animated 3D components, such as the bubble representation of the message, using custom shaders and shader graphs in Reality Composer Pro.
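
For illustration, loading such an authored asset at runtime could look like the sketch below. The scene name "BubbleScene", the "Progress" parameter, and the generated RealityKitContent package name are assumptions, since the project's actual identifiers are not public:

```swift
import RealityKit
import RealityKitContent  // Swift package that Reality Composer Pro generates

// Sketch: load a bubble entity authored in Reality Composer Pro and
// drive one of its shader-graph parameters from code. "BubbleScene"
// and "Progress" are hypothetical names.
func loadAuthoredBubble() async throws -> Entity {
    // Shader-graph materials authored in Reality Composer Pro come
    // across automatically with the loaded entity.
    let bubble = try await Entity(named: "BubbleScene", in: realityKitContentBundle)

    // Custom shader-graph inputs can be animated at runtime by
    // updating the material's parameters.
    if var material = bubble.components[ModelComponent.self]?
        .materials.first as? ShaderGraphMaterial {
        try material.setParameter(name: "Progress", value: .float(0.5))
        bubble.components[ModelComponent.self]?.materials = [material]
    }
    return bubble
}
```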

On the backend, we established an auto-scaling infrastructure on the Google Cloud Platform, incorporating a Google Cloud Run service to host our backend API. Whenever a voice message was sent, it was routed through this backend to be transcribed into text using Google Cloud Speech-to-Text. Subsequently, we utilized OpenAI's ChatGPT API to analyze the message's semantic content and distill key concepts articulated by the user. We then tasked ChatGPT with transforming these concepts into descriptive scenes. Lastly, we fed these descriptions into Blockade Labs' AI-powered Skybox generator to produce immersive 360-degree scenes.
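
That flow can be summarized with the hedged sketch below. Swift is used for consistency with the rest of this case study even though the Cloud Run service itself may be written in another language; the request shapes are simplified, every helper name is illustrative, and the Blockade Labs payload fields in particular should be checked against their API documentation:

```swift
import Foundation

// Hedged sketch of the message pipeline described above, not the
// production backend: voice note in -> skybox URL out.
struct MessagePipeline {
    let speechToTextKey: String   // Google Cloud API key (assumption: key auth)
    let openAIKey: String
    let skyboxAPIKey: String      // Blockade Labs key

    func process(audio: Data) async throws -> String {
        let transcript = try await transcribe(audio)
        let scene = try await describeScene(from: transcript)
        return try await generateSkybox(prompt: scene)
    }

    // Stage 1: Google Cloud Speech-to-Text (speech:recognize REST method).
    private func transcribe(_ audio: Data) async throws -> String {
        let url = URL(string: "https://speech.googleapis.com/v1/speech:recognize?key=\(speechToTextKey)")!
        let body: [String: Any] = [
            "config": ["languageCode": "en-US"],
            "audio": ["content": audio.base64EncodedString()],
        ]
        let json = try await postJSON(to: url, body: body, headers: [:])
        // Simplified: take the first alternative of the first result.
        let results = json["results"] as? [[String: Any]]
        let alternatives = results?.first?["alternatives"] as? [[String: Any]]
        return alternatives?.first?["transcript"] as? String ?? ""
    }

    // Stage 2: ask ChatGPT to distill the message into a scene description.
    // The model name is an assumption.
    private func describeScene(from transcript: String) async throws -> String {
        let url = URL(string: "https://api.openai.com/v1/chat/completions")!
        let body: [String: Any] = [
            "model": "gpt-3.5-turbo",
            "messages": [
                ["role": "system",
                 "content": "Extract the key concepts from the message and describe them as a single visual scene."],
                ["role": "user", "content": transcript],
            ],
        ]
        let json = try await postJSON(to: url, body: body,
                                      headers: ["Authorization": "Bearer \(openAIKey)"])
        let choices = json["choices"] as? [[String: Any]]
        let message = choices?.first?["message"] as? [String: Any]
        return message?["content"] as? String ?? ""
    }

    // Stage 3: Blockade Labs' Skybox generator. The request and response
    // shapes here are assumptions; consult their docs for the real contract.
    private func generateSkybox(prompt: String) async throws -> String {
        let url = URL(string: "https://backend.blockadelabs.com/api/v1/skybox")!
        let json = try await postJSON(to: url, body: ["prompt": prompt],
                                      headers: ["x-api-key": skyboxAPIKey])
        return json["file_url"] as? String ?? ""
    }

    // Shared helper: POST a JSON body, decode a JSON object response.
    private func postJSON(to url: URL, body: [String: Any],
                          headers: [String: String]) async throws -> [String: Any] {
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        headers.forEach { request.setValue($0.value, forHTTPHeaderField: $0.key) }
        request.httpBody = try JSONSerialization.data(withJSONObject: body)
        let (data, _) = try await URLSession.shared.data(for: request)
        return (try JSONSerialization.jsonObject(with: data)) as? [String: Any] ?? [:]
    }
}
```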


Questions are fine. We're here to answer them.

We have prepared answers to some of the most frequently asked questions for you. If there is anything else you would like to ask, don't hesitate to reach out.

"At Le Polish Bureau I promote an open and honest conversation culture, where asking questions and saying "I don't know" is fine. There is lots of things I don't know myself. And I'm open about this with our clients. But I don't think there are many companies better than us in finding the answers."

- Maciej Zasada, Managing Partner

How are you developing for Apple Vision Pro already today?

We have access to Apple's beta tools, including the Apple Vision Pro simulator, which allows us to build for the platform today.

It is a new platform. What relevant experience do you have?

We have been working in the XR (AR, VR, MR) space for around a decade now. In 2014, we developed an immersive VR project for Nissan using a prototype Oculus headset; it hasn't been publicly released yet. We have designed, developed, and released several commercial XR projects, including large-scale event venue augmentations for Niantic (Pokemon Go's creators) and Meet Wol, an AI-powered cross-platform WebXR experience, also for Niantic. A decade of working with spatial apps and interfaces puts us in a natural and comfortable position to work with Apple Vision Pro now.

Do you have access to the Apple Vision Pro device?

We have access to the official Apple Vision Pro simulator, which closely emulates the behavior of the real device and allows us to test all the applications we are developing now. We have also signed up for the Apple Vision Pro developer kit, Apple workshops, and beta tool access from both Apple and Unity.

What kind of applications do you develop?

Ones that make the most sense for your brand. After we get to know your company, we will brainstorm together and discuss which applications will help you reach your organisational goals. We have the expertise needed to develop all kinds of Apple Vision Pro apps - UI-based, Mixed Reality, fully immersive, or web-based.

Will my website work on Apple Vision Pro?

Existing websites will generally continue to work on Apple Vision Pro. But the headset opens up a whole new range of possibilities on the web. Websites can add 3D content and spatial interactivity that becomes available on Apple Vision Pro without impacting the experience on any of the other platforms - desktop, tablet, and mobile. We can also help you proactively test your website now to ensure that it is fully operational on Apple Vision Pro when the device is released.

How much does it cost to work with you?

This depends entirely on the scope. The initial consulting call is free, so the best first step is simply to reach out. Actual design and development costs will vary depending on the chosen direction.

Get in touch to discuss the possibility of launching your brand on Apple Vision Pro today.

vision@lepolishbureau.com