True Cross Platform, Immersive, Open Source Connected Spaces

Magnopus has released an open-source, cross-reality, cross-platform interoperability framework for shared spaces and experiences. It allows developers to build applications that work universally for different users – all in the same virtual space, regardless of their device and whether they’re in the physical world or the digital one. Magnopus is an award-winning company based in Los Angeles, at the forefront of revolutionizing our industry through groundbreaking spatial experiences. With its unique blend of cutting-edge technology, Oscar-winning talent, and creative storytelling, Magnopus is redefining the concept of immersive entertainment; forget the Metaverse, this is all about shared connected spaces.

Their ‘Connected Spaces’ project is truly cross-platform and not just cross-device. It is not just that the Connected Spaces Platform runs on everything from mobile phones to AR headsets – more significantly, it runs across Unreal, Unity, Chrome browsers (running in WebGL), Apple (ARKit), etc. It allows people not only to view an experience on a variety of devices and in a variety of formats but also, critically, to develop assets on a huge variety of platforms – collaboratively, live, and simultaneously.

The range of apps, plugins, cloud services, and components that run on top of the Connected Spaces Platform allows developers to build applications that are really ‘sticky’, engaging experiences.


So what is it?

In many respects, it is a foundational building block for the spatial internet, allowing developers to build an app once and easily deploy it across different platforms. We recently sat down with the system at Magnopus’ LA offices and asked Ben Grossmann to describe it in terms of current technology. “If I were to compare it to things that people already know – if you’re in the movie business, you know CineSync and RV, which let you share videos and annotate on them with tools and messages from users. So it’s not unlike Zoom meets CineSync or RV, but with Miro – except those are 2D and this is all in 3D.” He adds: “And that 3D can be anywhere between a completely virtual world that has no parallel in the physical world, or it can be anchored to a very specific place in the physical world, with a digital copy accurate down to the centimeter.”

Authors can make fantastical imaginary worlds that users can experience and edit in real time, or they can have an exact digital replica of a real place. You could build a fictional world, or bring digital characters into your actual location, where they would appear to be integrated into your space (not just overlaid or slapped on top). Connected Spaces can work in a game engine, but it can also work in a browser without the need to download an app. Connected Spaces is designed to be ubiquitous and to allow a new three-dimensional web of interactive experiences, where a user can go from being a passive observer to being an active participant.

“The narrative goes like this: the web today is a series of 2D connected pages. The web tomorrow is a series of connected 3D spaces.”

– Ben Grossmann, Co-founder, Magnopus

Background

Magnopus was founded in 2013 by a group of M&E creatives with diverse backgrounds in film, games, and technology. Their shared passion for pushing boundaries and creating innovative experiences led them to establish a company dedicated to exploring the intersection of storytelling and technology. We have spoken to co-founder Ben Grossmann several times here at fxguide, discussing their work on The Lion King, The Jungle Book, and Hugo, and also extensively about their other work in VR and AR.

Magnopus is made up of both Oscar-winning visual effects artists and VR/AR specialists. The company has made such landmark VR projects as CocoVR with Disney-Pixar; Mission: ISS in collaboration with NASA, which allows users to explore the International Space Station; and the Disney Remembering project, for which Magnopus made AR experiences that come ‘out of the TV screen’ into people’s living rooms, in sync with the traditional video media on Disney+. Magnopus specializes in creating high-concept interactive experiences that challenge traditional storytelling norms. Their multidisciplinary team of artists, engineers, and designers collaborates closely to bring their visions to life. By seamlessly integrating technology with narrative, they have redefined the possibilities of interactive storytelling.

This led to the immersive location-based project of building a cross-reality, social digital twin of the World Expo in Dubai. After that project finished, Magnopus continued to push the boundaries of what is possible in the realm of experiential environments and developed the ‘Connected Spaces’ platform, which they have just made open source.

Digital Twins

The concept of digital twins refers to virtual replicas of physical objects or systems, made with the power of advanced CG and AI. As the name suggests, a digital twin is a digital copy of an object, system, or place that can be continuously updated with data from its physical counterpart, providing a dynamic and immersive way to understand, simulate, and interact with the real thing.

NVIDIA is another company that has focused on the creation of highly realistic and interactive digital twins. By combining accurate visual representation with real-time data, NVIDIA’s digital twins can simulate the behavior, performance, and characteristics of their physical counterparts. This concept has significant implications across various industries. For example, in manufacturing, digital twins can be used to optimize production processes, monitor equipment performance, and predict maintenance needs. In architecture and construction, digital twins can help visualize and simulate building designs, enabling better planning and resource management.

NVIDIA’s digital twins primarily serve industries such as manufacturing, architecture, and healthcare. They enable the simulation and analysis of real-world objects or systems, providing insights for optimization, planning, and decision-making.  On the other hand, Magnopus crafts high-concept interactive experiences that transport audiences into digital worlds, blurring the lines between the real world and expanded interactive storytelling.

Magnopus focuses on creating unforgettable, emotionally immersive experiences that challenge traditional storytelling norms. While the more industrial NVIDIA digital twins aim to replicate physical reality for practical purposes, Magnopus’ immersive experiences aim to create unique entertainment and storytelling encounters. Both NVIDIA’s digital twins and Magnopus’ interactive experiences showcase the power of cross-platform technology but in different ways. In many respects, Magnopus is designed for a wider audience of both users and creators.


World Expo: Connected Spaces ‘Alpha’ version

Magnopus was awarded the task of creating a digital twin / digital experience for the World Expo in Dubai. This presented a unique opportunity to engage with captivating narratives and explore mesmerizing environments that both duplicated and extended the experience of being at the actual World Expo. Originally, the plan was that through virtual reality and augmented reality, attendees would be able to immerse themselves in a new realm of storytelling that blurred the lines between fiction and reality. Although the focus and implementation shifted due to COVID-19, it was still a vast undertaking. In the end, it became a global event that digitally ran 24 hours a day. The physical Expo had around 300,000 visitors going through the actual site daily, and there were over 17 million virtual visits to the digital twin Expo. Key to making the digital side work was mapping the digital world to the real Expo site with incredible accuracy.

Reconciling the physical world and the virtual world is hard, especially if you want to interact with other people. For example, imagine there is a virtual copy of a room that the user is in at a site like Expo. For the person in that room, you can add digital elements and interaction overlaid on the actual room, mapped live so they appear linked or attached to the room. “But if one of our friends wants to join us from London, then they need to see us in a digital copy of this room,” explains Ben Grossmann. “If we are local, we can actually see the living room. But our visitor from London needs to see a digital copy of the living room and the digital objects we’ve placed in it.” But the problem is much more complex than just this. The London friend also needs to see the other users digitally in the exact locations that correspond to their positions in the physical world. To do this, the physical world and the digital world have to be reconciled with very low friction. You need accuracy down to the centimeter level so that it seems like both people are making eye contact. “If you and I are having a conversation, eye contact is important,” Ben adds. “If their avatar was always looking above or over your shoulder, that would be weird.”


To solve this, Magnopus came up with a cascading set of solutions. First, they use GPS and the compass orientation on the user’s phone to figure out approximately where they are in the world and which way they are facing. But this is too crude for engaging interactions or for making quality eye contact.

At Expo in Dubai, the system started with GPS, which would estimate a position to plus or minus 10 meters depending on buildings and people’s locations – but alone, this could sometimes be out by as much as 50 meters. Then, Grossmann explains, it would move on to looking at sensors and the spectrum around the user. This could be Wi-Fi triangulation, ultra-wideband positioning, or beacons; it depended on what kind of environment the user was in. “Now that we’ve culled you down to inside plus or minus 10 meters, we’d use attendees’ phone cameras and use a visual positioning system.” In Dubai’s particular case, the team used Google Cloud Anchors, but there are several other similar systems available. The visual positioning system looks through the camera and creates a sparse point cloud, which is like a 3D fingerprint of the environment. As the user moves their phone around, it compares the geometry of the space to a small catalog of very sparse point clouds to identify a 3D fingerprint and resolve a match. The system also offers the option of using an image target, such as a QR code. “A lot of people use QR codes,” comments Ben, “but they’re very ugly and we don’t really need to use them.”
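To make the cascade concrete, here is a minimal sketch of the coarse-to-fine idea in TypeScript. This is not Magnopus’ Expo code or the Connected Spaces Platform API – every type, name, and threshold below is a hypothetical illustration of the fallback chain Grossmann describes.

```typescript
// Hypothetical sketch of a coarse-to-fine localization cascade.
// None of these names come from the Connected Spaces Platform API.

interface PositionEstimate {
  lat: number;
  lon: number;
  headingDeg: number;      // compass orientation
  accuracyMeters: number;  // estimated error radius
  source: "gps" | "radio" | "vps";
}

// Each stage refines the previous estimate, or returns null if unavailable.
type Localizer = (prior: PositionEstimate | null) => Promise<PositionEstimate | null>;

const gpsLocalizer: Localizer = async () => ({
  lat: 25.1972, lon: 55.2744, headingDeg: 90,
  accuracyMeters: 50, // GPS alone can be out by tens of meters near buildings
  source: "gps",
});

const radioLocalizer: Localizer = async (prior) => {
  if (!prior) return null;
  // Wi-Fi triangulation / UWB beacons narrow the search area to roughly 10 m.
  return { ...prior, accuracyMeters: 10, source: "radio" };
};

const vpsLocalizer: Localizer = async (prior) => {
  if (!prior || prior.accuracyMeters > 15) return null; // VPS needs a small search area
  // A visual positioning system matches a sparse point cloud ("3D fingerprint")
  // from the camera against a small catalog, resolving down to centimeters.
  return { ...prior, accuracyMeters: 0.01, source: "vps" };
};

async function localize(stages: Localizer[]): Promise<PositionEstimate | null> {
  let best: PositionEstimate | null = null;
  for (const stage of stages) {
    const estimate = await stage(best);
    if (estimate && (!best || estimate.accuracyMeters < best.accuracyMeters)) {
      best = estimate; // keep the most accurate fix so far
    }
  }
  return best;
}

// Usage: run the cascade until the fix is good enough for eye contact.
localize([gpsLocalizer, radioLocalizer, vpsLocalizer])
  .then((fix) => console.log(fix?.source, fix?.accuracyMeters));
```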

The system can now place objects and people in a very accurate space and have people look at and interact with each other – even if they are on separate continents. It was accurate enough to resolve the entire World Expo site down to the centimeter. “And that’s more than enough level of precision for eye contact,” he explains. “So now you’ve got people in the physical world and people in the digital world, and everybody’s moving around, able to do stuff like handing each other objects.” This is how the Connected Spaces approach can reconcile the physical and the digital as a cross-reality bridge.

On top of that, the server you are using in Dubai would have too much latency if it also had to serve London, so behind the scenes, a global collection of regional servers is spun up and kept in sync. “You need all these systems to work together with really low friction,” he concludes. This is where edge computing and 5G networks become critical. These kinds of high-data, low-latency experiences really benefit from an architecture that takes the topology of the network into consideration when optimizing routes, and even the experience between users depending on where they are. “It’s also critical to managing costs and the environmental impact of the required compute,” comments Grossmann. “We’re not streaming something as simple as 4K video/audio here. We’re about to start streaming people, places, and things in 3D – eye blinks, pupil dilation, hand gestures, etc. – between hundreds of people in milliseconds on a network designed for sending emails and documents.”
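A minimal sketch of one piece of that puzzle – routing each client to its lowest-latency regional server – might look like the following. The region names and ping endpoint are invented for illustration; Magnopus has not published its routing logic in this detail.

```typescript
// Hypothetical sketch: route each client to its lowest-latency region.
// Region names and the ping mechanism are illustrative, not Magnopus' stack.

const REGIONS = ["me-south-1", "eu-west-2", "us-west-1"]; // e.g. Dubai, London, LA

async function measureLatencyMs(region: string): Promise<number> {
  const start = Date.now();
  // In practice this would be a lightweight ping to a regional health endpoint.
  await fetch(`https://${region}.example-realtime.net/ping`).catch(() => undefined);
  return Date.now() - start;
}

async function pickRegion(): Promise<string> {
  const latencies = await Promise.all(
    REGIONS.map(async (r) => ({ region: r, ms: await measureLatencyMs(r) }))
  );
  latencies.sort((a, b) => a.ms - b.ms);
  return latencies[0].region; // connect here; regions sync state behind the scenes
}

pickRegion().then((r) => console.log("connect to", r));
```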

Working at Magnopus – note the range of connected spaces, from Unity on an iPad (L), Unreal Engine on desktop (R), and Chrome on a MacBook Air (C and bottom). And note that the real-life background of this ‘space’ is the very environment to the left of shot.

People, Places, and Things

In designing the system, the team resolved that the world needed to be categorized as either

  • people,
  • places, or
  • things.

As people really care about people, the team needed to worry about how identities were represented, how people would be seen across all platforms, and with what forms of communication. They also focused on places: environments or contexts that could have IoT devices, noting that an environment could be completely made up or a digital twin of a real place like the World Expo. Finally, there was the issue of objects. Because an object can move from one environment to another, with different properties, the team broke objects down and allowed them to have properties both digital and real. For example, a light switch needs to be able to turn on a digital light in a twin but also trigger a real light coming on in the real world.

Objects must also have a natural set of properties to work in a world. They need a real-world scale, they need a sensible center of origin, a universally valid look and feel, and they need their functions and capabilities to be the same no matter which platform or device is being used. In other words, the light switch must work in Unity, in Unreal, and as an AR overlay, and turn on the real lights in all cases.
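As a rough sketch of what such a dual-acting object might look like in code – assuming a hypothetical ConnectedThing type that is not part of the actual Connected Spaces Platform API – the light switch example could be modeled like this:

```typescript
// Hypothetical sketch of a "thing" whose properties span digital and physical.
// Not the Connected Spaces Platform API – just the idea of dual-acting objects.

interface ThingProperties {
  scaleMeters: number;              // real-world scale
  origin: [number, number, number]; // a sensible center of origin
  state: Record<string, boolean | number | string>;
}

type Effect = (state: ThingProperties["state"]) => void;

class ConnectedThing {
  private effects: Effect[] = [];
  constructor(public id: string, public props: ThingProperties) {}

  // Register what happens when state changes: digital, physical, or both.
  onStateChange(effect: Effect) {
    this.effects.push(effect);
  }

  setState(key: string, value: boolean | number | string) {
    this.props.state[key] = value;
    this.effects.forEach((fx) => fx(this.props.state)); // fan out to every world
  }
}

// A light switch that toggles the digital twin's light AND a real IoT bulb.
const lightSwitch = new ConnectedThing("wall-switch-01", {
  scaleMeters: 0.1,
  origin: [2.5, 1.2, 0.0],
  state: { on: false },
});

lightSwitch.onStateChange((s) => console.log("digital light rendered:", s.on));
lightSwitch.onStateChange((s) => console.log("IoT request: set real bulb to", s.on));

lightSwitch.setState("on", true); // both the twin and the real room light up
```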

Plus, more than just Magnopus’ own team needed to be able to script and automate these functions: they wanted a wide variety of designers and artists, all on different setups, to also be able to easily program objects or import assets into an experience. “So when we looked at all those different contexts, we realized this is an issue of interoperability and compatibility,” says Grossmann, “as all these devices currently spoke different languages.”

The team started by writing interpreters for different languages to make functions work universally across all platforms. For example, the system supports an object type of ‘video’. “Okay, well, we all know what a video is. It’s a series of still frames with compression algorithms and other clever stuff. What do people do with videos? Well, they have to play them, pause them – videos have a timeline, they can fast forward, etc.,” he outlines. These molecular functions of the primitive object ‘video’ now need to be interpreted across Unity, Unreal, WebGL, VR, AR – whatever the Connected Spaces system supports. “Now that we have that abstraction for all those functions around this object, repeat that process with people, static objects, animated objects, environments,” he explains. “And then glue them together within this interoperability framework, so that when somebody presses ‘play’ on a video in, say, Unity, the system knows how to interpret that same function in Unreal Engine or on the web and sync all the systems.”
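A minimal sketch of that interpreter idea – with invented names rather than the real Connected Spaces Platform API – might define the universal ‘video’ functions once and let each platform supply its own adapter:

```typescript
// Hypothetical sketch of the "interpreter" idea for a primitive 'video' object.
// The universal functions are defined once; each platform implements them.

interface VideoComponent {
  play(): void;
  pause(): void;
  seek(seconds: number): void;
}

// Each engine gets an adapter translating the universal call into its
// native playback API (stubbed here with logs).
const unityVideo: VideoComponent = {
  play: () => console.log("[Unity] VideoPlayer.Play()"),
  pause: () => console.log("[Unity] VideoPlayer.Pause()"),
  seek: (s) => console.log(`[Unity] VideoPlayer.time = ${s}`),
};

const webVideo: VideoComponent = {
  play: () => console.log("[Web] HTMLVideoElement.play()"),
  pause: () => console.log("[Web] HTMLVideoElement.pause()"),
  seek: (s) => console.log(`[Web] video.currentTime = ${s}`),
};

// The sync layer: an action invoked on one client is replayed on all others.
class VideoSync {
  private clients: VideoComponent[] = [];
  register(client: VideoComponent) { this.clients.push(client); }
  broadcast(action: keyof VideoComponent, arg?: number) {
    for (const c of this.clients) {
      action === "seek" ? c.seek(arg ?? 0) : c[action]();
    }
  }
}

// Pressing 'play' in Unity plays the video for the web user too.
const sync = new VideoSync();
sync.register(unityVideo);
sync.register(webVideo);
sync.broadcast("play");
sync.broadcast("seek", 42);
```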

COVID Shift

Magnopus is known for its cutting-edge work in VR and AR, and the Connected Spaces Platform is extremely well placed for the upcoming Apple Vision Pro, but the original World Expo project got an unexpected refocus with COVID-19. While such things as complex VR rigs were central at the outset, once COVID hit, the team had to adjust to the ‘new normal’, and that meant servicing a lot of people who were going to want to interact with just the simple tech they already had at home.

“We had to pivot pretty hard and reprioritize what we were working on and what problem we were trying to solve,” comments Grossmann. Before the pandemic, the team had a mandate to show what was going to be possible in the future, at the highest end, in the highest quality possible. But when the pandemic hit, they had a new problem to solve: they now needed to focus on taking as much of the content experience from the Expo as possible and making it available to people who couldn’t visit physically. “Because people put a lot of work into these educational and entertaining experiences, and now it was possible that these weren’t going to be able to be seen – since people couldn’t travel,” Ben explains.

The slogan for the World Expo was “Connecting Minds, Creating the Future”. Magnopus made the argument that the Expo would not be connecting minds if it was not bringing people together. That meant increasing the communication between the physical and digital worlds using a much broader and more widely available set of technology. “This was the hardest thing we could possibly have done because it hadn’t been done before,” he points out. So they took the content layer and modified it to provide the widest educational experience to the broadest audience.

Magnopus worked with the great team at Dog Studios, and together they found a way to make their high-quality immersive experience a more accessible cross-platform experience. “We originally started out in the high-end thinking – ‘let’s use Magic Leap goggles. Let’s use VR headsets. Let’s use these really high-end devices,’” Grossmann recalls. “And then when the pandemic hit, we realized most of the people that we need to support won’t have access to VR headsets and AR glasses.” Using an approach built around clever web skills and web 3D, they re-engineered the backend services, content management, and multiplayer systems to work with mobile apps or via a web interface.

After the Expo, Magnopus leaned even further into web tech and open source. They shifted their focus away from just high-end devices – although they still very much support them – and focused more on making the same content experience available to people on web browsers and on low-powered computers. So while Connected Spaces is a natural fit for spatial computing, the same content can be experienced on a $3,500+ Vision Pro and in a Chrome web browser on a two-year-old MacBook Air, and all the people, places, and objects will work the same, and interactively.

Importance of Range to Developers

One of the real issues with developing for any new and very advanced technology like the Vision Pro is how few devices are in the field when you launch an application for it. It is a chicken-and-egg problem: people won’t buy new tech unless there are applications for it, and it is not financially viable to make an app for a product that has yet to ship to a mass audience.

The problem is made doubly worse if you can’t find staff experienced in the new tech, or you need vast resources to train up a new dedicated team. Magnopus wants to create new parallel worlds that work whether people have high-end devices or not; otherwise, new technologies divide the tech haves from the have-nots. “This pushes people apart,” Grossmann points out, “which is exactly the opposite of what we’re trying to do. We want people to come together regardless of what they have, or where they are.”

The Connected Spaces Platform directly addresses this. Developers can produce experiences using tools they know, and those experiences will work on both new tech and old. People can happily experience what you build on the tech they have, or a more immersive version if their budget allows.

To do this, Magnopus faced the huge issue of supporting a vast array of data formats. For example, if I wish to add a 3D car to a scene, I may build it in the app or platform I am using, or I might just buy a model from a library. In the latter case, Connected Spaces allows the user to import any model. If it already meets the desired specs, it is immediately imported into the Connected Spaces world via UE5 or Unity – whichever is active. If the format is not standard, then the program spins up Blender in the cloud, converts the file, and imports it automatically.
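The decision logic might look something like the sketch below. The format list, conversion step, and function names are all assumptions for illustration, not the actual Magnopus pipeline:

```typescript
// Hypothetical sketch of the asset-import decision described above.
// The conversion step and format list are illustrative, not the real pipeline.

const DIRECTLY_IMPORTABLE = new Set(["glb", "gltf", "png", "jpg", "mp4"]);

function extension(filename: string): string {
  return filename.split(".").pop()?.toLowerCase() ?? "";
}

async function convertInCloud(filename: string): Promise<string> {
  // Stand-in for spinning up headless Blender in the cloud and exporting glTF,
  // conceptually: blender --background --python convert.py -- in.fbx out.glb
  console.log(`converting ${filename} via cloud Blender...`);
  return filename.replace(/\.[^.]+$/, ".glb");
}

async function importAsset(filename: string): Promise<string> {
  if (DIRECTLY_IMPORTABLE.has(extension(filename))) {
    return filename; // meets spec: hand straight to the active engine (UE5, Unity, web)
  }
  return convertInCloud(filename); // non-standard: convert first, then import
}

importAsset("vintage_car.fbx").then((f) => console.log("imported", f));
```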

The consumer interface and the simplicity are things that we value and prioritize… the interoperability for people that don’t have mega resources. So it works for enterprises including retail businesses, museums, informal education, science centers, or universities – places where people don’t have access to a $6,000 GPU, or factory robotics, or 500 computer vision cameras.

– Ben Grossmann

How it works

As mentioned, NVIDIA does very strong work in digital twin simulations, and much of this work is built on USD. This is perfect for what NVIDIA needs, but OpenUSD is not yet a good format for lightweight real-time applications across multiple different platforms. Currently, it is not a good runtime format, as it lacks good compression and streaming. It is a good format for transferring assets between packages, but it comes from high-end film, not from runtime web environments; it still needs to build towards these capabilities in the future.

The Connected Spaces solution works with integrated consumer and developer interfaces. Magnopus has implemented a layered system inspired by the OSI network model: horizontal layers function independently at low levels and are integrated at high levels. This aims to enable users to easily create and publish digital content to customers on all platforms and devices.

In a containerized environment, they have built an architecture of cloud services that are scaled regionally across the globe on AWS. A commercial version of the full technology stack is in development. Many characteristics of these services mirror traditional web microservices and game server architecture, but they have been redesigned with unique characteristics to serve the needs of digital 3D content and environments.

The system works with a flexible SDK architecture that orchestrates logic, primitives, and capabilities between and across cloud, edge, fog, or client applications. The Connected Spaces Platform layer abstracts platform-specific capabilities into universal data structures, allowing them to be transported to different clients while remaining in sync. It also distributes logic and networking models to enable the most efficient and lowest-latency interactions for users.
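As a loose illustration of what ‘universal data structures that remain in sync’ could mean in practice, here is a hypothetical replicated-entity sketch – invented names, not the CSP API – where only changed properties cross the wire each network tick:

```typescript
// Hypothetical sketch of a universal, replicable data structure:
// platform-specific state is reduced to plain key/value properties
// that any client (Unity, Unreal, web) can apply locally.

type PropertyValue = number | string | boolean;

interface PropertyPatch {
  entityId: string;
  changes: Record<string, PropertyValue>;
}

class ReplicatedEntity {
  private properties = new Map<string, PropertyValue>();
  private dirty = new Set<string>();
  constructor(public readonly id: string) {}

  set(key: string, value: PropertyValue) {
    this.properties.set(key, value);
    this.dirty.add(key); // track what changed since the last network tick
  }

  // Called each network tick: emit only what changed, then clear.
  flush(): PropertyPatch | null {
    if (this.dirty.size === 0) return null;
    const changes: Record<string, PropertyValue> = {};
    for (const key of this.dirty) changes[key] = this.properties.get(key)!;
    this.dirty.clear();
    return { entityId: this.id, changes };
  }

  // Called when a patch arrives from another client.
  apply(patch: PropertyPatch) {
    for (const [k, v] of Object.entries(patch.changes)) this.properties.set(k, v);
  }
}

// One client moves an avatar; the patch is all that crosses the wire.
const avatar = new ReplicatedEntity("avatar-42");
avatar.set("x", 1.5);
avatar.set("waving", true);
console.log(JSON.stringify(avatar.flush()));
// -> {"entityId":"avatar-42","changes":{"x":1.5,"waving":true}}
```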

Uses @ Expo

For Magnopus, it is incredibly important that people have the flexibility to author in any tool and update in a different tool than the one they authored in. The use cases are wide and varied, from museums, public spaces, theme parks, and live events to local retail ‘Etsy’-style creators. At the other end of the spectrum, the system allows developers to build incredibly complex digital twins.

For Expo 2020 Dubai, Magnopus assembled a diverse team of over 200 engineers, designers, and artists working across seven countries and a network of companies. Together, they created Expo Dubai Xplorer – a city-scale cross-reality space where on-site and remote visitors to the Expo could connect in real time in shared experiences. It was available on iOS and Android, all connected by the technology stack and cloud-hosted services written by Magnopus.

The team created a huge assortment of interactive experiences and located them throughout the digital site to engage visitors. “We designed several different types of ‘activations’ to appeal to a wide range of users, with a range of interests from fun to functional,” explains Grossmann. Each experience type could be seen in AR on the site, or via mobile from anywhere in the world, and could be very precisely placed on the Expo site, to within about 30 cm or less.

Characters & Quests

Visitors could meet a character from history with expertise in specific subjects who would guide them to other nodes in a storyline that could be experienced non-linearly. The engagement nodes could vary from educational to entertaining, passively observational to socially interactive, and could be experienced in any order, so they could be aligned to the convenience of the visitor’s journey without forcing them to follow a certain order that might be inconvenient in the context of other activities. Visitors could even meet other characters in the same related story. This type of experience was great for connecting visitors with a series of small related experiences, revealing unifying themes across many different types of content. It was also great for creating themed ‘tours’ that could guide visitors to areas of interest along a particular subject.

Interactive Social Experiences allowed users across the physical and digital worlds to engage in fun 2–3 minute experiences that might explore narratives in the area around the visitor or explain architectural elements.

All these experiences could be designed synchronously or asynchronously, depending on the content type. For example, a new visitor could see the activation start fresh when they walked up to it, or several visitors could see the same thing happening at the same time regardless of when they joined the experience. (This is often important for experiences that might sync to an event taking place in the physical world.)
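The distinction is easy to see in code. In this hypothetical sketch – not drawn from the actual platform – an asynchronous activation measures its timeline from the visitor’s arrival, while a synchronized one derives it from a shared epoch, so every client computes the same moment:

```typescript
// Hypothetical sketch of the two start modes described above.
// An async activation starts its timeline when you arrive; a synced one
// derives its position from a shared epoch, so everyone sees the same
// moment regardless of when they joined.

const LOOP_SECONDS = 180; // a 3-minute looping activation

// Asynchronous: timeline position is measured from this visitor's arrival.
function asyncPosition(arrivalMs: number, nowMs: number): number {
  return ((nowMs - arrivalMs) / 1000) % LOOP_SECONDS;
}

// Synchronous: timeline position is measured from a global epoch shared
// by every client, e.g. the moment the activation was published.
function syncedPosition(epochMs: number, nowMs: number): number {
  return ((nowMs - epochMs) / 1000) % LOOP_SECONDS;
}

const now = Date.now();
console.log("async visitor sees t =", asyncPosition(now - 5_000, now));         // 5s in
console.log("synced visitors all see t =", syncedPosition(now - 125_000, now)); // same for everyone
```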

Users had the opportunity to work together to construct vibrant coral colonies, placing digital polyps and sea sponges to grow the reef and uncover indigenous fish species. This activation served to raise awareness and encourage a personal connection with the endangered coral reef ecosystems worldwide while also spotlighting the UAE’s dedication to nurturing 1.5 million new coral colonies in the Arabian Gulf by 2025.


In order to better appreciate the design influences of Al Wasl Plaza, users could collect animated peacock feathers on a journey toward the Plaza and then see them assembled into a larger-than-life peacock, revealing its stylistic imprint on the giant dome.

Portals were available in both the physical and digital worlds.
Smart displays with cameras on the site ran custom AR apps connected to the platform, so they could act as viewports showing the digital content layer overlaid on a physical camera feed of the site – like a “magic window” revealing a layer of the “connected world.”

AR experiences could be created and placed anywhere on the site. These gave visitors a virtual portal they could step through with their phones, to be teleported to a different place or time relevant to the physical location of the portal. The insides of the portals could be fully 3D environments, or simply 360º videos and images captured anywhere in the world.

Points of Interest (POIs)
These were very simple, easy-to-author informational or educational cards that could be anchored to a specific location on the site. POIs could contain pictures, videos, and text explaining a feature of the site, or a bit of historical context – ideal for curious visitors interested in the educational backstory of the site. They could be authored by anyone at Expo in minutes via a web interface with a map, and were accessible on the website in real time. They were integrated with the standard visitor experience map in the mobile app, and supported 3D augmented reality wayfinding to find and navigate to POIs across the site.

Spectacle AR Experiences
These were linked to prominent Expo site features and buildings to create surprise and delight among visitors. The content was automatically streamed down to the user’s device only when they approached a location, so the application’s initial install size could be tiny.
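A minimal sketch of that proximity trigger – with invented coordinates, radius, and asset URL, not the actual Expo implementation – might look like this:

```typescript
// Hypothetical sketch of proximity-triggered streaming: heavy AR content
// is fetched only when the visitor gets near its anchor on the site.

interface Activation {
  id: string;
  anchor: { x: number; y: number }; // site coordinates in meters
  assetUrl: string;
  loaded: boolean;
}

const TRIGGER_RADIUS_METERS = 75;

function distance(a: { x: number; y: number }, b: { x: number; y: number }): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

async function streamNearby(visitor: { x: number; y: number }, activations: Activation[]) {
  for (const act of activations) {
    if (!act.loaded && distance(visitor, act.anchor) < TRIGGER_RADIUS_METERS) {
      console.log(`downloading ${act.assetUrl}...`); // fetch(act.assetUrl) in practice
      act.loaded = true; // the app ships tiny; content arrives on approach
    }
  }
}

const dragon: Activation = {
  id: "water-dragon", anchor: { x: 120, y: 340 },
  assetUrl: "cdn.example.com/water-dragon.glb", loaded: false,
};
streamNearby({ x: 150, y: 300 }, [dragon]); // within range: the dragon streams in
```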

The Water Dragon, which had a nest near the heart of Expo.

For example, a Water Dragon made its nest at the mesmerizing water feature near the heart of Expo. It would fly around and make a grand aerial entrance to deliver an up-close encounter for attendees. Elsewhere, a larger-than-life Falcon would swoop down and transform into the UAE pavilion, revealing the design intent of the pavilion’s architects. A Rocket would launch from the Mobility Pavilion, drawing attention to the UAE’s space missions to Mars.

The activations were all connected in a “gamification” layer that rewarded visitors for engaging with the World Expo in meaningful ways. Each engagement earned visitors “Seeds of Change”, which they could pledge to real-world causes at virtual ceremony locations; these pledges translated into actual donations made by Expo 2020 Dubai to charitable organizations. This created a positive cycle of play that rewarded users for their continuous curiosity and, in turn, empowered them to help make the world a better place.

Looking forward

Magnopus has established itself as a trailblazer in the world of spatial storytelling. Their unmatched ability to fuse technology and narrative has elevated interactive experiences to new heights. As they continue to push the boundaries of what is possible, Magnopus remains at the forefront of the industry, shaping the future of entertainment while also making their work open source and accessible to everyone.