Hello everyone.
In this issue, we’ll talk about the behemoth in the room, Android XR, show you our latest video news compilation, introduce you to Eva Vermeulen in our creator spotlight, and share more tutorial videos from Polygonflow.
Remember, if you want access to the parsach platform you need to request an invite on Discord, TikTok, Instagram, or YouTube because we’re currently invite-only.
Now, let’s get into the news 📰
Google's big move to XR
There’s no bigger news this week than Google’s announcement of its new Android XR platform. As we’ve been reporting in previous editions, this pivotal development promises to reshape the landscape of augmented and virtual reality and signals a major strategic move into immersive computing. With that in mind, let’s break down some insights.
Strategic Partnerships
The Android XR platform will debut on Samsung’s Project Moohan device, marking a significant collaboration between the two tech giants. This partnership aims to accelerate XR adoption and create a robust ecosystem for immersive computing, built on:
Unified XR Ecosystem: A comprehensive platform supporting both augmented and virtual reality experiences
Advanced AI Integration: Generative AI capabilities embedded directly into the XR environment
Optimized Performance: Designed to deliver seamless, high-fidelity immersive experiences
Developer-Friendly: Comprehensive tools and frameworks for XR application development
Industry Implications
The launch of the Android XR platform represents more than just a new product: it signals Google's strategic vision for the future of digital interaction. It lets Google leverage its extensive Android ecosystem and position itself at the forefront of the XR revolution.
Android XR aims to democratize immersive technologies across multiple domains, including gaming, education, and professional collaboration. By offering a more integrated and cohesive platform, Google ensures cross-device compatibility and creates opportunities for developers and content creators.
This strategic initiative promises to break down barriers in digital interaction, offering a unified ecosystem that adapts intelligently to user needs and pushes the boundaries of augmented and virtual reality experiences.
Market Outlook
Analysts suggest this move could significantly accelerate XR adoption, potentially transforming how we interact with digital content and virtual environments. Combining Google's software expertise and Samsung's hardware innovation creates a powerful ecosystem for immersive technologies.
Google's Android XR platform is poised to capture a massive $300-500 billion market opportunity by 2030. Through a unified, developer-friendly ecosystem, it strategically targets the enterprise, consumer entertainment, education, healthcare, and retail sectors.
In other news
Behemoth, Trombone Champ, Home Sports, Beat Saber and more!
This week we’ll meet an artist who bridges the gap between creativity and technology: Eva Vermeulen.
Eva takes us on an exciting journey through her creative process, where she turns AI-generated visuals into jaw-dropping 3D worlds using innovative techniques like Gaussian Splatting.
Let’s meet her.
Q: Can you tell us a bit about your professional journey and some of the projects you’ve worked on?
A: I’ve always been passionate about creating, a love that started at a very young age. After high school, I studied Product Design at Design Academy Eindhoven, where I built a strong foundation in color, composition, concept development, and material exploration. Following my studies, I transitioned into the theater industry, working on props, sets, and costumes for various productions. Later I moved to Sydney, Australia, to deepen my digital skills by studying Game Art and Animation at the Academy of Interactive Entertainment.
During this time, I worked on VR game projects like Wyrmwood VR and Nekrotronic, as well as VR films such as Awake: Episode One and the Arnhem Land VR Cave, designed for museum installations. Returning to Amsterdam, I shifted to working with digital creative agencies, contributing to WebGL-based experiences for projects like Making Maisel Marvelous, Tomorrowland, the World Expo Dubai, and websites for brands like Panasonic, Adobe, and Black Panther. Each step in my journey has allowed me to combine my creativity and technical skills across different mediums.
Q: How did you go from creating 2D generative art in tools like Midjourney to turning it into full 3D environments in Unreal Engine?
A: Since the emergence of AI-generated art, I’ve been exploring ways to transform these stunning 2D images into immersive, interactive 3D worlds. I’ve experimented with techniques like projection mapping and using depth maps to extract 3D elements from 2D images. My recent experiment with Gaussian splatting is another step in that exploration.
When Runway released its new camera controls feature, I saw an opportunity to test how it could be used for this purpose. With the ability to create rotational renders of AI images, I could effectively 'photoscan' the images and turn them into 3D data. While other tools also offer rotational video rendering, I chose to work with Runway for this project. Once I had the rotational render, I fed it into Gaussian splatting tools like Post Shot, which did a great job generating a 3D splat from the 2D image. From there, I took the 3D Gaussian splat and brought it into Unreal Engine, where I used Luma AI to relight the scene. By utilizing Unreal’s sky system, fog, post-processing effects, and Megascans assets, I was able to seamlessly integrate the splat into a fully realized real-time environment.
Q: We saw you used Gaussian Splatting in your recent work. How does that help take AI visuals and turn them into 3D spaces?
A: In my recent work I used Gaussian splatting to transform AI visuals into 3D spaces. This technique is particularly useful for background elements where interaction isn't needed. It allows me to create atmospheric environments efficiently, with less computational and memory overhead compared to traditional 3D meshes. Gaussian splatting is great for distant elements, providing a lightweight representation, while closer, interactive parts can be modeled with traditional geometry. This approach balances performance and visual quality. If you have the right input, like a rotational video, creating a Gaussian splat is quick and straightforward, making it a fast workflow for prototyping. Overall, it's a powerful tool for crafting immersive 3D spaces efficiently.
Q: With tools like Runway improving every day, what do you think is next for combining AI and 3D art? How do you see it changing the way we create?
A: I’ve been closely following developments in AI tools that generate 3D objects from images and text prompts, such as ComfyUI, Meshy, and others. So far I haven’t been wildly impressed with the results - the fidelity of meshes and textures still feels lacking, and there’s plenty of room for improvement. However, these tools could serve as a solid base for someone with 3D skills, allowing them to sculpt and texture assets to a higher quality. I do believe these AI 3D tools will improve over time, especially given how much they’ve advanced in just the past year. Last week, I saw some intriguing examples of AI-generated interactive 3D worlds from platforms like World Labs and Genie 2, which I’m excited to test further. For me, interactivity is key - while an image or video can be striking, adding interactivity to a scene really brings it to life. I think these tools will continue to evolve and, for us as creators, that means staying on top of the latest tech and using it to build more in less time. Efficiency will be at the heart of how we create in the future.
One last thing
Continuing our collaboration with Polygonflow, we’re showcasing a new video to see what Dash is capable of. It covers some of Dash’s biggest tools, explains how they can be used to streamline environment creation, and shows how users can build a UE5 environment from scratch faster than ever before.
So go ahead and check out their Introducing Dash 1.8 video if you want to learn more.
Until next week! Don’t forget to connect with us on Discord and keep up with the news on TikTok, Instagram, and YouTube.
Thanks for the support 🫶
The parsach team