Can we pump the brakes on the metaverse talk…please?

A screenshot from Second Life of my robot avatar standing on water in the open sea, peering out across the horizon.

I’ve had this post titled (but unwritten) and sitting in my WordPress drafts since January of 2022.

You remember…it was the height of Metaverse fever. It was the Next Big Thing(tm) and you were a fool if you weren’t making some sort of play to build “a metaverse.” (have I mentioned I hate it when someone refers to A metaverse…there is only one, there cannot be many, it just…sigh.)

Of course the majority of Metaverse plays were secretly NFT plays, as if a web of virtual worlds couldn’t possibly exist without web3 tech. Much of the hype came from the big social network with the big blue app and the guy who renamed his company because the Metaverse was inevitable. He then went on to spend billions trying to build a singular virtual world completely controlled by him. Not the Metaverse.

Meanwhile…real work is being done all over the web to build unique and interesting 3D spaces that will, hopefully, connect. I’ve gone on about this before but for the Metaverse to exist a few things have to happen.

  1. Open Avatar Standards
  2. Open and Accessible 3D worlds
  3. Self hosting

There must be an open avatar standard. We must have the ability to create a 3D representation of ourselves that persists across worlds. You can decide if it looks like you or a giant pink raccoon, but that version of you should be able to enter any 3D space without issue. ReadyPlayerMe was the closest but they were recently purchased by Netflix and their services shut down soon (Jan 31st!). Fortunately, there’s work being done. See the Reference Canonical Skeletal Framework and the KHR Character and Avatar Extension Set.

Any world must be open and easily accessible. I was pretty excited by Mozilla Hubs because it was so easy to launch a world and get started, all via the browser. Unfortunately Mozilla abandoned it BUT open sourced the technology so it lives on as the Hubs Foundation. I’m also excited by Arrival.space and what they’re doing with web-based 3D spaces and Gaussian splats, all easily experienced, again, in your web browser (on your computer/phone or in a headset). These spaces must remain accessible from a variety of devices.

A screenshot of my avatar standing in the middle of my Arrival.space.
My ReadyPlayerMe (RIP) avatar standing in my Arrival.space running in a desktop browser.

Like the 2D web any of us should be able to host our own space. Many will opt to use a hosting service or a prebuilt world that they customize but the option to build it and host it yourself from the ground up must exist to create a vibrant community of virtual worlds.

I came back to this post with the recent announcements from the guy who renamed his company and the initial feeling that we were, once again, heading for a VR/Metaverse winter. We’re not. Valve has the Steam Frame coming soon. Google, for now (I know I know), is pushing ahead with AndroidXR. Open source tools and game engines are making it easier and easier to build things once and publish to multiple platforms (until standards unite them all!) allowing for rapid experimentation. Organizations like ImmersiveX and the Virtual Worlds Museum (2D site here) are actively building and evangelizing and engaging in multiple worlds.

While I feel for all of those who built something only to have it bought up and then killed, and for the numerous folks laid off, this is likely all better for the greater ecosystem. The giant tech company was taking up a lot of space and now, perhaps, we can grow the Metaverse we want and deserve. There’s still a long road ahead of us but the foundation is being built now.

Two Weeks of Conferences – Different but the Same

I’m in the middle of an already long week of IAAPA meetings, gatherings, and sessions in the heart of Florida Vacationland that contrasts starkly with last week’s Immersive X conference, which took place entirely in virtual worlds accessed via headset or computer.

Into the Virtual

Immersive X is a 3-day gathering of talks, tours, and social activities held across different platforms for building experiences in virtual worlds: VRChat, Engage, Arrival.space, Spatial.io. Each platform was chosen for its strengths and the set of tools that best served the hosted session. Attendees dressed as humans wearing human things but also as raccoons, chile peppers, tiny foxes, robots, and more! This is all expected in the virtual world and thus having a conversation with a robot about art inspired by Arabic writing is perfectly mundane (in the best sense of the word).

The sessions covered a variety of topics across art, technology, humanity, and social well-being. The schedule was packed tight and, as with any good conference, there was no way to attend every session. Here’s a screenshot of the sessions I planned to join:

A screenshot of my schedule of sessions to attend at the Immersive X conference

Over the course of 3 days I was able to continue working and continue being with the family while dipping into a scheduled session for 45 minutes at a time. After the first few talks I began to recognize familiar avatars from previous talks noting who was interested in similar topics. Unfortunately I didn’t prioritize any of the social gatherings so I had little opportunity to chat with other attendees outside of the sessions.

Two of the standouts (of the talks I attended) were a session about building a world for an artist’s work and another about creating venues for live music and gathering.

During the Ink Never Dies session we were guided through a world built to represent ancient Arabia. As you walk through the world, golden Arabic characters appear before you while the artist’s voice fills the air, speaking about their art and the influence of Arabic culture.

A screenshot of a world built for artist Karim Jabbari to showcase their work with Arabic calligraphy. Large, golden Arabic lettering floats above the ground as golden particles float around it. There's a multi-tiered fountain in the background.
Arabic calligraphy hovering among sparkly particles

The world is still active and you can (and should) visit it at any time: Vertical Horizon

A screenshot of the Vertical Horizon world with a group of avatars in the foreground awaiting a tour.

The Show Must Go On began as a more traditional talk in an amphitheater but then led to a world hop where we could experience some of the venues that were created for live music performances.

A screenshot of an amphitheater with an avatar giving a talk at the podium in the front and a large presentation screen behind them.
An amphitheater and audience during The Show Must Go On presentation at Immersive X 2025

Each world was purpose built for a musical artist to fit their style and aesthetic. The spaces live on after the live performance with a recording that plays and can be enjoyed any time.

A screenshot of the Oxymore world in VRChat with avatars dancing as a pre-recorded performance plays on the stage.
A world built for Jean-Michel Jarre as an homage to Pierre Henry.

My favorite venue was Oxymore (named for the album) and, as we learned, there were custom avatars created for the live performance so the guests could dress to match the world. VRROOM built this world for Jean-Michel Jarre to perform an homage to Pierre Henry.

Back to the “Real”

IAAPA is a week-long conference for the theme park, attractions, etc. industry that takes place at the Orange County Convention Center in Orlando, FL. So there I was, in Orlando at IAAPA (I had actually started writing this in the middle of IAAPA but I’m only finishing it up now), among people dressed as people to convince each other that they are trustworthy and safe to work with. The week was packed with meetings, gatherings, and sessions to attend because a lot of money was spent to be there and thus every minute should be filled to justify the expense. This is the way of the business conference.

A screenshot of my calendar from IAAPA week. Details blurred to protect the innocent.
A glimpse of my calendar from IAAPA week. Details blurred to protect the innocent.

Like any industry gathering you are presenting your best self all day to everyone and anyone. There are brief breaks and maybe you can find a corner where you can stop smiling for a minute, but for the most part you are engaged and hyper-aware of how you present yourself. Then a quick change of clothes and off you go to a party or demo where you continue until late in the evening. You drink a little but not too much (you hope), you chat, you laugh, you recount that one time a project went horribly wrong but then, in the end, you made it work. Then you go to bed and wake up and do it again. Even the extroverts are exhausted after a couple of days of this.

A photo of a panel presenting at IAAPA 2025. The session was titled "The Winning Formula: Combining Creativity and Data to Craft Scalable Immersive Experiences" which is a mouthful. The panel is 4 white males which continues to be a problem.
Photo from “The Winning Formula” session at IAAPA 2025

But we keep doing it because being in a physical space with other people is still hard to replace.

Addition not Substitution

This is not about which experience is better. They both have reasons for being and they are both valid and worthwhile. Humans have evolved to exist together in physical space and replicating that to any level of success is quite a feat.

There is a comfort in the casual pace of the virtual conference. I’m not rushing to be anywhere, there’s no travel time, no crowd to push through. I put the headset on (or just load up the world on the computer) and teleport to the venue. If I’m early I can chat with others and find a seat, or stand (my avatar can stand forever). My avatar handles the presentation of me. I don’t feel pressure to put on appearances. I can sit comfortably in any room of the house. I can enjoy and absorb the content in a manner that suits me at the moment. When the presentation is done, more often than not, the presenter can open a portal to the very project or world they have discussed and we can all hop through and experience it instantly. When I’m ready, I hop to the next one or jump out and take care of work, or lunch, or home stuff.


There is excitement in the madness of the physical conference. A week’s worth of potential bottled up and waiting. Who will you see? What surprise thing will you happen upon on the show floor? Why is so-and-so hanging out with so-and-so? You can’t know who is immediately around you and thus you don’t know what interesting conversation you may end up having. Walking around a show floor you can smell, touch, taste, and maybe even climb a thing. The serendipity is the true secret sauce of the physical gathering.

A virtual conference handles the talks, presentations, and scheduled events just as well as a physical gathering. If it is well designed the sights and sounds will convince your brain that you are in another place among a crowd of people with similar interests. If that were all there was to it, the virtual gathering would be the preferred format for any conference. What the virtual has yet to capture are the moments between the scheduled sessions. Those chats in the hall, running into someone at a local lunch spot, grabbing a coffee for a colleague you haven’t seen in a while. The friction of the real world creates moments that connect us.

We can have both. IAAPA is a conference for the business of in person entertainment. Theme parks, water parks, museums, zoos, FEC’s (family entertainment centers), etc etc. It’s all about going to a place and doing a thing in the physical world. That’s not to say that there could not or should not be a virtual component to it. Not everyone can travel to Orlando for the week. The show is vast and could be hard to navigate for some. The show is overwhelming and may be overstimulating. It’s entirely possible for IAAPA to offer a virtual ticket to a virtual space where education sessions are streamed and people (and their avatars) can book virtual conference space. But what about…? No…video conferences aren’t the same as being in a shared, virtual space.

Immersive X could be virtual first with a physical component for those who can make the trip. The whole conference is organized in Europe and on CET. I’m thankful I was able to attend (even if I did have to get up at 6am PST sometimes) and if it wasn’t virtual I certainly would not have made it to Europe. It’s conceivable that those more local to the EU could gather in person and attend either via a streamed feed from the virtual world or, better yet, a shared physical space that mirrored a virtual space. In person and in VR. Complicated…maybe, but doable.

So what?

The fortuitous timing of these two shows happening back to back provided a fascinating glimpse into how humans connect in real and virtual space, where the strengths and weaknesses are in both scenarios, and how both could improve and exist simultaneously but also as hybrids of each other. I’m excited for more virtual conferences to emerge and I’d love to see long running physical conferences like IAAPA embrace a virtual component.

Phones in the Park…

This is the second of two recent articles I wrote for The Bezark Company to be published in Blooloop. The published version is here: https://blooloop.com/theme-park/opinion/phones-in-theme-parks/

Theme parks are built as transportive worlds that bring us together and lead us on adventures with our families and friends. As technology and devices proliferate in every aspect of our lives, it’s increasingly noticeable that groups of people, in a place where they are meant to be together, manage to spend much of that time apart. Phones are out, earbuds are in, watches are flicked and tapped as we check in on the world we’re momentarily trying to escape. As experience designers we want guests to forget about the outside world and really immerse themselves in a crafted world, whether a theme park, museum, pop up, or something in between. Not everyone buys in, of course, but the hope is that the staunchest objector to reality suspension will crack a smile here and there. The portals to reality are in the palms of our hands and they encroach upon the designer’s well thought out intentions.

Tech develops rapidly. In the same way that home theater systems have sufficiently mimicked the movie theater experience, increasingly complex and immersive experiences are available at home. As virtual reality emerged it required expensive, heavy hardware and complicated installation. As the tech evolved, costs came down, hardware got lighter, and setup became easier. As VR was enjoying a brief moment in theme parks it was also becoming attainable to the home user and, as it turns out, it’s a better home experience. In the never ending search for new ways to tell stories and enthrall guests, it’s tempting to enlist the latest gadget to draw them in. Our guests aren’t asking us to bolt the latest tech fad to our existing attractions. Look at what the guests are already using and meet them there.

The present day dilemma is the tiny computer, and its accessories, that we all carry in our pockets. At the first sign of indifference the phone is unlocked and the guest is lost in an endless scroll of…whatever. Along with the phone come earbuds, further sealing guests off from the rich world around them. Many attempts to incorporate these devices are being made, with varying results. In fact, it’s almost impossible to spend a day in certain parks without the phone to guide you, but it can be much more than a fun management system.

A great place to start is the queue. Whether it’s a traditional maze of chain and stanchions or a pre-show waiting zone, when the guests are asked to wait, the phones come out (even after we spent all that money on that incredible pre-show media loop). Every queue could offer a custom experience that can be accessed via phone. While asking guests to use a specialized app adds friction, there are options for rich, interactive experiences that don’t involve building software for specific platforms. This makes updating and upgrading easier for the operations team and seamless for the guest. These added experiences must be easy to access and easy to engage with while expanding the world and adding to the story.

Let’s give the eyeballs a rest and really immerse the guests as they move through the world. There’s at least one person in every group who has their earbuds in during their entire visit. Audio based experiences are highly compelling and underused. Build in soundscapes that can only be heard by those who choose to hear it. Tell new stories. Tell old stories. Enrich the world that’s been built. Audio is a powerful tool for delivering narratives and with location based triggers there are opportunities for some creative wayfinding.

Tie these mini experiences into the rest of the park and there’s a more compelling reason for the guest to engage. The stories don’t have to connect to each other but those that do deepen the relationship between our built worlds and the guest. Beyond the story opportunities we may entice folks to engage with virtual gifts, discounts on food and merchandise, connection to a larger game/story, opportunities to partake in exclusive events… Guests are going to use their devices no matter what so let’s give them a reason to interact with the world we’ve built instead of only using them to fill the void before the loading zone.

Attempts are being made. Augmented reality overlays and selfie filters are fun snacks but there must be more. Disney created the DataPad to enhance the guest’s experience and help them feel more integrated into day to day life on Batuu when they opened Galaxy’s Edge. There we are able to interact with the physical surroundings and perform quests that add context and backstory to the attractions, making them feel richer and more alive. It is a great example of how to use a guest’s device to further engage them and expand their experience. It lives on, buried in the Play Disney app, though virtually unchanged from opening day. Meow Wolf also attempts to engage with their own app; though it’s not required to enjoy the experience, it broadens and connects the worlds. The unspoken promise of something like these companion apps is that they will evolve and grow over time, which is important not only for returning guests but to keep the world active and alive.

This all walks a fine line. While we don’t want to encourage people to be buried in their phones all day, it is our present reality. Future consumer tech is on its way and there will be more devices to further separate guests from their experience. There’s a possibility that we’ll all be wearing some version of augmented reality glasses in 5 to 10 years. The screens will be on our faces and the temptation to dip back into the endless pit of the online world will be extreme. We need to lay the foundation now for what guests can expect alongside the twisted steel and fiberglass and there needs to be real effort behind it. Treat this weird virtual space as another show or attraction and make a commitment to support it, evolve it, and measure real results. Now is the time to craft stories and games that exist between the two minutes of thrill that people are seeking. Fill the liminal spaces of your park, museum, or theater with a story layer that keeps guests engaged and, yes, probably spending more money.

Computers are Eating My Job!

This is the first of two recent articles I wrote for The Bezark Company to be published in Blooloop. The published version is here: https://blooloop.com/technology/opinion/generative-ai-potential/

There’s a scramble within the creative community to understand the rapid rise of machine-generated content and what it means for the people who make a living crafting stories and building worlds. Most people call it Artificial Intelligence (AI) – it’s not. Rather, the words, drawings, photographs, songs that are being pumped out are the results of large language models (LLMs), and the creative world has been caught off guard with their sudden emergence, quick advancement, and seemingly boundless generative abilities.

The themed entertainment industry consists of some of the most creative folks in the world, and this new development has, understandably, unnerved many of them. Every new technology that enters the mainstream brings with it a certain amount of fear, uncertainty, and doubt. The machines know nothing and understand nothing but produce convincing and sometimes impressive material based on our input. They are pumping out images, music, video, and 3D models with the most minimal of text prompts. This endlessly generated art seems to be getting better every week and there’s real concern that the value of human creativity is going to plummet.

In 2017, a handful of Google engineers released the Transformer architecture to the world. Seven years later, all of the latest text, image, audio, and video generating machines are built upon Generative Pre-trained Transformers (GPTs) utilizing large language models to whip up content in seconds. Twenty years before the advent of Transformer-based machine learning, IBM’s Deep Blue beat chess Grandmaster Garry Kasparov. A computer had beaten the best chess player in the world. The chess world reeled, assuming there was no point in humans playing any further. Don’t worry – people still play chess and now use these powerful machines to help develop new strategies. In 2016, DeepMind’s AlphaGo beat top player Lee Sedol in a series of Go matches. Go is a complicated game with an impossible number of possible moves. The Go world reeled at the human’s defeat. Don’t worry – people still play Go, and the machine-learning algorithms have taught us new strategies and even resurrected old strategies that were thought to be outdated.

It feels inevitable that machine-generated art and ideas are going to flood the world, but if we can learn anything from the Chess and Go communities it’s that these machines are just tools. Like the printing press and desktop computer before them, they are assistive and empowering. Remember, the machines are not thinking. They know nothing, but they are fast and can aid in ideation and prototyping in ways we haven’t seen before. Just as it has with each technological advancement, the landscape of work and career will change not only in the creative fields but across all industries as the possible applications for machine learning are wide reaching. Creatives should not fear the generative capabilities of machines but harness them. They help us fail fast so we can succeed sooner.

While it’s great for headlines and flashy news bits, generative art is the least interesting thing that will come out of all of this. There’s growing concern that the focus on LLM-based technologies is pulling resources from real advancement. There have already been major announcements and breakthroughs for protein folding, material discovery, molecular dynamics, medical imaging, understanding whale language, etc. The ability to feed incredibly large datasets into these algorithms is a boon to the scientific community and should prove beneficial in the not-too-distant future.

None of this comes without challenges. Jobs are going to shift as we adapt to these new tools. Energy consumption while training and running these models is a huge concern. We may very well be in the midst of another hype cycle and these advancements that feel like huge leaps may hit a yet unforeseen barrier that stalls progress for another 10 years. Techno-optimists see solutions coming to the energy problem. More efficient hardware and increased low-to-no impact energy generation might make this technology more sustainable. Whatever happens, companies should be proactive in educating employees about these available tools and how to use them effectively, securely, and responsibly. 

Yes, the big players creating these LLMs have a lot to say about Artificial General Intelligence and the inevitability of the machines doing almost everything, but they need to pump up that inevitability to satisfy investors and markets. Ignore their bluster. This may be the beginning of another tectonic shift in human/computer interfacing, but we will all do well to focus on what’s available now and how to use these generative tools as another brush, instrument, or pencil in our trusty and worn backpacks.

XShot VidCon Gallery

In the spring of 2022 I was approached to build out a system for a “digital shooting gallery.” XShot designed and built a booth that would contain two shooting galleries using their foam dart weapons.

A photo of the XShot shooting gallery booth at VidCon 2022. It's mostly white with XShot branded graphics on the sides and a clear case displaying all of the XShot dart shooters.

The galleries would run simultaneously and groups in the galleries would score points for every target they hit. Scores were only kept for each group (not individuals). Outside of the galleries there was a monitor mounted above that displayed the high scores for groups that came through the galleries that day.

The two galleries had unique designs. Gallery 1 was stylized to feel roughly like an 8bit environment. The monitors were masked to give them unique shapes and the characters on the monitors were 8bit style sprites that would bounce or slide around.

A photograph of the first shooting gallery done in an 8bit style. There were monitors hidden around that displayed 8bit characters. If the player hit the monitor they scored a point for their team.

Gallery 2 was designed like the back alley of a city. The targets were signs or graphics one may see in the big city.

A photograph of the second shooting gallery done in a back alley, city style. There were monitors hidden around that displayed graphics and characters. If the player hit the monitor they scored a point for their team.

Each gallery would take up to 4 players who could choose the style of weapon and shoot constantly for a limited time. The game was started by a host who would hit a hidden button that set the system to start after a brief countdown. Each monitor had a Raspberry Pi attached to the back of it along with a small vibration sensor. The sensor was sensitive enough to detect when the monitor was hit and register the score with a central computer running a scorekeeping app built in Unity. Each Pi was wired into the network so data was reliably sent to the computer. It would have been madness to depend on wifi on a trade show floor. The main computer was also displaying the high scores to a monitor above the booth and two monitors that players could look at upon exit from the galleries.
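
For the technically curious, here’s a minimal sketch of what each monitor’s hit sender could have looked like. The production code isn’t published, so the GPIO pin, debounce window, UDP port, message format, and the choice of the pigpio library are all assumptions for illustration.

```cpp
// Hypothetical per-monitor hit sender for the Pi behind each monitor.
// Pin, host, port, message format, and pigpio usage are assumptions.
#include <pigpio.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstdint>
#include <cstdio>

constexpr unsigned SENSOR_PIN   = 17;             // vibration sensor input (assumed wiring)
constexpr char     SCORE_HOST[] = "192.168.1.10"; // central Unity scorekeeper (placeholder)
constexpr int      SCORE_PORT   = 9000;           // placeholder port
constexpr int      MONITOR_ID   = 3;              // unique per Pi

int main() {
    if (gpioInitialise() < 0) { std::fprintf(stderr, "pigpio init failed\n"); return 1; }
    gpioSetMode(SENSOR_PIN, PI_INPUT);
    gpioSetPullUpDown(SENSOR_PIN, PI_PUD_DOWN);

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(SCORE_PORT);
    inet_pton(AF_INET, SCORE_HOST, &dest.sin_addr);

    uint32_t lastHit = 0;
    while (true) {
        if (gpioRead(SENSOR_PIN) == 1) {
            uint32_t now = gpioTick();            // microseconds since boot
            if (now - lastHit > 500000) {         // ~500 ms debounce: one dart, one point
                char msg[32];
                int len = std::snprintf(msg, sizeof msg, "HIT %d", MONITOR_ID);
                sendto(sock, msg, len, 0, reinterpret_cast<sockaddr*>(&dest), sizeof dest);
                lastHit = now;
            }
        }
        gpioDelay(2000);                          // poll every 2 ms
    }
}
```

In this sketch the Unity scorekeeper’s counterpart would simply be a UDP listener that adds a point to the active group whenever a HIT message arrives; the sender compiles with g++ and -lpigpio -lrt -lpthread and runs as root.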

There were 28 Pis used across the two galleries and the game ran for 3 days, the length of VidCon 2022. Here’s a walk through of the game as it was played:

Seconds to Last

A photo of Cynthia Minet discussing her sculpture "Seconds to Last" with a group of gallery visitors
A photo of the rhino sculpture fully lit from within.
A photo of the rhino sculpture lit from within half way through the lighting sequence.
A photo of the rhino sculpture with its internal lights off.

In a departure from previous materials, Cynthia Minet used discarded tents to create this life size representation of the nearly extinct Northern White Rhinoceros. In this installation, Seconds to Last, Cynthia wanted to use light to convey the disappearance of these huge beasts.

Due to the volume of the sculpture, and in an attempt to avoid hot spots, I used LIFX smart bulbs (seven of them, their colors referencing the seven energy chakras) to provide the internal lighting. To program the sequence of the bulbs fading off I installed Home Assistant on a Raspberry Pi and set this up like a typical smart home automation. This allowed us to easily program a time-of-day sequence and let it run during gallery hours but turn off for the night.
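
To make the idea concrete: the installation itself ran as a native Home Assistant automation, but the same time-of-day fade could be driven by a small external program talking to Home Assistant’s REST API. The sketch below is only an illustration of that alternative; the host, token, entity names, and timing are placeholders.

```cpp
// Illustrative only: the piece used a native Home Assistant automation, not an
// external program. This shows the equivalent light.turn_off service calls made
// over Home Assistant's REST API. Host, token, and entity ids are placeholders.
#include <curl/curl.h>
#include <string>
#include <unistd.h>

// POST a JSON body to a Home Assistant light service (e.g. "turn_off").
bool callLightService(const std::string& service, const std::string& json) {
    CURL* curl = curl_easy_init();
    if (!curl) return false;
    const std::string url = "http://homeassistant.local:8123/api/services/light/" + service;
    curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Authorization: Bearer YOUR_LONG_LIVED_TOKEN");
    headers = curl_slist_append(headers, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, json.c_str());
    const CURLcode res = curl_easy_perform(curl);
    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return res == CURLE_OK;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    // Fade the seven bulbs off one at a time, each over five minutes (timing invented).
    for (int i = 1; i <= 7; i++) {
        const std::string body =
            "{\"entity_id\": \"light.chakra_" + std::to_string(i) + "\", \"transition\": 300}";
        callLightService("turn_off", body);
        sleep(300);  // let each fade finish before starting the next bulb
    }
    curl_global_cleanup();
    return 0;
}
```

A native automation remains the better fit for gallery hours, since Home Assistant handles the daily schedule and restarts on its own; the script above just makes the same service calls explicit.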

A photo of the clay model for the rhino in the foreground and the finished piece in the background
The clay model that Cynthia made with the finished sculpture in the background

Visit Cynthia’s Seconds to Last page for a full description of the work.

Jacked: Panthera Atrox

Another collaboration with Cynthia Minet. Jacked: Panthera Atrox is the latest of her incredible animal creations built from reclaimed plastic, this time animated with a mechanism similar to the pump jacks used to push oil out of the ground. The lioness’ head tilts up and down with the rhythmic movement of the pump arm.

A photo of a sculpture made of reclaimed plastics and LED lights. It's a representation of an extinct lion, Panthera Atrox. The photo shows it on display in the window of the Craft Contemporary museum in Los Angeles.
Panthera Atrox on display at the Craft Contemporary in Los Angeles

I worked with Cynthia to install the LED lighting throughout the sculpture and programmed the colors using an Arduino and the FastLED library. This allowed us to tune each individual LED’s color based on its location within the sculpture.
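
As a flavor of the approach, here’s a minimal Arduino/FastLED sketch in the same spirit. The LED count, data pin, chipset, index ranges, and colors below are placeholders, not the values used in the actual piece.

```cpp
// Minimal sketch of per-location color tuning with FastLED.
// Counts, pins, chipset, and colors are placeholders for illustration.
#include <FastLED.h>

#define NUM_LEDS 150   // assumed total count
#define DATA_PIN 6     // assumed data pin

CRGB leds[NUM_LEDS];

// Hypothetical mapping of LED index ranges to regions of the sculpture.
CRGB colorForPosition(int i) {
  if (i < 40)  return CRGB(255, 120, 0);   // e.g. head/neck: warm amber
  if (i < 100) return CRGB(255, 40, 40);   // body: red
  return CRGB(200, 0, 255);                // pump arm: violet
}

void setup() {
  FastLED.addLeds<WS2812B, DATA_PIN, GRB>(leds, NUM_LEDS);  // chipset assumed
  FastLED.setBrightness(180);
  for (int i = 0; i < NUM_LEDS; i++) {
    leds[i] = colorForPosition(i);   // tune each LED by where it sits on the strand
  }
  FastLED.show();
}

void loop() {
  // Static lighting: nothing to update each frame.
}
```

Mapping index ranges to regions keeps the tuning simple: an LED’s color follows from where it sits on the strand, so re-coloring a region is a one-line change.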

Jacked was on exhibit at the Craft Contemporary and stood in the window opposite the La Brea Tar Pits.

3D printed skulls lit from below with projected, animated text beside them

In addition to the Panthera Atrox, Cynthia had printed three skulls of animals that have been lost to extinction. We installed mini projectors above the skulls and I animated the text displayed beside each skull to slowly disintegrate in After Effects. The text is from Charles Harper Webb’s poem, “The Animals are Leaving”.

For more information on the installation visit Cynthia’s Jacked: Panthera Atrox page.

Collaboration

It’s been quiet in my creative world…but not uneventful. Over the last couple of months I’ve had the pleasure of working with Cynthia Minet on her upcoming installation, “Migrations.” Cynthia is an accomplished artist and her creations are constructed from post-consumer plastics and LED lighting. Migrations depicts six Roseate Spoonbills in varying stages of flight. With this sculpture Cynthia hoped to push the lighting a little further than she had in previous work.

There were two goals.

  1. Have greater control over the color and brightness of each LED
  2. Add movement to the sculpture by animating the LEDs
A closeup of an LED behind a magnifying glass with its wires splayed out as I do my best to connect it to another strand of LEDs.

After some initial conversation a third goal popped up. If we’re going to be programming these LEDs could we also add some motion activated audio to immerse the viewer in the world of the spoonbill?

After some testing we settled on the P9813 LED pixels. The plastic casing around the actual LED helps diffuse the light. The fact that the strands run at 5V was an added bonus.

To program the lights and the motion-based audio I knew we were going to use something in the Arduino family. The spoonbills do not have a ton of room inside of them so we opted for a Trinket to run the lighting and a Trinket Pro to run the audio system. Ideally everything would run off of one board but that just wasn’t feasible here. This also cut down on the cost for each sculpture.
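
The detailed posts are coming, but as a rough preview, the lighting half on the Trinket could look something like the sketch below using FastLED’s P9813 support. The pins, LED count, and the traveling-pulse animation are assumptions for illustration, and the Trinket Pro’s motion-triggered audio isn’t shown.

```cpp
// Sketch of the lighting half on the Trinket (the audio runs on a separate
// Trinket Pro, not shown). Pins, LED count, and the animation are assumptions.
#include <FastLED.h>

#define NUM_LEDS  30   // assumed per-bird count
#define DATA_PIN  0    // assumed Trinket pins
#define CLOCK_PIN 2

CRGB leds[NUM_LEDS];

void setup() {
  FastLED.addLeds<P9813, DATA_PIN, CLOCK_PIN, RGB>(leds, NUM_LEDS);  // P9813 strand, 5V
  FastLED.setBrightness(160);
}

void loop() {
  // Simple "movement": a soft pulse that travels along the strand.
  static uint8_t offset = 0;
  for (int i = 0; i < NUM_LEDS; i++) {
    uint8_t wave = sin8(offset + i * 8);   // 0-255 sine value per LED
    leds[i] = CHSV(200, 120, wave);        // pink-ish hue, brightness follows the wave
  }
  offset += 2;
  FastLED.show();
  delay(20);
}
```

Because the P9813 is a clocked chipset it needs both a data and a clock pin, but no strict signal timing, which makes it forgiving on a tiny board like the Trinket.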

The next few posts will get into the details of the wiring, programming, testing, and installation of the lighting and audio systems. 

If you’re around this weekend (Oct 21 and 22) you can see the sculpture in its current state at the Brewery Art Walk. Art Walk runs from 11a-6p both days.