
New Technology Experiences

Experience Imagination | Episode 2

Show Host: Abhinav Narain – Project Coordinator

Studio Guests: Saham Ali – Director of Technology, Craig Barnett – Senior Pipeline Technical Director, David Consolo – Technical Manager, Jesse Allen – Editorial Director

Listen to New Technology Experiences on iTunes, Spotify or Google Play

The way we experience entertainment is always evolving.

And at the heart of that evolution is new technology. On this episode of Experience Imagination, we sat down with our own tech developers and coordinators to find out more about what the latest science and gadgets are capable of, and where they'll transport us next.

About the Show:

Experience Imagination is a themed entertainment podcast presented by Falcon's Creative Group. Every episode covers a new topic with a panel of creative professionals. Tune in and subscribe on iTunes or Google Play.

New Technology Experiences Transcript:

Cecil: You're listening to Experience Imagination, a themed entertainment design podcast presented by Falcon's Creative Group. Every episode we discuss a new topic with a panel of creative professionals. Hi, I'm Cecil Magpuri, President and Chief Creative Officer of Falcon's.

Abhinav: Hey everybody, this is Abhinav, our moderator for the episode. Hey Cecil, how's it going?

Cecil: Hey Abhinav, how you doing?

Abhinav: Doing good.

Cecil: Awesome.

Abhinav: Today's topic is on new technology.

Cecil: New tech.

Abhinav: In the themed entertainment industry.

Cecil: Yes.

Abhinav: Why is this an important topic to discuss?

Cecil: New technology is very important, obviously, to Falcon's Creative Group, because we're always on the cutting edge, sometimes the bleeding edge. When it comes to trying to introduce immersive experiences, technology is a key component of that.

Abhinav: Absolutely. And who's joining us in our conversation today?

Cecil: Today, Saham Ali, our Director of Technology, is gonna be joining us. Obviously, that's a key role.

Saham: Hey, how's it going?

Cecil: David Consolo is also gonna be joining us; he's heading up our R&D and gaming division.

David: Hey guys.

Cecil: Craig Barnett, our Senior Pipeline Technical Director, is joining us.

Craig: Hey everyone, how's it going?

Cecil: And Jesse Allen's gonna be joining us as well; he's our Editorial Director.

Jesse: Hey, thanks for having me here.

Abhinav: Perfect. Well, we'll go ahead and dive into that conversation, and at the end, we'll circle back with you, Cecil, to get any final thoughts.

Cecil: Perfect.

Abhinav: The first question to start off the conversation: what kinds of technologies are you all most excited about right now? Either ones that are currently available, or ones that we see heading toward us on the horizon?

Saham: Man, there's a lot of stuff going on, as we can tell every time we check out these conferences. Primarily for me, the thing that has really changed a lot of what we do, and a lot of what the industry does, has been the advancements in the GPU technologies that are available, right? Video game cards, basically, are making it possible for us to do things that before just weren't feasible on traditional CPU systems, and to scale way better than we've ever been able to on CPU systems.

Saham: Game engines in particular have taken a huge leap with these technologies. Game engines can now basically produce graphics quality that's effectively as good as an offline renderer.

Abhinav: Absolutely.

Saham: It's really getting hard to discern what is real-time and what is offline rendered. But there are still challenges in a real-time world. You have to put everything together a little bit differently, and that's less a constraint of the GPU technology; it's more that we're just kinda trying to get these game engines up to speed on doing these really complex things. But again, it's real-time.

Saham: And obviously, the things that come out of that would be, like we talked about, light field displays and these game engine technologies. Being able to take renders that traditionally would have taken us 24 to 36 hours down to minutes, or hours. It's just paving the way for more advancements, and being able to turn out higher-quality visual effects and entertainment.

Jesse: And I would say, for me, the thing that's got my interest is mixed reality, and positional tracking of physical objects. I think those two worlds will kinda come together, not in the traditional sense where you put on a headset and see virtual objects in the physical environment. Eventually it will get to a point where projection mapping will do a lot of that, and make it completely accessible for every kind of person to walk in, have a mixed reality experience, and have it react to where they are in physical space in real-time. I think that's the most exciting thing.

Abhinav: There's a question of separation between you and the content that tends to come up when we talk about virtual reality and mixed reality. David, you have any thoughts?

David: I'm very interested in holographic displays. Every time one comes out that doesn't require me to wear glasses, where I can look into a box and see the virtual world inside of it, it's always really cool. And there are some really cool advancements in the industry right now through lenticular displays, or pieces of film that flap back and forth super-fast, that really make virtual objects appear in stereo and in actual volumetric space. I'm always looking for that, and that's the kind of stuff I think is underused in the themed entertainment industry. You could actually have it everywhere as an interactive entertainment display; just having guests interact with it is always really cool-

Saham: Segueing off of what Jesse said, I think it's really cool ... I mean, the tracking technologies themselves being available to the consumer. David, you're able to leverage tracking technologies that were basically a result of VR and use them in a non-VR way. That's awesome. These types of technologies are easier to get now. Before, to do any type of tracking, I remember it was a quarter million dollars for a mocap system.

David: Oh, those were huge.

Saham: Don't need that anymore.

David: Yeah. So accessible nowadays. These pieces, these components that were originally made for VR are now like a couple hundred dollars, and you can just use those components by themselves. Integrate them into projection-mapping systems and mocap systems, and ... they're even more efficient than those extremely expensive mocap systems.

Abhinav: Yeah. At NAB, the guys that make the stereo cameras for VR headsets, so people can get their fingers and stuff in it?

Saham: Yeah.

Abhinav: That same technology, they're wanting to just stick onto cameras and use for the film guys to do real-time green screening. To have depth information, to pull a key when there is no green screen. That's just another application of the same technology that was made for a sister industry, effectively, right?

Saham: Right.

Craig: Yeah, with all this VR stuff that's coming out now, being able to really put you into these different, unique worlds, and with how much computer graphics have advanced, everything already feels so real when you're in there. I'm just really excited for all the advancements in technology that actually let you feel, or interact physically with, whatever's in there. Like haptic suits, and all that kind of stuff.

Craig: I already feel like visually I'm there, but physically I'm not. So I'm really excited to actually feel like I am the character there, rather than feel like I'm controlling it. 'Cause right now when I'm playing a VR game or something like that, I feel that I'm not that person, that I'm just controlling-

Abhinav: You're like looking through-

Craig: ... Kinda like Pacific Rim style, or something, right?

Abhinav: ... Sure.

Craig: That I'm just mimicking whatever it's doing. But I wanna actually be able to feel it: if I pick something up, I wanna feel it. Or if something hits me, I wanna feel something.

Abhinav: Just for those of us who are not as well-versed. What is the haptic suit?

Craig: Basically, it's like in the popular movie Ready Player One that just came out: a suit that lets you feel, via electrical signals everywhere on your body, whatever it is you're experiencing. A similar experience is touchscreen phones, which have haptics now to let you know that you've actually pressed a button on the screen; you'll feel a little vibration to know that you're actually touching it.

Craig: So it's a similar concept, but with different strengths of it, to make it feel like you're actually holding something in your hand that could be soft, could be hard, could be sharp. You actually feel the edges of whatever it is based on the electrical signals it's sending through your body.

Abhinav: It could get that precise down onto-

Craig: I mean, that's what they're working on getting to. And, now obviously, something like the edge of a knife is a very, very sharp object, and I'm sure that's something they're still working on.

Saham: ... Haptics has gotten to a point where we're looking at that ultrasonic device. Because combining that tech with, let's say, holograms or whatever ... I mean, literally imagine you're looking at a hologram, whether it's through a headset or it's somehow magically projected in front of us, but you can actually touch the hologram. And the person on the other end of that hologram can now feel you touching them, same concept.

Abhinav: Virtual handshake.

Saham: Yes.

Abhinav: Wow.

Saham: That's kind of where things could be headed with these type of technologies merging together.

Abhinav: That's incredible.

Craig: I saw an omnidirectional treadmill that-

Saham: Yeah, exactly.

Craig: ... It wasn't even a flat surface, it was actually moving parts. And as you slowed down or stopped ... like if you just immediately stopped, you would actually fall forward because the ground is still moving, just like in real life. If you're running, you can't just immediately stop. There's definitely a lot of cool stuff on the horizon in that area.

David: We tried an experimental haptic feedback system for your hands. It was an array of transducers, or mini speakers. Basically, you put your hand over it, and the skin on your hand is so sensitive that you're able to feel the sound waves emitting from it.

David: The ultrasonic. Basically, you're able to feel these little micro changes, and you're able to feel shapes in it, alongside a Leap Motion, basically. But I'm really excited for that kind of tech too, where you now have a system that can replicate those senses for you.
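
For readers curious about the mechanics behind that ultrasonic effect, here is a minimal sketch of the core idea: delay each transducer's output so every wave arrives at a chosen point in phase, concentrating pressure there. The grid geometry, constants, and function names below are illustrative assumptions, not any vendor's API.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

# Positions of a 4x4 transducer grid, 1 cm pitch, lying in the z = 0 plane.
TRANSDUCERS = [(x * 0.01, y * 0.01, 0.0)
               for x in range(4) for y in range(4)]

def emission_delays(focal_point):
    """Per-transducer delays (seconds) so every wave reaches the focus in phase."""
    dists = [math.dist(t, focal_point) for t in TRANSDUCERS]
    furthest = max(dists)
    # Transducers nearer the focus wait longer, so all arrivals coincide.
    return [(furthest - d) / SPEED_OF_SOUND for d in dists]

# Focus a pressure point 10 cm above the middle of the array:
delays = emission_delays((0.015, 0.015, 0.10))
print([f"{d * 1e6:.2f} us" for d in delays])
```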

Jesse: Playing on the whole Leap Motion thing, how you actually control and experience this is changing. I mean, we grew up in the generation of here's-your-joysticks and all that stuff, and the younger generation's already dealing with tablets and cellphones. It's leading to the point where there is no controller; there's gestures, and how you interact with objects physically. And that's super cool, because now you've taken something that was only accessible to people who had those joystick skills, and they can go play as if it was the real world.

Abhinav: Real quick, Jesse, what is Leap Motion?

Jesse: Leap Motion is basically a gesture tracking device, about half the size of a credit card. It sits either on the table or, what they've been using it for lately, tracking hand motions from a virtual reality device. The Leap Motion actually attaches to the front of the lens, and it shoots out an infrared beam to get light on the subjects, and it can track both your hands.

Jesse: When you open up an application to edit it, you'll actually see finger data and gesture data, and it recognizes several commands: thumbs up, hands down, hands up, hands open, those kinds of things. From that, you can use any of that data to trigger pretty much anything you want. In some of the experiments I've done with it, I've had it do things like, if you close your hand, it changes the color of an object.

Abhinav: You have that variability.

Jesse: Yeah, and you can make it do whatever you want.

David: What I find so cool about the Leap Motion is, originally you were supposed to place it on the table in front of you, facing upwards, and you were able to interact with a display using only your hands. And then someone, either in a forum or in the actual studio itself, decided once Oculus and VR headsets came out: what if we put this system on the headset itself, facing outwards?

David: And the studio reacted, made it faster, and now it's an incredible gesture system for VR. If you try VR with the controllers, it's one thing. You try VR with Leap Motion hand gesture technology, it's a whole different experience. You can pick and place objects, and your hand is really there.
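
To make the gesture-to-action idea concrete, here is a minimal sketch of the pattern Jesse and David describe: per-frame hand data comes in, and a recognized gesture fires an event, like the close-your-hand-to-change-color experiment. The Hand fields and frame source are hypothetical stand-ins, not the actual Leap Motion SDK.

```python
from dataclasses import dataclass

@dataclass
class Hand:
    pinch: float    # 0.0 = fully open hand, 1.0 = closed fist
    palm_up: bool

def classify(hand):
    """Map raw per-frame hand data to a named gesture."""
    if hand.pinch > 0.9:
        return "fist"
    return "palm_up" if hand.palm_up else "open"

COLORS = ["red", "green", "blue"]

def run(frames):
    """Cycle an object's color each time the hand closes (rising edge)."""
    color, was_fist = 0, False
    for hand in frames:
        is_fist = classify(hand) == "fist"
        if is_fist and not was_fist:
            color = (color + 1) % len(COLORS)
            print("object color ->", COLORS[color])
        was_fist = is_fist

# Simulated frames: hand open, then closing into a fist.
run([Hand(0.1, False), Hand(0.5, False), Hand(0.95, False)])
```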

Saham: Yeah, I'd actually given a talk about exactly this in the past. It's all about I/O, right? How we've been interfacing with computers for the last 30, 40 years. It's ancient. It's keyboard and mouse.

Abhinav: I feel like the same thing is kind of happening with the voice activation.

Saham: Sure, yeah.

Abhinav: You don't always want to talk out loud to get what you want. Sometimes you want it to be a lot easier than that.

Saham: Yeah.

Abhinav: But it sounds cool when you see it in the movies.

Abhinav: So, a lot of this technology that we're talking about has so many different applications across so many different industries. Where does it come into play when it comes to themed entertainment? Because a lot of these experiences are so personal; there's a lot of scope and a lot of scale, like a haptic suit, but just for one person. How far away are we from being able to incorporate that into a themed entertainment experience? Is it even feasible right now?

Saham: The technology's out there. It's, how do we apply those technologies for the use case? Again, clear example: David took a technology that was meant for VR and made an amazing use case of it in a non-VR application.

Saham: So, it's just finding the right creative thing that we're trying to do, and finding the right technology that fits that. Now, we all know VR is a little isolating, and AR is kind of like the Holy Grail of this stuff. So, if we are going to start creating these AR attractions, what else ... do we use an omnidirectional treadmill, and what other technologies do we have to put together into a sort of holistic system to create this effect?

Jesse: That kind of leads us into one of the things that's complicated in themed environments: a lot of times an audience is coming down a track, or they're walking down a specific hallway, and you have to do a lot of corner-pinning types of techniques. And that leads us into a really cool technology that's coming out right now, called light field.

Abhinav: Yes, we brought up light field a little earlier. Let's talk a little bit about what exactly that is.

Saham: Sure. Well, you can look at it in two ways, but if you're looking at light field in terms of the way we have to capture it, it's basically being able to capture something in real life from multiple vantage points, so you're not just looking at it in a 2D or monoscopic window. You're actually looking at it from multiple camera angles. An example being, we all have our Pixel phones and Google phones, and they introduced this concept where you can take a picture and then change focus after the fact. That algorithm was kind of like light field lite, right? Being able to change focus after the fact.

Saham: Lytro came out with a camera, kind of ahead of its time. That was a light field camera that allowed you to do that with a single lens. They eventually realized it was really promising and moved on to a bigger cinema camera, and now they're capturing, I think it was 96 different vantage points from one angle, which basically gives you a volumetric view of a 2D image. You can allow the person to move their head around.

Saham: Now, that's capturing that type of volumetric data. The output side is kind of what David's talking about: holographic displays, things that are moving really fast. But how do you now allow the user, and multiple users, to look at that volumetric data from any vantage point, without technically knowing where they are, and without forcing them to wear a headset?
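
The "change focus after the fact" trick Saham mentions is commonly done with shift-and-add refocusing: given many views from slightly different vantage points, shift each view in proportion to its camera offset and average them, so one depth plane lines up sharply while everything else smears into blur. Here is a toy sketch on 1D pixel rows; the data and sign conventions are invented for illustration.

```python
def refocus(views, offsets, alpha):
    """Shift-and-add refocus over 1D views.

    views:   list of equal-length pixel rows from adjacent vantage points
    offsets: each view's camera offset from the center view
    alpha:   chosen focal depth; it scales the per-view parallax shift
    """
    width = len(views[0])
    out = []
    for x in range(width):
        acc = 0.0
        for view, off in zip(views, offsets):
            shift = int(round(alpha * off))   # parallax for this depth plane
            acc += view[(x + shift) % width]  # sample the shifted view
        out.append(acc / len(views))
    return out

# Three vantage points of the same bright feature; alpha = -1 matches this
# feature's parallax, so it snaps into focus at one position.
views = [[0, 0, 9, 0, 0],
         [0, 9, 0, 0, 0],
         [9, 0, 0, 0, 0]]
print(refocus(views, offsets=[-1, 0, 1], alpha=-1.0))  # [0.0, 9.0, 0.0, 0.0, 0.0]
```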

Abhinav: Is there an example that currently exists?

David: I actually met a couple guys at GDC that were displaying a holographic box that used a lenticular array of pixels. It was an eight-to-ten-inch box, you're able to see inside of it, and it has 32 different layers. In real time, you can hook up a game engine like Unity and feed the game engine's video into this lenticular display, and as a user you're able to see 32 planes into depth. It was really cool. It was the closest thing I've seen to a real-time ... I mean, it was real-time. It was a real-time holographic display. It was really impressive.

Jesse: For the listeners out there, if you want to see this in action: one of the cool things that just came out was GoPro did a volumetric photograph of the deck of the Space Shuttle Discovery. So you're standing where the crew sits in the Space Shuttle Discovery, and it's a VR experience, but it's showing this technology. It's a still photograph, but you can actually change your body and head position, look around the seats, and get closer to the controls and stuff. So it feels less like a photograph and more like a physical environment. They have several photographs in this display. Was it GoPro or Google that released this?

Saham: It was Google, which basically bought up a company that was kind of leading the way in processing these data types and creating this light field thing. Yeah, the space shuttle, a clear example. Amazing, right? What's the use case? The biggest issue we've seen with live action? It's still 2D imagery.

Jesse: Right.

Saham: Like everyone's talking about, "I want to go to a sporting event. I want to be able to watch the sporting event in VR. Even in stereo, it's flat and I can't move my head; it's one vantage point." Well, this light field technology, this volumetric VR, effectively, is going to allow filmmakers that still want to do live action, people running around on set, to capture that data in such a way that the user can look at this perfected shot from basically wherever they want.

Saham: Like, I want to look around the corner, when we didn't plan to shoot looking around the corner. Well, I have the data for it, so now I can allow the user to do that.

Jesse: Well, in a themed environment, that's super cool, because now you have an audience of, let's say, 40 guests standing in a room, and they're able to look at this kind of data all from different vantage points. Every single person has a completely different experience based on where they're standing in the room, and you can use live actors, and you can tie the virtual world into it, to make it feel even more realistic for them.

Craig: So, we're talking a lot about some of these technologies, like lenticular displays. Are there any actual use cases that the average person at home experiences? Like lenticular displays: isn't that what the 3DS uses?

Group: Yes, yes.

Craig: So, a 3DS is a good example. I recently got a 3DS, which I know is a little behind the times, but just looking at the 3D in it really is so captivating and fascinating, because I'm not wearing any glasses. I'm just standing there looking at the screen, and things seem to pop up off the screen, and it's almost unreal. The average person, in their day-to-day life, doesn't have experiences like that. The 3D that most people know is when you go to the movie theater, where you are putting on glasses.

David: Now, the crazy difference between a 3DS and displays nowadays is that the 3DS is single-user, because it actually tracks your face, like a wiper, and moves in the direction of your specific face. So another user wouldn't be able to experience what you're experiencing. But in a themed entertainment system, you need multiple users to experience it as easily as possible, and nowadays these holographic displays are allowing multiple users to experience it all together. That's pretty cool.

Craig: Is that just because of the way the display is layered? Is it viewable from basically any direction? There's no facial tracking needed anymore?

David: Yup. It's basically like how they used to do the old animation systems, like Walt Disney: they would layer film after film after film just to get the parallax effect. But instead of film, it's actually pixels transparently layered across a number of layers, spaced a little bit apart as well. In this case, there are a couple that are 32 layers. There's so many different-

Saham: So, it's probably only going to get better, where the resolution of these slices effectively becomes your volumetric resolution, you know?

David: Now, the limitation is, you only have so much space. You can make things come closer and go further, but you can't make them go that far away, because there's only so much space in there. So there are still limitations to that system, but it's getting there. It's pretty cool.
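
A rough sketch of the layered-display idea David describes: a rendered frame's depth buffer is quantized into the display's fixed number of physical slices, which is also why the depth range is limited to the box itself. The 32-layer count follows the conversation; the buffer values and range are illustrative.

```python
N_LAYERS = 32
NEAR, FAR = 0.0, 1.0  # the box's usable, finite depth range

def layer_index(depth):
    """Assign a normalized depth value to one of the display's physical layers."""
    d = min(max(depth, NEAR), FAR)      # clamp: content can't leave the box
    return min(int(d * N_LAYERS), N_LAYERS - 1)

# A tiny 2x2 "depth buffer" from a rendered frame (0 = near, 1 = far):
depth_buffer = [[0.05, 0.50],
                [0.51, 0.99]]

layers = [[layer_index(d) for d in row] for row in depth_buffer]
print(layers)  # [[1, 16], [16, 31]]
```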

Craig: I guess that's where AR kind of comes in to help push that farther, so that you can go any depth.

David: AR also uses it. There are new headsets now that are using light field technology as well. Basically, with a system like a very simple AR headset, you're showing a single plane of light per eye. So, let's say you were to close one eye as you're looking through this AR headset: you would look at an object that's really close, and one that's really far, and virtually you wouldn't be able to see the difference, because there's no depth of field. There's no parallax. You would only get the sense of parallax when you have two eyes looking at these images.

David: However, with a light field display, if you were to close one eye and look at one virtual object, the object that's further away is blurry, because it's actually replicating real light. It's actually light entering your eye with true depth of field. So it just looks even better.

Abhinav: A lot of the technology that we're discussing today revolves around sight: visual effects, crispness, 4K, all of that. We talked a little bit about some tactile elements with the haptic suit and controller vibration. Have there been other advancements in technology designed for other senses that we haven't talked about? Audio, scent-based, I guess?

Saham: Smell-O-Vision?

David: Yeah, smell-o-vision. So, I went to GDC and found a booth that displayed this tech: an attachment for a VR headset where you could actually smell scents. It had a set of five smells that you had to refill, but it looked pretty cool, and that's just the next level of that merger. We used to have, I mean we still do, we have 4D dark rides and theater rides that have smell integrated, and lights, and it's just that extra sensory addition.

Abhinav: That kind of component is always so impactful when it comes to immersion. Has the actual execution of it, or delivery of it, evolved over time? I don't exactly know if it's even something that needs to be improved.

David: Until we can actually create atoms, or change atoms out of thin air, change their composition and make something we can smell generated in real time, unfortunately, there's only so much we can do.

Abhinav: There are only so many ways you can do the smell.

Saham: We have to find five ingredients that, if you mix them in different ways, could make any smell, from a dirty diaper to a banana.

Group: Yeah, right.

Group: Yeah, yeah.

Jesse: That would be awesome.

David: Some other cool tech that I've seen is the ability to have haptic feedback in your hands as you grip objects. There are simplified controllers where it's just a trigger finger: you hold the basic VR controller as you would, but the trigger actually has force feedback. So as you pull it, it will pull your finger backwards by the same amount.

David: Say you want to pick up a sphere in the world. As you push down on it, it will actually push in the opposite direction to stop your finger from moving; or it could be squishy, right?

David: They've been doing the same thing with gloves too. They're haptic gloves, but they're more of a physical glove, so when you want to hold a squishy duck, it will actually squish, and there'll be motors that pull on your fingers and give you resistance.
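
The force-feedback trigger behavior David describes amounts to a simple spring model: resistance proportional to how far your finger has pressed into the virtual surface, scaled by the object's stiffness. A toy sketch with made-up constants:

```python
def trigger_force(finger_travel, surface_at, stiffness):
    """Resistance (N) pushing back on the trigger finger.

    finger_travel: how far the trigger is pulled, 0.0 to 1.0
    surface_at:    trigger position where the virtual surface begins
    stiffness:     pushback per unit of penetration (high = rigid, low = squishy)
    """
    penetration = max(0.0, finger_travel - surface_at)
    return stiffness * penetration  # Hooke's law: F = k * x

# The same squeeze feels different on a squishy duck vs. a steel ball:
print(trigger_force(0.6, surface_at=0.4, stiffness=5.0))   # 1.0  (gentle pushback)
print(trigger_force(0.6, surface_at=0.4, stiffness=80.0))  # 16.0 (hard stop)
```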

Abhinav: As Craig was saying, the eventual goal is to get to the pinpoint prick-of-a-knife level of precision.

Craig: Yeah, to actually feel everything you can, the actual texture of things. That's one thing I'm really interested to see the development of. It's one thing to be able to say, "Yeah, this is a sphere. This is a cube." But it's another thing to be able to say, "Yeah, this is bark. This is sandpaper."

Craig: "That's wet."

Craig: Yeah, "This is wet. This is slippery." Those kinds of feelings.

David: Hopefully one day we'll be able to connect directly to our brains and stimulate the senses that way.

Craig: Right, and that's how we could do smell too, right? Where you're saying, how do we actually create atoms out of thin air? Well, maybe you don't need to, if you can trick the brain into doing it.

Craig: But smell's one of those things, though: how much do you really want to smell? For me, I don't enjoy that level of immersion for some reason. Like A Bug's Life.

Abhinav: That's a good point.

Craig: The stinkbug in A Bug's Life at Disney really bothers me. I actually don't like the ride because of that. And Spaceship Earth, they have the burning smell, and I'm allergic to smoke. Every time I ride that ride and smell that smoke, I don't know if it's just psychological, but I actually feel my asthma and my allergies kick in, even though I know there's no smoke there.

Abhinav: That's a really interesting point, because we allow our senses to be engaged with so many crazy things, if you think about it: loud jump-scare noises, incredibly disgusting visuals. But it's interesting to think about; maybe there are one or two senses where it crosses a line for people. That's not something I'd ever considered before, but that's very true.

Craig: Especially if you go for realism. Like, if you had a zombie game with the smell of actual rotting zombies, no one would play that.

Abhinav: Yeah, if you smell that.

Jesse: Yeah, that might be a bit much. Of course, as a guy who came from an audio engineering background, one of the things for me is 3D positional audio. It's incredible. We've kind of grown up in the era of cinema sound, where you have 7.1 or 5.1 and you're basically swirling sound around in a circle, right? It's all in this linear format. Now you're starting to see things like Dolby Atmos, where it's positional-based audio, so you can have multiple speakers. We're talking 10 or more speakers above you, behind you, upper and lower, because the human ear is so good at figuring out where sound is actually coming from. You can notice the difference: if you see a character talking on the right side of the screen but the sound is coming out of the center channel, your ear goes, "Yeah, it's coming out of that speaker in the center."

Jesse: Where now, some of the 3D positional stuff will move that across emitters to say it's 5% in the middle, 10% over on the right, another 20% above, or whatever, and now your ear starts to believe that, yeah, that sound is actually coming from that position in space.
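
Jesse's percentages map naturally onto object-based panning: compute a weight per speaker from its proximity to the virtual source, then normalize the weights so they sum to one. Here is a minimal sketch; the speaker layout and inverse-distance weighting are illustrative choices, not Dolby Atmos's actual renderer.

```python
import math

SPEAKERS = {                 # speaker positions in room coordinates (x, y, z)
    "left":      (-1.0, 0.5, 0.0),
    "center":    (0.0, 1.0, 0.0),
    "right":     (1.0, 0.5, 0.0),
    "top_right": (1.0, 0.5, 1.0),
}

def pan(source, power=2.0):
    """Per-speaker gains, summing to 1, for a virtual source position."""
    weights = {}
    for name, pos in SPEAKERS.items():
        d = math.dist(source, pos)
        weights[name] = 1.0 / (d ** power + 1e-6)  # nearer speakers get more gain
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# A source just right of center and slightly overhead:
for name, gain in pan((0.4, 0.8, 0.3)).items():
    print(f"{name}: {gain:.0%}")
```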

David: The interactive guys and I, we play games where we need to hear footsteps, where things are coming from, or where shots are fired. So it's really great, and it's a whole different feeling playing a game in VR, because you're able to turn your head, and as you hear the tail of a gunshot or a footstep, you can orient yourself exactly how you would in real life. It's super informational.

Jesse: And on that note, you're playing in headphones and you're able to turn your head and not have the whole soundscape follow it; it's tracking as if it were in a physical space. So if you hear something behind you and you turn your head, it's now on the appropriate side, not like your headphones are just reorienting the world.
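
The head-tracked headphone behavior Jesse describes boils down to recomputing each source's direction in head-relative coordinates every frame, undoing the head's rotation so sources stay fixed in the world. A yaw-only sketch, with invented geometry:

```python
import math

def source_bearing(source_xy, head_xy, head_yaw):
    """Direction of a fixed sound source relative to where the head now faces.

    Returns radians: 0 = straight ahead, positive = counterclockwise.
    """
    dx = source_xy[0] - head_xy[0]
    dy = source_xy[1] - head_xy[1]
    world_angle = math.atan2(dy, dx)
    return world_angle - head_yaw  # undo the head's rotation each frame

# A source directly behind the listener (180 degrees)...
print(math.degrees(source_bearing((-1, 0), (0, 0), head_yaw=0.0)))          # 180.0
# ...turn your head 90 degrees toward it and it now sits at your side (90.0),
# instead of staying "behind you" the way untracked headphones would keep it.
print(math.degrees(source_bearing((-1, 0), (0, 0), head_yaw=math.pi / 2)))  # 90.0
```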

Saham: The tools for this have been really interesting to watch the last few years. In real-time it's basically free, right?

Group: Yeah.

Saham: It's being done. But for the guys in film, if you're shooting live action or composing a shot, you don't necessarily always get the audio exactly where it needs to be, and there are tools coming out for that. I think I saw something from Adobe where you can feed it all the audio channels and the 360 video footage. It analyzes it and basically creates a Doppler map of where the audio is coming from in your footage, as if you were in VR. Then you can literally just pan the audio in software, put it where the source is supposed to be, and it fixes it. These tools didn't exist two years ago. This is happening now, so it's really, really interesting.

Jesse: We're also starting to see the process of mixing an audio experience step away from the traditional mixing console or the mouse. We're seeing examples where you can literally be standing in a dome theater, mixing with a set of virtual reality controllers, saying, "Yeah, the sound comes over here," or, "This one's going to turn around and circle the audience this way." And certainly with gesture-based controls you could start to get into some really subtle, finite audio mixes. That's just a completely different way of approaching a project.

Jesse: So you're getting into systems like that. We're already seeing examples from Adobe where they're doing procedural voice generation: you can feed it several different lines and create a word from literally nothing, just by analyzing other data. And that opens up a huge door of variables when you start talking about interactives in a themed environment.

Jesse: Currently, you get on a ride and you hear one narrative for your story. It would be cool, especially if the guests are wearing any type of headphones, for the narrative to be tailored toward their specific experience. It's already active in a lot of modern video games, where you're hearing dynamic speech being played out in real time. We just haven't crossed that line into the theme park world, where the story can change depending on the decisions that the guest makes.

Saham: Speech synthesis with real-time AI, that's where it's headed. I mean, y'all remember when we used to be able to type in Notepad and have the computer read it off? It was this monotone AT&T voice. Now you can literally make up words that were never said by analyzing these audio tracks. It's pretty crazy. And then, if I have sample audio in a real-time game engine, I feed it 10 hours' worth of audio from a voice actor, and now we can create any possible dialogue in real time, no problem.

Craig: I mean, with AI getting more advanced, and with people teaching these machines to recognize different images and stuff like that, I wonder what the next level will be. We can recognize objects now. Well, as we watch a video and recognize these objects, let's associate sounds with the objects we're seeing. Is there a point where you can just give this software a video and it can create the sound for it? Based on, "Yes, I see that as a gun shooting. I know what a gun sounds like, and that gun is right there, so I can make that sound come from right there."

Saham: And an audio model of what it's going to sound like in this room, because this is-

Craig: Right, where you don't have to do the sound effects and mix, where the AI can learn from ... You give it every movie that's existed so far, and it can learn from all those objects and sounds. I'm sure in the future it'll be something we're looking at. Actually, I just saw something very recently: someone had gotten the drawings from The Flintstones.

Saham: Yes.

Craig: And they've created an AI now that can procedurally generate episodes of The Flintstones. It will just piece together pieces of what they've done and create new stories that have never been told before. Sometimes it actually does well enough that you can't even tell it's not an actual clip from The Flintstones. Other times you'll see really weird things, where it'll be Fred's body on somebody else's head or something, because it tried to piece it together the way it thought it was supposed to go, and it wasn't. But it's one of those things: we're already getting that way for creating video content.

Jesse: Algorithms as a director. Yeah, that's fantastic technology.

Abhinav: I think that about wraps up the conversation. Thanks guys.

Group: Thank you.

Group: Thanks for having us.

Abhinav: We want to thank our panelists again for joining us in this conversation. Cecil, any final thoughts?

Cecil: Wow, that was a wonderful conversation. It's intriguing to hear such dynamic dialogue, and to be candid, it's refreshing: I had preconceived ideas of what was going to be discussed, and it's nice to see new ideas percolate up in the conversation that I hadn't even really thought about. Haptics is an interesting thing.

Abhinav: Definitely.

Cecil: The physical aspect of feeling things. When we do mass experiences, you start to eliminate opportunities because of the difficulty of having every guest experience it equally, and haptics was never in mind to introduce, but in reality that's starting to come into play.

Abhinav: I was really excited about getting the conversation into new technology that's not just about evolving the visual side of things but about the full sensory spectrum of what you can experience.

Cecil: All the different senses.

Abhinav: One really interesting point that I wanted to get your thoughts on was the idea that there may be some senses where immersion goes too far. There was a point in the conversation where we were talking about smell, or even taste. You may not want to be that fully immersed if the story involves zombies or something that could be-

Cecil: Burnt flesh.

Abhinav: Yeah, offensive in some way, and that was a really interesting storytelling challenge that I had never really considered.

Cecil: We designed the Hard Rock Vault, and we introduced scent in the CBGB's room and urine was one of the-

Abhinav: Oh, really?

Cecil: Yeah, fermented beer and urine and cannabis were part of the effort.

Abhinav: It's authentic.

Cecil: Yeah. I thought it was over the top, but in reality it brought people's memories back to what it was like to be in those clubs. So in reality-

Abhinav: It could work.

Cecil: There might not be a limit, because sometimes you push the envelope to create and evoke an emotion. You may not want it to be an hour long-

Abhinav: Of course.

Cecil: Of an experience, but snippets of these extreme moments, even that might be okay depending on duration. So, interesting dialogue.

Abhinav: I guess it really comes down to the type of audience and making sure that you really understand them.

Cecil: And I think the director. The composition, how you compose these assets in telling a story, and how frequent, how little, and how large you make it is part of the effort as well. So, all good tools.

Abhinav: Absolutely. Well, we'll see you in the next episode.

Cecil: Thank you. This was great.

Cecil: This has been Experience Imagination. For more information about this episode's discussion, be sure to visit our blog at FalconsCreativeGroup.com, and don't forget to follow Falcon's Creative Group on LinkedIn, Facebook, and Instagram.
