During my research for the Shot on What article, I enlisted the powers of futurologist, film director, screenwriter and author Maxim Jago (FYI, it’s pronounced ‘jay-go’) to stare into the crystal ball of future tech. The globe-trotting Maxim is just back from Sundance where, amongst other duties, he was speaking as FilmDoo.com Chief Innovation Officer. His insights into trending tech are always on the money. Just before he zoomed off to shoot a commercial in New York, I crossed his palm with virtual silver and quizzed him about the future.
Maxim, thanks for finding time to speak to us. OK, what’s in the hopper right now? What’s piqued your interest in the world of filmmaking?
“Last year 4K was the buzz. This year VR is what’s exciting me. I think this is huge. It’s no accident that Facebook spent something like $2 billion on Oculus. We’re about to enter a new era of immersive gaming and filmmaking. There are already a number of people working on fully surround video capture systems. The enormous challenge will be in placing the camera and crew, as there’s nowhere to hide any more. As cameras get smaller and lighter and drones get more stable, I foresee increasingly autonomous drone camera operators doing much more than just following a pre-determined path. Instead, I see them having their own opinions about framing shots, with key subjects and the shot path being the only things set in advance.
Our task as filmmakers is to give audiences experiences. I think people forget this sometimes. The less we remind the audience that they’re watching a film, the freer they are to let go and be carried away by the experience we give them. This rule applies whether we’re talking about colour acuity or honest performance. We naturally want to adjust our viewpoint: people turn down the radio to park, or turn their heads to hear better. People tilt their heads to see architecture more clearly or close their eyes to hear the music. When VR gives us retina displays and 7.2 surround in a headset, we’ll see a dramatic transformation in the experiences we can provide.
In fact, we’re moving towards a time when brain activity can be read AND written, sidestepping the exterior senses completely. This is enormously powerful and, like fire, could be used for incredible good or terrible ill. We’ll have to see whether the people wielding it are noble enough to be trusted with it. Still, imagine choosing to experience standing on the highest peak of a mountain in Tibet. You can hear the flags fluttering in the wind and, some distance away, livestock waiting to be fed. There’s a bell chiming, far away, and you make a mental note to explore further up the path when you have time. A voice behind you belongs to your guide, Anita, an autonomous AI tailored to your personality who immediately sets you at ease and invites you to explore with her by your side. You turn to see Anita as naturally as you would if you were standing on the real mountain but, in fact, you’re sitting in a chair with a headset on that disables parts of your motor neurones to avoid accidentally hurting yourself, while giving feedback to your thalamus that you are, indeed, moving. Lucid dreaming brought to you by technology. But who writes the code? Who shapes the experiences?
I would argue that any film with a sufficiently authentic depiction of reality (any reality) and a character you can empathise with is effectively a real, personal experience for the viewer. This is because the part of the brain that has experiences does not differentiate between real experiences, memories, dreams, or stories that are compellingly depicted. That’s why dreams can genuinely change the way you feel about yourself, someone else, or your life choices.
History is written by the victors, they say, but who will be the victors in the deeply connected worlds of VR, AI and bio-tech? We haven’t spoken much about bio-tech, but it will come to be so pervasive, and invasive, that it will probably change our working definition of what it means to be human…”
One area in particular that we often associate with future tech is holographic projection. Are we any closer to a Princess Leia and R2-D2 ‘Help me, Obi-Wan’ scenario?
“Ha! Yeah… I was speaking with a very high-level physicist friend about this a few days ago (yep, I have friends I ask for advice…). We reckoned the only way you could genuinely create holographic projection would be to have either a coloured (or colour-changing) material that floats in the air, or a floating semi-transparent material onto which an image is projected. You just can’t make light stay where it is and emit energy without an enormous investment of energy. We’re talking here, of course, about an object that appears to be in front of you, that you can view from any angle as if it were a moving, coloured sculpture. There are already some interesting VR headsets coming along nicely that will give the appearance of holograms – but you have to wear something for it to work. We also have some pretty compelling display mediums, like glass that you can project onto, giving the illusion of objects floating in space (in fact, they used to do this with angled glass in darkened theatres lit by candlelight to create the illusion of ghosts).
Further in the future, I suppose it won’t matter, because the whole experience will be projected into your brain… By then, though, we won’t have such a clear definition of what it means to be human (ye gads!).
The problem with making particles float in the air in a consistent way is getting them to stay in exactly the right spot. Still, I could see a way of doing this with nano-tech electromagnetic charges targeting particles of different magnetic ‘weight’ – but how fast they’d be able to move, I don’t know. More will be revealed, I suppose, as we begin to unlock gravity.
Coincidentally, I was disagreeing with my physicist friend about the standard model for gravity (bending space). I proposed what I consider to be a much simpler model and he agreed I might be right – what a laugh if one day it turned out I was right :).”
We are witnessing an unprecedented ramping up of product development cycles. The moving target analogy seems to fit the bill, but is this rate of technological development sustainable? Are we going to see a plateau any time soon?
“I suspect these cycles will be increasingly automated, with more assistance from AI operating in virtual environments. They’ll get even faster too, with iterations applied by AI and some standard testing (that would take hours or days if a human performed it) completed in moments in an accelerated virtual environment. In the shorter term, I think we’ll see the flowering of distributed working methods allowing people in more time zones to collaborate effectively (we’re already seeing this happen now). This means development cycles will be less tied to times of day, with people handing the baton to another team as their day ends. This won’t work for the kind of activity that requires a single mind to focus over time, but it’s great for the development of multiple components – whether that’s in media production or car widgets. The key thing is that we can turn development from large single releases of technology into a stream of steady improvements. To a great extent, Adobe has achieved this with the excellent implementation of their Creative Cloud solution. Still, the updates are pretty big downloads when they come – I’d like to see an ongoing stream of tiny updates, where smaller numbers of files are updated iteratively.
This is extremely difficult for human developers to do because they need to go through a regular QA process for each batch of updates. In a virtual environment, with a fast enough machine, new files could be tested in moments and distributed worldwide in minutes – often without users even knowing an update has been applied.
There’s a certain elegance in design that was promoted years ago by the architect Christopher Alexander. His ideas about making ‘life full’ buildings were so clever that his work is often studied by high-level programmers wanting to up their game. Elegance in design is something we see emerging in nature as a product of millions of years of iterative evolution. What if we could perform those iterations in a virtual space at an incredibly accelerated rate, only producing physical items (using more advanced 3D printing to produce electronics) when required?”
What about codecs, content delivery pipelines and storage architecture?
“We’re on a mission to re-create reality for our audiences. Interestingly, it looks like audience engagement has little to do with the accuracy of the reproduction. Nonetheless, we want to fake experience, and the big barrier is data rates. If we had fast enough pipelines, we could probably just capture everything uncompressed and display it that way, but we don’t have enough storage or data transfer rate. So… we decompress and recompress, but it’s a bit of a pain :). H.265 is available but requires so much processing that many portable devices will struggle with it. It’s very efficient, but for now I have it on good authority that H.264 is still the codec du jour.
One thing I see coming is an increase in the commoditisation of content creation. YouTube is now a properly viable channel for media discovery and audience building. The money is terrible unless you make it big, but people do, and when they do, they get paid handsomely. Catching onto the benefits of this, YouTube now has studios where sufficiently successful content creators are invited to produce content, free of charge, to a much higher standard. I’m not sure how viable this model is, since there are just so many content creators. I also don’t think money is ‘the only fruit’. People like to share stories and perspectives and may not need to earn a living from it.
I’m waiting on 3D crystalline storage – it’s long overdue. I recall seeing a demonstration of a product on Tomorrow’s World (remember that?) many years ago that used crossed laser beams to read and write data into something the size of a sugar cube. It’ll come… soon! Ish!
Very interesting point about film as an archival solution. There’s basically nothing to beat it at the moment. From a production point of view, though, while film might work well for bigger-budget productions, on more modestly budgeted projects it introduces too many overheads and too many unknowns. It’s interesting that people describe it as a known quantity and, therefore, a safe bet. In fact, not knowing whether the rushes have worked until you get them back from the lab brings quite a lot of uncertainty. Again, fine if you have the budget to hang around a location until you’re happy with what you have, but a problem for smaller productions that may only have access to a location for a few hours.
There are some promising archival technologies I’ve heard of, but none of them seem to have come online. I heard talk of using DNA to store data (incredibly long shelf-life and very robust), of holographic three-dimensional storage mediums that ‘cross the beams’ to target binary entries inside a material, of holographically layered discs (Blu-ray 2.0, perhaps) and, of course, of tape-stream or even regular hard drives that are replaced every three years to avoid data loss. I’ve heard of everything from using ink and paper (at very high resolution – because you must admit, books last an awfully long time if they are stored correctly; much longer than film) to spinning gas particles in a chamber that cycles them thousands of times a minute, using the particular angle of the spin to store multi-step data (more subtle than binary and, so, more efficient).
I’ve also heard (years ago) of etching silicon, using the same technique used to create microchips – something like a 1,000-year shelf life. Of all of these, I suspect the DNA option is the most interesting. Nature has had an awfully long time to perfect data storage in the form of DNA and, if we can find a way to harness the same principles, we may be able to expand our data storage and retrieval (speed of retrieval is a key component in the process that’s often ignored because people focus on longevity).
I understand Wikipedia has an emergency solution that works like this: if the world as we know it is about to end, users can print pages of Wikipedia, and pages that have yet to be printed are identifiable. Ideally, three copies of every page on Wikipedia will be produced, so there’s a physical version that doesn’t depend upon electricity for access. The great thing about film, of course, is that you can shine any old light through it and look at it with a reasonably low-tech magnifying glass. Of course, if AI takes over the world and treats us like interesting animals in a zoo, all of this may be a moot point :).
We do have systems in place, but what I’d like to see is something like a crystal-based storage solution. The great thing about man-made crystals is that they have an enormous amount of regularity at a very small scale. If we can create a crystal that responds to light intensity, photon spin or even plain old electromagnetism, we might be able to produce that three-dimensional, high-capacity storage medium with a shelf life in excess of 1,000 years.”
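As an aside, Maxim’s earlier point that data rates are the big barrier to uncompressed capture is easy to quantify with a back-of-envelope calculation. This is a rough sketch only; the UHD frame size, 8-bit RGB sampling and 24 fps are illustrative assumptions, not figures from the interview:

```python
# Rough storage cost of fully uncompressed video capture.
width, height = 3840, 2160      # UHD frame (assumed for illustration)
bytes_per_pixel = 3             # 8-bit RGB, no chroma subsampling
fps = 24                        # cinema frame rate

frame_bytes = width * height * bytes_per_pixel   # bytes in one frame
rate_mb_s = frame_bytes * fps / 1e6              # sustained data rate, MB/s
hour_tb = frame_bytes * fps * 3600 / 1e12        # storage for one hour, TB

print(f"{rate_mb_s:.0f} MB/s sustained, {hour_tb:.2f} TB per hour")
# → 597 MB/s sustained, 2.15 TB per hour
```

Roughly 600 MB/s and over 2 TB per hour for a single camera, which is why efficient codecs like H.264 and H.265 remain essential despite the decompress/recompress pain Maxim describes.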
One last question that I know is close to your heart. What will be the impact of cross-media delivery – gaming, film and sonic material marketed under a single title?
“I believe this is absolutely the future – but only for certain projects. There are lots of novels that are written and that’s that, and lots of plays that never become films. Some kinds of experience are best served in a particular way. However, as we improve our ability to blend mediums, I think we’ll see a concerted effort to do so. For me, Orpheus Rising is particularly interesting because I want to rebuild the passive audience experience as an active one, while retaining the full atmosphere and qualities of the film. Part of what makes this possible is having the cast provide some of the ‘lifefulness’ of the game: we’ll motion-capture them, capture their facial expressions and record their voices, making the game not just similar in these respects but genuinely the same. I’m of the view that it’s the movement and the sound that make an experience authentic, more than the images.”
Thank you, Maxim. There’s plenty for us to think about here. It’s always a pleasure to speak to you. I know you’re busy, but will you come back and join us again soon for an update?
“Anytime, and the pleasure is all mine.”
If you are very lucky you can catch up with Maxim Jago in person at NAB 2015. Meanwhile, check out Maxim’s appearances on the Digital Production Buzz podcast on YouTube with Larry Jordan and Mike Horton, read about his latest projects at www.maximjago.com, watch and learn at Lynda.com, and follow @MaximJago.