Advanced Production Pipelines
Over the past two years, we have been conducting R&D into a high-fidelity, sustainable, practice-based advanced technology production pipeline for film, games, XR & immersive media, built on Radiance Fields, Gaussians, and AI augmentation and optimisation techniques. We are at the beginning of a long innovation pathway towards a controllable, scalable, sustainable and fully iterative methodology for media production, one that delivers the time & cost efficiencies that genAI has promised but has yet to deliver.

Human-centric, practice-based contextual capture of direction, acting, choreography, camera and lighting sits at the core of this pipeline, while every captured asset remains fully open to iteration and elaboration in a true 3D virtual production pipeline after capture, without the cost, resource and energy inefficiencies of large LED volumes. Our intention through this R&D is to democratise a highly sustainable, flexible and iterative production methodology: one that is cost- and time-efficient, scalable to any production and format, and supportive of both existing and new forms of media practice, and that maximises emotive storytelling through post-capture iteration of performance, environment, camera framing, movement and lighting in service of narrative intent and design.
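As a minimal illustration of the scene representation underlying this kind of pipeline (a simplified sketch, not our production code), a set of 3D Gaussians can be rendered by projecting each Gaussian's mean through a pinhole camera and alpha-compositing the resulting screen-space footprints front to back. All parameter names, the isotropic-scale simplification and the camera model below are illustrative assumptions.

```python
import numpy as np

def splat_gaussians(means, colors, opacities, scales, f=100.0, res=(64, 64)):
    """Render isotropic 3D Gaussians with a pinhole camera at the origin,
    looking down +z: depth-sort, project each mean to the image plane, and
    alpha-composite front to back. A deliberately simplified sketch of the
    Gaussian-splatting idea (full pipelines use anisotropic covariances,
    tiling, and spherical-harmonic colour)."""
    h, w = res
    image = np.zeros((h, w, 3))
    transmittance = np.ones((h, w))      # fraction of light still unblocked
    ys, xs = np.mgrid[0:h, 0:w]          # pixel-centre grids (row, column)
    for i in np.argsort(means[:, 2]):    # near-to-far depth order
        x, y, z = means[i]
        if z <= 0:
            continue                     # behind the camera
        u = f * x / z + w / 2            # horizontal image coordinate
        v = f * y / z + h / 2            # vertical image coordinate
        sigma = f * scales[i] / z        # screen-space footprint shrinks with depth
        g = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2 * sigma ** 2))
        alpha = np.clip(opacities[i] * g, 0.0, 0.999)
        image += (transmittance * alpha)[..., None] * colors[i]
        transmittance *= 1.0 - alpha     # nearer splats occlude farther ones
    return image
```

Because compositing runs near to far, a nearer Gaussian correctly occludes a farther one sharing the same line of sight; this depth-ordered blending is what lets the representation be re-rendered from any camera after capture.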
MAUD SAGA - HEIRS TO A BROKEN AGE ADVANCED MEDIA PRODUCTION
MAUD - HEIRS TO A BROKEN AGE PROLOGUE TEASER
The Maud Saga Prologue teaser was made as a collaboration between Maud Saga author L.L. Junior; VFX designer, supervisor and producer Carl Grinter; and animator, compositor, AI specialist, music composer and sound designer Brian King. The team scripted and storyboarded the teaser and developed the Maud world and its characters from L.L. Junior's descriptions, The Maud Saga Book I of VII, "Heirs of a Broken Age", early AI tests, and successive rounds of character development and iteration. The animation was created using 3D models as guides that inform the generative AI: they keep characters, environments and lighting consistent across shots and steer the generated actions in each scene, giving the team the ability to direct and design every scene with a high level of detail and coherence.
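The guide-model workflow described above can be sketched in simplified form: render a depth map from the 3D guide geometry for a chosen camera, then supply that map as the conditioning image to a depth-guided generative model so character placement, scale and framing stay consistent between iterations. The point-based projection below is a hypothetical, minimal stand-in for a real renderer; the generative step itself is deliberately omitted rather than invented.

```python
import numpy as np

def guide_depth_map(vertices, f=100.0, res=(64, 64)):
    """Project the vertices of a 3D guide model through a pinhole camera and
    keep the nearest depth per pixel. The resulting map is the kind of
    conditioning image a depth-guided generative model can consume, so the
    generated frame inherits the guide's layout and framing."""
    h, w = res
    depth = np.full((h, w), np.inf)
    for x, y, z in vertices:
        if z <= 0:
            continue                                  # behind the camera
        u = int(round(f * x / z + w / 2))             # column
        v = int(round(f * y / z + h / 2))             # row
        if 0 <= u < w and 0 <= v < h:
            depth[v, u] = min(depth[v, u], z)         # nearest surface wins
    depth[np.isinf(depth)] = 0.0                      # empty background
    return depth
```

Because the depth map is re-rendered from the same guide geometry for every take, the conditioning stays identical even as prompts, lighting notes or style references change, which is one practical way to hold characters and environments consistent across generated shots.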
