Article on AI from SHOTS.NET

Fresh from creating a nightmare-inducing experiment in the form of Synthetic Summer, Private Island director/co-founder Chris Boyle dissects the explosive impact of the AI tools at our disposal nowadays, and what understanding it now will mean for the future.


Artificial Intelligence is so hot right now, but like the return of the 90s brow, there are a lot of conflicting spicy takes surrounding it.

Is it just snake oil? Another tech phase that will fizzle out like the Metaverse? Or are we a bunch of Neanderthals with front-row seats to our own obsolescence?

Gulp. Maybe it's both.

It's ‘complicated’ because, like a lot of tech buzzwords, it's tricky to define what we mean when we say AI. Is Google's search AI? What about Photoshop's magic wand or a Tesla's parking? Or is it deepfakes? ChatGPT? Midjourney? Drake x The Weeknd?

Well, the simplest answer is they all are. Because AI in 2023 isn't a single thing - it's a bustling ecosystem of companies leveraging machine learning to create tools that advance tech in a whole host of areas. What was once the preserve of academia and Big Tech has been disrupted by smaller startups and a boatload of people in their bedrooms and offices developing these systems to do all sorts of mind-bending things; not least making what’s been roundly called NIGHTMARE FUEL.

Above: An example of 'nightmare fuel'; Boyle's hilarious/horrifying Synthetic Summer experiment, entirely generated via text to video.

The timeline from a scientific paper being published to running it on your PC has dropped from years to minutes, and the arrival of ChatGPT has brought the conversation into every boardroom and LinkedIn post across the planet. (Rest assured, dear reader, this article is written by my own hand - hence the typos...)

This renAIssance (sorry) is accelerating so fast, it's freaking people out. The recent call for a pause on all AI development for six months seems unlikely to succeed, especially when key signatories like Elon Musk are also buying tens of thousands of GPUs for their own AI start-ups.

Above: Infinite Diversity in Infinite Combinations, a short from Private Island, released earlier in the year, that used neural networks to write, visualise and voice a discussion on diversity in the workplace, with chilling results.

For sure, challenges exist. Much of the tech is paywalled, and the majority is controlled by a few companies that may well monopolise the industry. More problematically, issues with race and gender bias are endemic, as language models inherit our own bias. We made a short film, Infinite Diversity in Infinite Combinations, earlier in the year to highlight this, and you don’t have to dig deep to find uncomfortable truths.

Issues of ownership and copyright persist. Hopefully, with a greater understanding of how these models use data in their training process, along with a new generation trained on their own stock, like Adobe's Firefly, this specific issue is on its way to being resolved.

Beyond all of this - we should maybe not miss the wood for the trees - societal changes are inbound, and as we continue to work with AI, it will take on more responsibilities. This ain't NFTs; it’s real, and it's happening right now.

Perhaps the most crucial issue to understand is that, for the moment, AI isn't intelligent in any real way. We're dealing with something that has extremely limited reasoning. These are tools that are trained by humans to present as human. So, for the foreseeable future, a person operating these systems isn't just important, it's essential.

So - zooming out from the singularity - what's the deal for animators and filmmakers?

Well, on Private Island, since COVID, we've been sneaking machine learning into what we do. From dragons for ITV to digital ageing for Ecover to even deepfaking for MoMA, it's becoming integral to our daily workflow.

Above: Machine learning used to create an eerie piece for a Gillian Wearing exhibition.

It's been an exciting, if occasionally frustrating, process. A madcap collection of programmes, coding, patches, calls to IT, emails to customer services, a ton of loading bars, and an embarrassing amount of technical and emotional crashes.

Nevertheless, we’ve found it helps us do what we do already, but better and faster; be it mechanical, like rotoscoping, or creative, like ideation for pitching or even generating original assets.

Is it worth it? Well, the rewards are there; shots that would have seemed like a fever dream a couple of years ago are increasingly attainable. We work a lot with archive, and our new gadgets allow genuinely innovative techniques to extract 3D models from stills or precise mocap data from the footage, which in turn allows for new and exciting options in an edit.

Above: Extracting 3D models from 2D images - a task made easier through AI gadgets.

More than that, we like the look of it. It's often a mesmerising combination of old and new. Melting and smashing disparate mediums together with technology is part of what we do. It's fun and weird. Like us. Often we'll work with a variety of aesthetics in a single project, so these techniques add more to the mix. Same skills; different tools.

For the moment, it's not about generative image or synthetic video replacing the pillars of shot live action, archive, or animation, but instead either operating as a fourth discipline or enhancing one of the other three.

In time, these will all meld tighter, with animation, AI, and stock all blurring. But the innate creativity of performance will mean live-action isn't going anywhere soon.

Above: Fred Does the Robot, an experiment of extracting mocap data from footage.

AI is expanding our horizons and making us feel like kids in a candy store because it's novel, but in truth, it is just another utility to be used selectively when it makes sense to do so.

As developments continue to progress at a dizzying pace, the future remains uncertain. I wrote a version of this article before Christmas and then again last month, and both aged like milk, but I’d bet a slice of the balmy Private Island real estate that the impending AI wave will be transformative for the visual arts. Good and bad, it’s happening, whether we're ready for it or not.

2022 marked a revolution in static generative art, and 2023 appears poised for the emergence of synthetic video as a bona fide creative tool. For sure, it's currently deeply uncanny, but that's where we were this time last year with stills, which are now near indistinguishable from the real deal.

With hindsight, maybe we’ll look back on this period as the calm before the storm, but certainly, the time to get involved is now.

In the words of William Gibson: "The future is already here. It's just not evenly distributed yet."

Full Article Here: https://www.shots.net/news/view/weird-science-mixed-media-in-the-age-of-ai
