Video, the final frontier?

From email to spreadsheets to corporate Instagram Reels, AI has given us new ways of working and new ways of thinking about both our work and our leisure.

How we think when we use so-called thinking machines can provide a framework for growth and innovation. Most of us are past the fearmongering and negative media reports on AI and need to get on with it!

Typically, the West has relied on science and rational thinking to develop these AI tools. Where we go next will require some new processes. Some commentators say we’re moving into Software 2.0, where we describe a goal we want to achieve and train a model to accomplish it. Rather than having a human write instructions for the computer to follow, training works by searching through a space of possible programs until we find one that works. In Software 2.0, problems of science, which concern formal theories and rules, become problems of engineering, which concern accomplishing an outcome.
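To make the idea concrete, here is a minimal, hypothetical sketch in Python: rather than a human writing the rule y = 2x + 1 by hand, we state a goal (minimise prediction error) and let gradient descent search over candidate parameters until it finds a “program” that works. The data and model here are invented purely for illustration.

```python
# A minimal sketch of the Software 2.0 idea: instead of hand-writing the rule
# y = 2x + 1, we specify a goal (minimise prediction error) and let gradient
# descent search the space of candidate "programs" (here, weight w and bias b).
import numpy as np

# Training data generated by the rule we want the model to discover.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs + 1.0

w, b = 0.0, 0.0   # initial candidate "program"
lr = 0.05         # learning rate

for step in range(2000):
    preds = w * xs + b
    error = preds - ys
    # Gradients of mean squared error with respect to w and b.
    grad_w = 2.0 * np.mean(error * xs)
    grad_b = 2.0 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=2.00, b=1.00
```

The same principle, scaled up enormously, is what sits behind the text-to-video models discussed below.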

A phrase that has been popping up in the many discussions on AI regulation is the need for explanation and prediction. Generally, we in the West have searched for meaning and understanding via explanations: “How did it grow? How does it make money? Who are we?” etc.

However, now the emphasis is on ditching the search for explanations (and control) and coming up with accurate predictions. The pursuit of predictions over explanations turns science problems into engineering problems. The question becomes not, “What is it?” but instead, “How do I build something that predicts it?”

Given this new perspective, I have been researching one of the last art forms to be transformed by AI: filmmaking and video generation. Filmmaking is a massive and complex industry that demands extensive collaboration and creative effort, and it remains hugely popular. Most people can recite famous lines or scenes and, if pushed, name their top 10 films, no matter the screen size.

Recent advancements in text-to-video AI generation have significantly enhanced the ability to create engaging video content from written scripts. Here’s my overview of the latest developments and the top six platforms leading the market.

Realistic AI Avatars: Many platforms now offer highly realistic AI avatars that can deliver scripts in multiple languages, making videos more relatable and engaging for diverse audiences. These avatars can mimic human expressions and gestures, improving viewer connection.

Multilingual Capabilities: Enhanced multilingual support allows creators to reach global audiences without language barriers. This includes automatic translation of scripts and voiceovers in various languages.

Customisation and Personalisation: Users can create custom avatars or clone their voices, adding a personal touch to videos. This feature is handy for brands wanting to maintain a consistent voice across their content.

Integration with Other Tools: Many platforms now offer robust API integrations, allowing seamless connections with other applications for automated video generation, which is crucial for scaling content production (see the example sketch after this feature list).

User-Friendly Interfaces: Advances in user interface design have made these tools more accessible, enabling users with minimal technical skills to produce professional-quality videos quickly.
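As a concrete illustration of that kind of automation, here is a short, hypothetical Python sketch of submitting a script to a video-generation API. The endpoint URL, field names, and authentication header are invented placeholders rather than any specific platform’s documented API; check your chosen provider’s documentation for the real interface.

```python
# Hypothetical example of automated video generation via a platform API.
# The endpoint, API key header, and payload fields are illustrative
# placeholders, not the documented API of any specific platform.
import requests

API_KEY = "your-api-key-here"                                   # placeholder credential
ENDPOINT = "https://api.example-video-platform.com/v1/videos"   # placeholder URL

payload = {
    "script": "Welcome to our quarterly product update...",
    "avatar": "presenter_01",   # choice of AI avatar
    "language": "en-GB",        # voiceover language
    "resolution": "1080p",
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
job = response.json()
print("Video job submitted:", job.get("id"))
```

Wired into a content pipeline, a call like this could turn each new blog post or product update into a video without anyone ever opening an editor.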

Before listing the top six platforms, I must delve into the most recent development from OpenAI: Sora. OpenAI has taken a new approach to crunching video data and developed a sophisticated cinematic tool.

However, its current availability is limited to select users for testing purposes, with broader access anticipated in the future. The best way to see this AI tool in action is to hop over to your favourite video platform and type “Sora” into the search bar. It’s impressive, and the comments are hilarious.

Another AI video I like is from US filmmaker Dave Clark. He is a traditional filmmaker who has started to make AI-generated videos. He recently produced a sci-fi short called Borrowing Time, inspired by his father’s experiences as a Black man in the 1960s. He created it entirely using Midjourney and Runway to generate images and videos. He narrated the movie himself and used ElevenLabs to turn his voice acting into the voices of different characters.

Borrowing Time went viral, and Dave is on the record saying he wouldn’t have been able to make it without AI. It would’ve been impossible to get a sci-fi short like his funded by a traditional Hollywood studio. But now that it’s out and popular, he says he’s fielding interest from top-tier Hollywood studios who want to make it into a full-length movie.

Top 6 Platforms for Text-to-Video AI Generation

Runway: From complex physics-based simulations to hyper-realistic renders, its AI tools let you generate production-ready assets with speed, control and fidelity. Free version available; paid plans start at $12/month.

Synthesia: Realistic, customisable AI avatars with support for 140+ languages. Starts at $22/month.

DeepBrain: Lifelike avatars, scene-specific editing, no watermarking. Starts at $30/month.

Lumen5: Ideal for social media content, URL-to-video feature, and extensive media library. Free version available; paid plans start at $19/month.

InVideo: Template-driven video creation, supports 21 languages for voiceover. Free version available; paid plans start at $15/month.

Elai: Voice cloning in 28 languages, URL-to-video capability. Pricing varies based on usage. 

These platforms represent the forefront of text-to-video AI technology, offering diverse features that cater to various needs in digital content creation. These are exciting times, and these powerful tools are already available in a browser on your laptop. What are you waiting for?

 
