Has RTE replaced its presenters with AI avatars?

Source: RTE.ie

From the latest headshots on the RTE website and promo posters, it certainly looks as if the hosts have been given some sort of visual firmware update. Is the new look eerily vacant because the retouching artist is using AI filters to achieve this frozen, waxwork quality, or is it the visual trend of the moment?

It reminded me of the closing of Eurovision last weekend, when we were treated to a virtual performance by the ABBAtars, who have that same uncanny valley look about them (but are actually digital avatars). With their Voyage show, our Swedish friends have almost completely sold out seven shows a week since May 2022. They have sold more than 1.5 million tickets and generated over $150 million in sales without the human versions breaking a sweat. Well, maybe the data centre gets a bit warm.

The visual style is not dissimilar to the ABBAtars

A big part of me hopes that this is just a trend, that AI is part of the zeitgeist, and that we are not experiencing some permanent shift to computer-generated content. It's understandable, though. We are excited and impressed by the new. In the 60s, we were inspired by rockets and space travel. The 80s gave us electronic, 8-bit, neon vibes.

Artists, musicians, writers, designers, scientists, and engineers have always used their imagination and creativity to experiment, discover and drive society forward. Culture shapes and is shaped by human expression. When people get bored of the status quo they are inspired to create the new "new"!

Today, OpenAI trains its models on hundreds of years of human innovation and originality. It pulls from this immense canon of work to synthesise text, images and video for paying subscribers. Similarly, in 2004, Google began an ambitious project to scan 25 million books and make them available to the public through the Internet. It argued that it was "fair use", as do the AI companies now.

DALL-E generated images

ChatGPT can provide you with a sonnet about cryptocurrency as if written by Shakespeare. DALL-E can produce images in the style of virtually any artist, and Sora can create realistic, cinematic clips as if by magic. The technology is undoubtedly impressive, and the results are often fantastic, surprising and thought-provoking. As someone who has loved technology all my life, I feel like I should be really excited about this. Instead, I find myself worrying about the answers to questions like:

1. Are we replacing creativity with productivity?

We are being sold on the potential for this technology to allow for greater creativity. I struggle to see the creativity in typing a prompt for the machine to spit out a picture of the pope wearing a puffer jacket as a response.

Algorithms and machine learning are attempting to approximate what the user is looking for, but there was no exploration, learning or growth for the human in this equation. Sure, it's fast, perhaps seemingly magical, but it's not creative.

And we need creativity. Many societal, environmental, and other problems require our focus, attention, and patience, not instantly generated solutions. If we offload our creativity to generative AI for the sake of productivity, we run the risk of losing a huge part of what makes us human. Just ask Apple about its recent Crush campaign for the new iPad Pro and how it apologised to creatives for "missing the mark".

On a personal note, I also find it offensive and dehumanising to think that a friend or colleague would use AI to compose an email and expect me to respond (using my brain). Let me know what you think about that. Do you agree?

Is this creative?

2. Who is benefitting?

There's a paradox around the issue of plagiarism with these models. OpenAI is feeding its models with works by authors, artists, musicians and filmmakers, largely without permission (although it is also paying for some royalty-based training data). If, for example, a generative AI were trained on just one artist, you could reasonably expect the profit created from that training data to be attributed (at least in part) to the original creator.

But when you train your model on all of the creative output of millions of creators over the span of human history, all of a sudden it becomes impossible to attribute anyone, thus solving the problem for the tech companies who create the models. It almost seems that stealing an idea from one person is a crime, but stealing everyone's ideas is OK.

To add to the injustice, current and future creators will find their roles obsolete. Who wants to spend tens of thousands of euro on a production team, writers, directors, actors, set designers, sound technicians and so on, when you could get a 30-second AI-generated video in a few minutes for peanuts? No wonder investors and venture capitalists are champing at the bit to fund these companies.

Relating to my own industry, I admit to feeling a certain level of threat from generative AI products. Given the right sample data, a subsequent version of Figma could create an ideal end-to-end customer journey, complete with email notifications, sign-up flow and checkout screens 😰

Companies like Google, Microsoft, Adobe, OpenAI and others have a huge interest in earning many more billions for their shareholders, investors and employees. This new area of AI growth is firmly in their sights as a primary target. It's such a shame that it will come at the cost of the artists and creators that allowed such a product to exist.

3. What about trust and truth?

But my main issue with the application of this technology is its potential to further erode our trust in each other and the institutions that we rely on. In recent years, we have seen how misinformation has caused chaos in elections, hindered our progress in reversing climate change, and resulted in more suffering and loss of life in the Covid pandemic, to name but a few.

As people increasingly turn away from verified news and media to platforms like X and TikTok as their source of news, we leave the fact-checking and moderation (or lack thereof) to other giant tech companies.

Quite often as we scroll these feeds, we are forced to stop and take a second look (which is sometimes no accident), to question if what we are seeing is true. Yes, Photoshop has allowed us to do this for years, but it took a human, time and effort to craft a "believable" image.

When content can be AI-generated, indistinguishable from reality, and delivered at global scale in an instant, AI content may well overtake authentic content, and that's a serious cause for concern.

Conclusion

The tricky thing about AI is that it doesn't have to tell the truth. Perhaps that's what adds to the sense of magic. It will often take a stab at something rather than saying "I don't know", and therein lies the danger. We are increasingly OK with giving control of our decisions, and in some cases our lives, to these "beta" computer systems. AI is already generating content and driving our cars. Algorithms decide who we follow, what we watch, and who we listen to. I think we need to wait until these systems come out of beta before we hand over the keys.

Now, that’s much better

Carlos Garcia

I use Design Thinking to focus teams, build prototypes, and test concepts with real people in 30 days

http://www.carlosgarcia.ie