Generative Artificial Intelligence, or Generative AI, is revolutionizing the way we create and consume content. It is a disruptive powerhouse that can produce new text, images, videos, music, and more at a speed that was once beyond imagination. Generative AI is likely to impact many industries, but it is already reshaping the media landscape: research shows that in 2022, the media and entertainment segment held 34% of the global generative AI market.
The wide range of AI tools available today lets creators produce content more easily and efficiently, but it has also raised new challenges around intellectual property and ownership. The use of generative AI in the media and entertainment industry therefore has significant implications for both creators and consumers.
How is GenAI impacting the media industry?
Media companies can leverage generative AI to improve both their productivity and the diversity of their content. Several use cases stand out in the media industry.
Scalability
Recent years have seen a sharp increase in the adoption and consumption of digital media. While this growth is a positive sign, it also puts greater responsibility on media houses and creators to churn out content at an accelerated pace. Generative AI can help the media and entertainment industry capitalize on these consumption trends: it offers the potential to change how creators, anchors, and other content professionals work by providing foundational ideas, backgrounds, and outlines to build on.
Accessibility
In addition to scaling content creation, Generative AI also holds the potential to transform the media and entertainment industry’s approach to content localization, making it more accessible and inclusive. Conventionally, media houses have relied on manual translation for local penetration of their content, which can be time-consuming and resource-intensive. GenAI acts as an efficient and affordable alternative that can significantly increase content accessibility and inclusivity. In turn, this can expand the viewer base beyond the traditionally dominant segments of society.
Translation has always been a tricky problem in technology. Large language models are showing potential to solve it by going beyond word-by-word translation to context-specific translation. More work is still needed, however, before the difference between human-translated and LLM-translated content becomes small enough to ignore.
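As a minimal sketch of the idea, the snippet below asks an LLM to translate a line of news copy while supplying editorial context rather than requesting a literal rendering. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and helper function are illustrative placeholders, not a recommendation of any particular provider.

```python
# Minimal sketch of context-aware translation with an LLM, assuming the
# OpenAI Python SDK (pip install openai) and OPENAI_API_KEY in the environment.
# Model name and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate(text: str, target_language: str, context: str) -> str:
    """Translate text while giving the model editorial context,
    instead of asking for a literal word-by-word rendering."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model
        messages=[
            {"role": "system",
             "content": ("You are a news translator. Preserve tone, idiom, and "
                         f"factual detail. Editorial context: {context}")},
            {"role": "user",
             "content": f"Translate the following into {target_language}:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

print(translate("The monsoon session of parliament begins today.",
                "Hindi", "Political news bulletin for a general audience"))
```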
Personalization
Research shows that a majority of content consumers are not satisfied with generic content and instead look for personalized experiences. In today’s era of hyper-personalization, viewers expect that what they see, where they see it, and how they see it are tailored to their expectations. GenAI can help deliver this: leveraging a combination of algorithmic learning, predictive analytics, and user behaviour analysis, it can power contextual messaging, hyper-personalized media and content recommendations, and tailored storytelling suggestions. However, the real test will be ensuring that this does not create more personal bubbles of news and views, as we have seen with social media.
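To make the mechanics concrete, the sketch below ranks candidate articles by cosine similarity between their embeddings and a user-profile vector built from viewing history. The vectors, titles, and function names are hypothetical; in practice the embeddings would come from a text-embedding model and logged user behaviour.

```python
# Illustrative sketch of behaviour-driven recommendation: rank candidate items
# by cosine similarity to a user-profile vector. The vectors here are made up;
# a real system would derive them from an embedding model and usage logs.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# User profile: e.g. the average embedding of recently watched items.
user_profile = np.array([0.8, 0.1, 0.3])

candidates = {
    "cricket-highlights": np.array([0.9, 0.05, 0.2]),
    "election-analysis":  np.array([0.1, 0.9, 0.1]),
    "film-review":        np.array([0.4, 0.2, 0.8]),
}

# Most similar items first.
ranked = sorted(candidates,
                key=lambda title: cosine(user_profile, candidates[title]),
                reverse=True)
print(ranked)
```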
Digital Twins
Increasingly, the media and entertainment industry is experimenting with digital twins of personalities to create immersive experiences for audiences. News channels, including in India, are experimenting with AI news anchors. Such digital anchors can deliver media around the clock. The real test of these AI anchors, however, will be acceptance by viewers, and we are yet to pass that test.
Challenges of GenAI
While the benefits and use cases of Generative AI in the media and entertainment industry appear boundless, we still need to tread with caution at the current stage of technology maturity. Some of the known risks and concerns surrounding the application of Generative AI include:
Content Integrity
Undoubtedly, generative AI can help create content at scale. However, with deepfakes and other associated threats, there is always a risk to content integrity. The accuracy and authenticity of AI-generated content may be compromised, as LLMs do not guarantee fact-checking, which is a core pillar of any responsible media house. This can lead to the further spread of fake news, creating alternate realities for people. The unregulated personal content space (personal blogs, vlogs, opinion pieces, etc.), which does not come under the ambit of any regulator, might produce more deepfakes and content bubbles. This is an area of concern, and regulation is warranted at both global and national levels.
Legal Risks
The other major challenge comes in the form of IP and copyright infringement. Since generative AI builds content and media from already available information, plagiarism and related risks are likely to follow. Most GenAI platforms rely on freely available content on the Internet to train and build their models, and this is prone to legal issues and copyright risks.
Many publishers are already putting up legal clauses to stop unregulated crawlers from “stealing” the content on their websites and portals. Publishers invest a lot of effort (and cost) to generate authentic content for their readers and viewers, so LLM providers will need to find ways to do only permission-based crawling in the coming days. Many publishers are also exploring technical measures to stop these crawler bots, as sketched below. This space is rife with potential conflicts, and GenAI users should apply caution before using any of this content.
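One widely used technical measure is a robots.txt file that asks known AI training crawlers to stay away. The directives below are an illustrative sketch using publicly documented crawler user agents; compliance is voluntary on the crawler’s side, so this is a signal rather than an enforcement mechanism.

```
# Illustrative robots.txt directives asking known AI training crawlers
# not to fetch the site. These are requests, not technical enforcement.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```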
In most societies, the media holds a large influence over the mindset, thoughts, and perceptions of its viewers. While traditional media houses apply many checks and balances to verify content before publishing it for readers and viewers, unregulated or non-traditional media can lap up GenAI to reduce content creation effort. A basic flaw in the algorithm can lead to manipulative and misleading content that influences viewer perceptions and judgement.
Algorithms Everywhere
Algorithms are at the core of GenAI engines. However, algorithms can also introduce biases depending on the content fed to them. GenAI platforms are putting considerable effort into removing bias by introducing human feedback mechanisms. A major test of these platforms will be how well they reduce the toxicity of the content served to users; the human feedback mechanisms deployed by many LLM providers may well bring sanity to this.
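As one concrete safeguard, the sketch below screens AI-generated drafts with a moderation endpoint before they reach a human editorial queue. It assumes the OpenAI Python SDK and its moderation API; the draft texts and helper function are hypothetical, and such a screen is only one piece of a broader bias- and toxicity-mitigation effort.

```python
# Minimal sketch of screening AI-generated drafts for toxicity before human
# review, assuming the OpenAI Python SDK and its moderation endpoint.
# This is one possible safeguard, not a complete bias-mitigation pipeline.
from openai import OpenAI

client = OpenAI()

def safe_for_review(draft: str) -> bool:
    """Return True if the moderation model does not flag the draft."""
    result = client.moderations.create(input=draft)
    return not result.results[0].flagged

drafts = [
    "Local elections saw record turnout across the state.",
    "An inflammatory, hateful rant produced by a flawed prompt...",
]

# Only unflagged drafts move on to the human editorial queue.
review_queue = [d for d in drafts if safe_for_review(d)]
print(f"{len(review_queue)} of {len(drafts)} drafts passed the toxicity screen")
```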
Way Forward: Generative AI in Media
GenAI is still at a very nascent stage. Many LLM providers are racing to build ever-better content generation engines that can help achieve the goals mentioned above. However, it is not an easy problem to solve, as challenges will come from technical, legal, and ethical fronts. Like many other industries, the media industry will look forward to more mature models in the coming days while exploring existing ones for competitive advantage.