Palo Alto Networks reports 890% surge in generative AI use
Palo Alto Networks' latest report, The State of Generative AI 2025, sheds light on the significant cybersecurity challenges facing the film and television industry as generative AI (GenAI) becomes an increasingly essential resource.
The report reveals an 890% increase in GenAI traffic throughout 2024, driven by the explosive adoption of AI across industries, including content creation sectors like film and television [1]. This rapid expansion of GenAI usage presents a growing attack surface, with organizations managing an average of 66 GenAI applications, 10% of which are classified as high-risk due to lax security practices or vulnerabilities [1][2].
The fast pace of GenAI integration often outstrips companies' ability to implement appropriate security frameworks, creating risks of data leakage, misuse, and unauthorized access to sensitive intellectual property, a critical concern in creative industries that rely on proprietary content and confidential scripts [2]. The report also warns of emerging threats linked to open-source large language models: because their deployment is less regulated, attackers could exploit them, elevating risk in creative workflows that incorporate such technologies [2].
A lack of governance and weak controls over AI-generated content and data flows also heightens the chances of malware injection, supply-chain compromises, or deepfake manipulations, which can severely impact brand trust and copyright integrity in film and television production [1][4].
To mitigate these risks, Palo Alto Networks recommends implementing robust AI security ecosystems, such as their Prisma AIRS platform, that secure all components of AI infrastructure, including models, data, applications, and autonomous AI agents [4].
In summary, the main cybersecurity challenges that widespread GenAI adoption poses for the film and television industry include a rapidly expanding and insufficiently controlled AI attack surface, data exposure risks, vulnerabilities in high-risk applications, and new threat vectors from emerging AI models and unsanctioned usage. Together, these demand stronger AI governance and security measures [1][2][4].
The research, conducted by analyzing GenAI traffic logs from 7,051 customers worldwide throughout 2024 and anonymized DLP data from January to March 2025, emphasizes the need for a thoughtful approach to integrating GenAI. The company collected and reviewed all data in accordance with strict privacy and security standards to uphold customer confidentiality.
Palo Alto Networks aims to empower businesses in the film and television industry with the knowledge to navigate the AI landscape safely and leverage its benefits without compromising data security. The report cautions that careless or unauthorized use of GenAI applications can result in leaks of intellectual property, regulatory violations, and exposure of sensitive data, and attributes the rapid proliferation of GenAI applications largely to the absence of well-defined AI usage policies.
The pressure to adopt AI for competitive advantage, combined with inadequate security measures, leaves companies vulnerable to exploitation: the average number of GenAI-related data loss prevention (DLP) incidents increased 2.5-fold in early 2025. Palo Alto Networks advocates a balanced approach to GenAI integration, one built on robust security frameworks that allow AI innovation to proceed without sacrificing security.