Global AI Regulations: A Tale of Two Frameworks

The AI market is experiencing rapid growth, with the market for hospitality AI solutions expected to surpass $1.6 billion by 2026.

As businesses eagerly adopt these technologies, global regulators are increasingly concerned about the cybersecurity risks that AI introduces.

The European Union led the charge with the AI Act, a comprehensive and controversial legally binding regulation that entered into force on 1 August 2024.

In contrast, the UK introduced its AI Cybersecurity Code in May 2024, offering voluntary guidelines to address AI’s unique cybersecurity and data challenges.

In this week’s feature of the VENZA Echo, we explore the differing approaches of the EU’s AI Act and the UK’s AI Cybersecurity Code, examining what lessons can be drawn from their experiences.

The Landscape

AI technologies pose unique challenges and risks to cybersecurity, something we’ve covered in depth in previous Echo articles.

To function effectively, AI tools require vast amounts of data, which introduces security risks around how that data is collected, used, and stored. This has spurred international concern about broader risks, with the UK government noting that 47% of organisations using AI have no specific AI cybersecurity practices or processes in place.

Moreover, threat actors are increasingly using AI tools to execute sophisticated cyberattacks at scale. Emerging threats include advanced social engineering tactics and generative AI tooling that makes attacks more complex and harder to detect.

To address the risks posed by AI, legislators across the globe have begun crafting regulations for these technologies.

EU’s AI Act

The European Union’s AI Act, which entered into force on 1 August 2024, represents the world’s first comprehensive regulatory framework for artificial intelligence. The Act sorts AI systems into four risk classifications: unacceptable risk, high risk, limited risk, and minimal risk, with obligations scaled to each tier. Systems deemed to pose an unacceptable risk, such as those enabling social scoring by governments, are prohibited outright.

High-risk AI applications, which encompass areas like critical infrastructure, employment processes, and biometric identification, are subject to stringent requirements, including rigorous conformity assessments, strict data governance protocols, and mandated human oversight to mitigate potential harm. AI systems classified as limited or minimal risk face lighter obligations and are instead encouraged to adhere to voluntary codes of conduct.

Notably, the regulation has extraterritorial reach: it applies to any AI system that affects individuals within the EU, regardless of where it is developed or operated.

While the legislation aims to set a global benchmark for AI governance, it has faced criticism from both technology industry leaders, who argue that it may stifle innovation, and civil rights advocates, who contend that it doesn’t go far enough in protecting individual liberties.

UK’s AI Cybersecurity Code

The UK’s AI Cybersecurity Code, introduced in May 2024, is a voluntary framework that establishes best practices for securing AI systems throughout their lifecycle. It outlines 12 core principles, covering the secure design, development, deployment, and maintenance of AI technologies.

Like the EU’s General Data Protection Regulation (GDPR), the code builds on a shared-responsibility model, assigning security obligations to key stakeholders: Developers, System Operators, Data Custodians, and End Users. It also highlights the growing importance of securing the AI supply chain, with continuous monitoring against an evolving threat landscape.

Unlike the EU’s AI Act, the code is not legally binding. It operates instead as a normative code of conduct, seeking to shape private-sector behavior by establishing shared definitions, baselines, and best practices.

Looking forward, the UK government has expressed its aspiration to use the code as the foundation for a global technical standard. To foster international collaboration, it opened a call for feedback, which concluded on 9 August 2024, and plans to use the responses to inform future policy decisions.

Differing Approaches

The EU’s AI Act and the UK’s AI Cybersecurity Code represent fundamentally different approaches to addressing AI’s role in cybersecurity.

The EU’s AI Act is focused on risk management, especially for high-risk AI systems such as those used in critical infrastructure and biometric identification. It mandates rigorous oversight and transparency, requiring detailed disclosures about how AI systems operate, particularly when individual rights or safety are at stake. This stringent regulatory approach seeks to minimize potential harms by imposing strict controls on the AI technologies that pose the most significant risks.

In contrast, the UK’s AI Cybersecurity Code adopts a more collaborative and flexible approach. It frames AI security as a shared responsibility among all stakeholders, from developers to end users, and relies on voluntary guidelines that promote best practices in secure design, threat modeling, and supply chain security, encouraging organizations to manage AI-related risks proactively.

Together, these frameworks illustrate two different philosophies: the EU’s prescriptive and risk-averse model versus the UK’s emphasis on collective responsibility and flexibility. Each has significant implications for how AI technologies will be governed and secured on a global scale.

Ripple Effect

Global regulations and frameworks on data privacy and protection often set the stage for legislative trends in other countries.

A prime example is the GDPR, a pioneering data protection and privacy regulation that took effect in 2018. Its influence has been widespread, inspiring numerous countries, including the United States, Thailand, Sri Lanka, Pakistan, India, and, most recently, Jamaica, to adopt similar policies. While each nation adds its own elements, many follow the GDPR’s underlying principles and terminology, such as “data processor” and “data controller.”

The EU’s AI Act is already influencing other nations, with countries like Canada, Brazil, and India advancing similar risk-based AI legislation.

However, it hasn’t yet become the universal global standard. Notably, Switzerland has aligned more closely with the UK’s approach, focusing on adapting existing laws to accommodate AI, with an emphasis on transparency and data protection rather than adopting a risk-based framework.

Many countries, including the United States, Japan, and Australia, have actively debated and proposed stand-alone AI security legislation, but few have formally adopted it.

With AI security legislation still in its infancy, it remains uncertain which framework’s principles will ultimately become the reigning global standard.

Hotelier Impact

With 76% of hotels planning to incorporate AI solutions by 2025, hoteliers must stay ahead of the current regulatory landscape to ensure compliance.

The EU AI Act’s focus appears to be on the developers and providers of AI technologies, placing the onus on them to ensure regulatory compliance.

However, if frameworks like the UK’s AI Cybersecurity Code, which emphasizes collective responsibility, were adopted globally, the regulatory landscape could shift significantly. Responsibility would extend beyond developers alone to every stakeholder, hoteliers included, affecting how AI technologies are implemented and maintained.

This could lead to new compliance requirements, so it’s worth watching how the issue evolves. We’ll be sure to keep you updated.

Feeling overwhelmed? Don’t be. As a leader in hospitality data protection, VENZA provides vendor security assessments and privacy management solutions to help hoteliers navigate the evolving global regulatory landscape.

Ready to get started? Contact Sales to discuss signing up for our programs or adding new solutions to your contract.

***

Take VENZA’s free Phishing Test to assess gaps in your human firewall today!

Human Firewall

Training your personnel to recognize and report phishing attempts is essential to protecting your guests and their data. Get started by determining your risk and readiness level using this free tool.

***

Want to stay informed? Subscribe to the free VENZA Echo now. You’ll receive a monthly digest with the highlights of our weekly article series and important product updates and news from VENZA.