Who will write the rules for AI? How nations are racing to regulate artificial intelligence

  • Written by Fan Yang, Research Fellow at Melbourne Law School, The University of Melbourne, and the ARC Centre of Excellence for Automated Decision-Making and Society

Artificial intelligence (AI) is a label that can cover a huge range of activities related to machines undertaking tasks with or without human intervention. Our understanding of AI technologies is largely shaped by where we encounter them, from facial recognition tools and chatbots to photo editing software and self-driving cars.

If you think of AI you might think of tech companies, from existing giants such as Google, Meta, Alibaba and Baidu, to new players such as OpenAI, Anthropic and others. Less visible are the world’s governments, which are shaping the landscape of rules in which AI systems will operate.

Since 2016, tech-savvy regions and nations across Europe, Asia-Pacific and North America have been establishing regulations targeting AI technologies[1]. (Australia is lagging behind[2], and is still investigating the possibility of such rules.)

Currently, there are more than 1,600 AI policies and strategies[3] globally. The European Union, China, the United States and the United Kingdom have emerged as pivotal figures in shaping the development and governance of AI in the global landscape[4].

Ramping up AI regulations

AI regulation efforts began to accelerate in April 2021, when the EU proposed an initial framework for regulations called the AI Act[5]. These rules aim to set obligations for providers and users, based on various risks associated with different AI technologies.

As the EU AI Act was pending[6], China moved forward with proposing its own AI regulations. In Chinese media, policymakers have discussed a desire to be first movers[7] and offer global leadership in both AI development and governance.

Read more: Calls to regulate AI are growing louder. But how exactly do you regulate a technology like this?[8]

Where the EU has taken a comprehensive approach, China has been regulating specific aspects of AI one after another. These have ranged from algorithmic recommendations[9], to deep synthesis[10] or “deepfake” technology and generative AI[11].

China’s full framework for AI governance will be made up of these policies and others yet to come. The iterative process lets regulators build up their bureaucratic know-how[12] and regulatory capacity, and leaves flexibility to implement new legislation in the face of emerging risks.

A ‘wake-up call’

China’s AI regulation may have been a wake-up call to the US. In April, influential lawmaker Chuck Schumer said[13] his country should “not permit China to lead on innovation or write the rules of the road” for AI.

On October 30 2023, the White House issued an executive order[14] on safe, secure and trustworthy AI. The order attempts to address broader issues of equity and civil rights, while also concentrating on specific applications of technology.

Read more: The US just issued the world’s strongest action yet on regulating AI. Here’s what to expect[15]

Alongside the dominant actors, countries with growing IT sectors including Japan, Taiwan, Brazil, Italy, Sri Lanka and India have also sought to implement defensive strategies to mitigate potential risks associated with the pervasive integration of AI.

AI regulations worldwide reflect a race against foreign influence. At the geopolitical scale, the US competes with China economically and militarily. The EU emphasises establishing its own digital sovereignty[16] and striving for independence from the US.

On a domestic level, these regulations can be seen as favouring large incumbent tech companies over emerging challengers. This is because it is often expensive to comply with legislation, requiring resources smaller companies may lack.

Alphabet, Meta and Tesla have supported calls for AI regulation[17]. At the same time, the Alphabet-owned Google[18] has joined Amazon in investing billions in OpenAI’s competitor Anthropic, and Tesla boss Elon Musk’s xAI has just launched its first product, a chatbot called Grok[19].

Shared vision

The EU’s AI Act, China’s AI regulations, and the White House executive order show shared interests between the nations involved. Together, they set the stage for last week’s “Bletchley declaration[20]”, in which 28 countries including the US, UK, China, Australia and several EU members pledged cooperation on AI safety.

Countries or regions see AI as a contributor to their economic development, national security, and international leadership. Despite the recognised risks, all jurisdictions are trying to support AI development and innovation.

Read more: News coverage of artificial intelligence reflects business and government hype — not critical voices[21]

By one estimate, worldwide spending on AI-centric systems may pass US$300 billion[22] by 2026. By 2032, according to a Bloomberg report, the generative AI market alone may be worth US$1.3 trillion[23].

Numbers like these, and talk of perceived benefits from tech companies, national governments, and consultancy firms, tend to dominate media coverage of AI. Critical voices are often sidelined[24].

Competing interests

Beyond economic benefits, countries also look to AI systems for defence, cybersecurity, and military applications.

At the UK’s AI safety summit, international tensions were apparent[25]. While China agreed with the Bletchley declaration made on the summit’s first day, it was excluded from public events on the second day.

One point of disagreement is China’s social credit system[26], which operates with little transparency. The EU’s AI Act regards social scoring systems of this sort as creating unacceptable risk.

The US perceives China’s investments in AI as a threat to US national and economic security[27], particularly in terms of cyberattacks and disinformation campaigns.

These tensions are likely to hinder global collaboration on binding AI regulations.

The limitations of current rules

Existing AI regulations also have significant limitations. For instance, there is no clear, common set of definitions of different kinds of AI technology in current regulations across jurisdictions.

Current legal definitions of AI tend to be very broad, raising concerns about how practical they are. Such broad scope means regulations cover a wide range of systems that present different risks and may deserve different treatment. Many regulations also lack clear definitions of risk, safety, transparency, fairness and non-discrimination, making precise legal compliance difficult to ensure.

Read more: Do we need a new law for AI? Sure – but first we could try enforcing the laws we already have[28]

We are also seeing local jurisdictions launch their own regulations within the national frameworks. These may address specific concerns and help to balance AI regulation and development.

California[29] has introduced two bills to regulate AI in employment. Shanghai[30] has proposed a system for grading, management and supervision of AI development at the municipal level.

However, defining AI technologies narrowly, as China has done, poses a risk that companies will find ways to work around the rules.

Moving forward

Sets of “best practices” for AI governance are emerging from local and national jurisdictions and transnational organisations, with oversight from groups such as the UN’s AI advisory board[31] and the US’s National Institute of Standards and Technology. The existing AI governance frameworks from the UK, the US, the EU, and – to a limited extent – China are likely to be seen as guidance.

Global collaboration will be underpinned by ethical consensus and, more importantly, by national and geopolitical interests.

References

  1. ^ regulations targeting AI technologies (unicri.it)
  2. ^ lagging behind (www.theguardian.com)
  3. ^ 1,600 AI policies and strategies (oecd.ai)
  4. ^ global landscape (iapp.org)
  5. ^ AI Act (www.europarl.europa.eu)
  6. ^ pending (www.theverge.com)
  7. ^ first movers (36kr.com)
  8. ^ Calls to regulate AI are growing louder. But how exactly do you regulate a technology like this? (theconversation.com)
  9. ^ algorithmic recommendations (www.gov.cn)
  10. ^ deep synthesis (www.gov.cn)
  11. ^ generative AI (www.cac.gov.cn)
  12. ^ bureaucratic know-how (carnegieendowment.org)
  13. ^ Chuck Shumer said (www.reuters.com)
  14. ^ executive order (www.whitehouse.gov)
  15. ^ The US just issued the world’s strongest action yet on regulating AI. Here’s what to expect (theconversation.com)
  16. ^ digital sovereignty (www.weforum.org)
  17. ^ AI regulation (www.reuters.com)
  18. ^ Google (techcrunch.com)
  19. ^ a chatbot called Grok (mashable.com)
  20. ^ Bletchley declaration (www.gov.uk)
  21. ^ News coverage of artificial intelligence reflects business and government hype — not critical voices (theconversation.com)
  22. ^ pass US$300 billion (www.idc.com)
  23. ^ may be worth US$1.3 trillion (www.bloomberg.com)
  24. ^ often sidelined (theconversation.com)
  25. ^ international tensions were apparent (www.reuters.com)
  26. ^ social credit system (www.technologyreview.com)
  27. ^ a threat to US national and economic security (www.nscai.gov)
  28. ^ Do we need a new law for AI? Sure – but first we could try enforcing the laws we already have (theconversation.com)
  29. ^ California (www.shrm.org)
  30. ^ Shanghai (english.scio.gov.cn)
  31. ^ UN’s AI advisory board (press.un.org)

Read more https://theconversation.com/who-will-write-the-rules-for-ai-how-nations-are-racing-to-regulate-artificial-intelligence-216900
