
Major survey finds most people use AI regularly at work – but almost half admit to doing so inappropriately

  • Written by Nicole Gillespie, Professor of Management; Chair in Trust, Melbourne Business School

Have you ever used ChatGPT to draft a work email? Perhaps to summarise a report, research a topic or analyse data in a spreadsheet? If so, you certainly aren’t alone.

Artificial intelligence (AI) tools are rapidly transforming the world of work. Released today, our global study[1] of more than 32,000 workers from 47 countries shows that 58% of employees intentionally use AI at work – with a third using it weekly or daily.

Most employees who use AI say they've gained real productivity and performance benefits from adopting these tools.

However, a concerning number are using AI in highly risky ways – such as uploading sensitive information into public tools, relying on AI answers without checking them, and hiding their use of it.

There’s an urgent need for policies, training and governance on responsible use of AI, to ensure it enhances – not undermines – how work is done.

Our research

We surveyed 32,352 employees in 47 countries, covering all global geographical regions[2] and occupational groups[3].

Most employees report performance benefits from AI adoption at work. These include improvements in:

  • efficiency (67%)
  • information access (61%)
  • innovation (59%)
  • work quality (58%).

These findings echo prior research demonstrating AI can drive productivity gains for employees[4] and organisations[5].

We found general-purpose generative AI tools, such as ChatGPT, are by far the most widely used. About 70% of employees rely on free, public tools, rather than AI solutions provided by their employer (42%).

However, almost half of the employees we surveyed who use AI say they have done so in ways that could be considered inappropriate (47%), and even more (63%) have seen other employees using AI inappropriately.

Most survey respondents who use AI use free, public tools, such as ChatGPT. Tada Images/Shutterstock[6]

Sensitive information

One key concern surrounding AI tools in the workplace is the handling of sensitive company information – such as financial, sales or customer information.

Nearly half (48%) of employees have uploaded sensitive company or customer information into public generative AI tools, and 44% admit to having used AI at work in ways that go against organisational policies.

This aligns with other research[7] showing 27% of content put into AI tools by employees is sensitive.

Check your answer

We found complacent use of AI is also widespread, with 66% of respondents saying they have relied on AI output without evaluating it. It is unsurprising then that a majority (56%) have made mistakes in their work due to AI.

Younger employees (aged 18-34 years) are more likely to engage in inappropriate and complacent use than older employees (aged 35 or older).

This carries serious risks for organisations and employees. Such mistakes have already led to well-documented cases of financial loss[8], reputational damage[9] and privacy breaches[10].

About a third (35%) of employees say the use of AI tools in their workplace has increased privacy and compliance risks.

‘Shadow’ AI use

When employees aren’t transparent about how they use AI, the risks become even more challenging to manage.

We found most employees have avoided revealing when they use AI (61%), presented AI-generated content as their own (55%), and used AI tools without knowing whether this is allowed (66%).

This invisible or “shadow AI[11]” use doesn’t just exacerbate risks – it also severely hampers an organisation’s ability to detect, manage and mitigate risks.

A lack of training, guidance and governance appears to be fuelling this complacent use. Despite the prevalence of these tools, only a third of employees (34%) say their organisation has a policy guiding the use of generative AI tools, while 6% say their organisation bans them.

Pressure to adopt AI may also fuel complacent use, with half of employees fearing they will be left behind if they do not.

Almost half of respondents who use AI said they had uploaded company financial, sales or customer information into public AI tools. Andrey_Popov/Shutterstock[12]

Better literacy and oversight

Collectively, our findings reveal a significant gap in the governance of AI tools and an urgent need for organisations to guide and manage how employees use them in their everyday work. Addressing this will require a proactive and deliberate approach.

Investing in responsible AI training and developing employees’ AI literacy[13] is key. Our modelling shows self-reported AI literacy – including training, knowledge, and efficacy – predicts not only whether employees adopt AI tools but also whether they critically engage with them.

This includes how well they verify the tools' output and consider their limitations before making decisions.

Training can improve how people engage with AI tools and critically evaluate their output. PeopleImages.com - Yuri A/Shutterstock[14]

We found AI literacy is also associated with greater trust in AI use at work and more performance benefits from its use.

Despite this, fewer than half of employees (47%) report having received AI training or related education.

Organisations also need to put in place clear policies, guidelines and guardrails, systems of accountability and oversight, and data privacy and security measures.

There are many resources to help organisations develop robust AI governance systems and support responsible AI use[15].

The right culture

On top of this, it's crucial to create a psychologically safe[16] work environment, where employees feel comfortable sharing how and when they are using AI tools.

The benefits of such a culture go beyond better oversight and risk management. It is also central to developing a culture of shared learning and experimentation[17] that supports responsible diffusion of AI use and innovation.

AI has the potential to improve the way we work. But it takes an AI-literate workforce, robust governance and clear guidance, and a culture that supports safe, transparent and accountable use. Without these elements, AI becomes just another unmanaged liability.

References

  1. ^ global study (mbs.edu)
  2. ^ geographical regions (unstats.un.org)
  3. ^ occupational groups (ilostat.ilo.org)
  4. ^ for employees (academic.oup.com)
  5. ^ organisations (news.st-andrews.ac.uk)
  6. ^ Tada Images/Shutterstock (www.shutterstock.com)
  7. ^ research (www.cyberhaven.com)
  8. ^ financial loss (www.9news.com.au)
  9. ^ reputational damage (edition.cnn.com)
  10. ^ privacy breaches (ovic.vic.gov.au)
  11. ^ shadow AI (www.ibm.com)
  12. ^ Andrey_Popov/Shutterstock (www.shutterstock.com)
  13. ^ AI literacy (www.digitaleducationcouncil.com)
  14. ^ PeopleImages.com - Yuri A/Shutterstock (www.shutterstock.com)
  15. ^ responsible AI use (oecd.ai)
  16. ^ psychologically safe (journals.sagepub.com)
  17. ^ experimentation (doi.org)

Read more https://theconversation.com/major-survey-finds-most-people-use-ai-regularly-at-work-but-almost-half-admit-to-doing-so-inappropriately-255405
