AI might be seemingly everywhere, but there are still plenty of things it can't do – for now

  • Written by Marcel Scharth, Lecturer in Business Analytics, University of Sydney

These days, we don’t have to wait long until the next breakthrough in artificial intelligence (AI) impresses everyone with capabilities that previously belonged only in science fiction.

In 2022, AI art generation tools[1] such as OpenAI’s DALL-E 2, Google’s Imagen, and Stable Diffusion took the internet by storm, with users generating high-quality images from text descriptions.

Unlike previous developments, these text-to-image tools quickly found their way from research labs to mainstream culture[2], leading to viral phenomena such as the “Magic Avatar” feature in the Lensa AI app, which creates stylised images of its users.

Read more: No, the Lensa AI app technically isn’t stealing artists' work – but it will majorly shake up the art world[3]

In December, a chatbot called ChatGPT stunned users with its writing skills[4], leading to predictions the technology will soon be able to pass professional exams[5]. ChatGPT reportedly gained one million users in less than a week. Some school officials have already banned it[6] for fear students would use it to write essays. Microsoft is reportedly[7] planning to incorporate ChatGPT into its Bing web search and Office products later this year.

What does the unrelenting progress in AI mean for the near future? And is AI likely to threaten certain jobs in the coming years?

Despite these impressive recent AI achievements, we need to recognise there are still significant limitations to what AI systems can do.

AI excels at pattern recognition

Recent advances in AI rely predominantly on machine learning algorithms that discern complex patterns and relationships from vast amounts of data. The patterns learned during this training are then used for tasks such as prediction and data generation.
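
As a concrete (if simplified) illustration of this pattern-recognition recipe, the sketch below trains a standard classifier on a small dataset of handwritten digits. The scikit-learn library and the digits dataset are illustrative choices of mine, not tools the article relies on.

    # A minimal sketch of data-driven pattern recognition, using scikit-learn
    # (the library and dataset are illustrative choices, not from the article).
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Images of handwritten digits and their labels: the training data.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The model learns statistical patterns linking pixel values to labels.
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)

    # Those learned patterns are then used for prediction on unseen images.
    print("Accuracy on unseen digits:", model.score(X_test, y_test))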

The development of current AI technology relies on optimising predictive power, even if the goal is to generate new output.

Read more: Not everything we call AI is actually 'artificial intelligence'. Here's what you need to know[8]

For example, GPT-3, the language model behind ChatGPT, was trained to predict what follows a piece of text. GPT-3 then leverages this predictive ability to continue an input text given by the user.
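
A toy sketch can make this concrete: the snippet below "trains" a word-level bigram model by counting which word follows which, then uses that predictive ability to continue a prompt. Real systems such as GPT-3 use large neural networks over subword tokens, but the underlying objective of predicting what comes next is the same idea.

    # Toy illustration of next-token prediction: a word-level bigram model.
    # Real models such as GPT-3 use neural networks over subword tokens.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # "Training": record which words tend to follow each word.
    followers = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        followers[current_word].append(next_word)

    # "Generation": repeatedly predict a likely next word to continue a prompt.
    def continue_text(prompt_word, length=6):
        words = [prompt_word]
        for _ in range(length):
            options = followers.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))
        return " ".join(words)

    print(continue_text("the"))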

“Generative AIs” such as ChatGPT and DALL-E 2 have sparked much debate[9] about whether AI can be genuinely creative and even rival humans in this regard. However, human creativity draws not only on past data but also on experimentation and the full range of human experience.

Cause and effect

Many important problems require predicting the effects of our actions in complex, uncertain, and constantly changing environments. By doing this, we can choose the sequence of actions most likely to achieve our goals.

But algorithms cannot learn[10] about causes and effects from data alone. Purely data-driven machine learning can only find correlations.
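
A small simulation (with made-up variables, purely for illustration) shows why correlation is not enough: when a hidden factor drives two quantities, a data-driven analysis finds a strong association between them even though neither causes the other.

    # Two variables that share a common cause but do not affect each other.
    # A purely data-driven analysis still finds a strong correlation.
    import numpy as np

    rng = np.random.default_rng(0)
    hot_weather = rng.normal(size=10_000)  # the hidden common cause
    ice_cream_sales = hot_weather + 0.5 * rng.normal(size=10_000)
    drownings = hot_weather + 0.5 * rng.normal(size=10_000)

    # The correlation is strong (about 0.8), but banning ice cream
    # would do nothing to prevent drownings.
    print(np.corrcoef(ice_cream_sales, drownings)[0, 1])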

To understand why this is a problem for AI, we can contrast the problems of diagnosing a medical condition versus choosing a treatment.

Machine learning models are often helpful for finding abnormalities in medical images – this is a pattern recognition problem. We don’t need to worry about causality because abnormalities are already either present or not.

But choosing the best treatment for a diagnosis is a fundamentally different problem. Here, the goal is to influence the outcome, not just recognise a pattern. To determine the effectiveness of a treatment, medical researchers run randomised controlled trials. This way, they can control for any other factors that might affect the outcome besides the treatment itself.
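
A simulated comparison (illustrative numbers only, not real medical data) shows the difference: when sicker patients are more likely to be treated, naively comparing the outcomes of treated and untreated patients gives a badly biased answer, while random assignment recovers the true benefit.

    # Simulated data: the treatment genuinely improves outcomes by 2 units,
    # but sicker patients are more likely to be treated (confounding).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    severity = rng.normal(size=n)

    # Observational setting: treatment assignment depends on severity.
    treated_obs = rng.random(n) < 1 / (1 + np.exp(-2 * severity))
    outcome_obs = 2 * treated_obs - 3 * severity + rng.normal(size=n)
    naive_effect = outcome_obs[treated_obs].mean() - outcome_obs[~treated_obs].mean()

    # Randomised trial: treatment assigned by a coin flip, breaking the link
    # between severity and treatment.
    treated_rct = rng.random(n) < 0.5
    outcome_rct = 2 * treated_rct - 3 * severity + rng.normal(size=n)
    rct_effect = outcome_rct[treated_rct].mean() - outcome_rct[~treated_rct].mean()

    print("Naive observational estimate:", round(naive_effect, 2))  # badly biased
    print("Randomised trial estimate:  ", round(rct_effect, 2))     # close to 2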

The confusion between these two types of problems sometimes leads to suboptimal applications[11] of machine learning in organisations.

While the success of recent work in AI demonstrates the value of data-driven models, many of the problems we would want computers to solve require an understanding of causation.

Current AI systems lack this ability, apart from specialised applications such as board games[12].

[Image: A Stable Diffusion artwork generated by the prompt ‘The limits of artificial intelligence’. Current AI technology tends to perform poorly in unexpected situations. In this case, the prompt is not well represented in Stable Diffusion’s training data. Image credit: Stable Diffusion]

Common sense reasoning

Language models such as GPT-3 and ChatGPT can successfully solve some tasks requiring common-sense reasoning.

However, the following interaction with ChatGPT, adapted from an experiment by Gary Marcus[13], suggests it is not entirely reliable in this respect.

Prompt: I’m eight years old. When I was born, my biological mother was in Barcelona and my father was in Tokyo. Where was I born? Think through this step by step.

ChatGPT: It is not mentioned where you were born. All we know is that your biological mother was in Barcelona and your father was in Tokyo at the time of your birth.
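
Readers who want to probe a model in the same way could pose the prompt programmatically. The sketch below is only an assumption-laden illustration: it assumes the openai Python package and an API key, and since ChatGPT was, at the time, only available through its web interface, it queries a GPT-3 model (text-davinci-003) instead. Responses will vary from run to run.

    # Illustrative sketch only: assumes the openai Python package and an API key.
    # ChatGPT was web-only at the time, so this queries a GPT-3 model instead.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    prompt = (
        "I'm eight years old. When I was born, my biological mother was in "
        "Barcelona and my father was in Tokyo. Where was I born? "
        "Think through this step by step."
    )

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
        temperature=0,
    )
    print(response["choices"][0]["text"])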

Whether AI systems such as ChatGPT can achieve common sense is a subject of lively debate among experts.

Sceptics such as Marcus point out we cannot trust language models to robustly display common sense since they neither have it built into them nor are directly optimised for it. Optimists argue that while current systems are imperfect, common sense may spontaneously emerge[14] in sufficiently advanced language models.

Human values

Whenever groundbreaking AI systems are released, news articles and social media posts documenting racist[15], sexist[16], and other types of biased[17] and harmful behaviours[18] inevitably follow.

This flaw is inherent to current AI systems, which are bound to be a reflection of their data. Human values such as truth and fairness are not fundamentally built into the algorithms – that’s something researchers don’t yet know how to do.

While researchers are learning the lessons[19] from past episodes and making progress[20] in addressing bias, the field of AI still has a long way to go[21] to robustly align AI systems with human values and preferences.

References

  1. ^ AI art generation tools (theconversation.com)
  2. ^ mainstream culture (www.vox.com)
  3. ^ No, the Lensa AI app technically isn’t stealing artists' work – but it will majorly shake up the art world (theconversation.com)
  4. ^ writing skills (theconversation.com)
  5. ^ pass professional exams (papers.ssrn.com)
  6. ^ banned it (www.abc.net.au)
  7. ^ reportedly (www.theguardian.com)
  8. ^ Not everything we call AI is actually 'artificial intelligence'. Here's what you need to know (theconversation.com)
  9. ^ much debate (www.theguardian.com)
  10. ^ algorithms cannot learn (www.theatlantic.com)
  11. ^ suboptimal applications (journals.sagepub.com)
  12. ^ board games (theconversation.com)
  13. ^ Gary Marcus (cs.nyu.edu)
  14. ^ spontaneously emerge (yaofu.notion.site)
  15. ^ racist (theintercept.com)
  16. ^ sexist (theconversation.com)
  17. ^ biased (www.polygon.com)
  18. ^ harmful behaviours (medium.com)
  19. ^ learning the lessons (openai.com)
  20. ^ making progress (openai.com)
  21. ^ long way to go (humancompatible.ai)

Read more https://theconversation.com/ai-might-be-seemingly-everywhere-but-there-are-still-plenty-of-things-it-cant-do-for-now-197050
