The recent release of generative Artificial Intelligence (AI) tools, like ChatGPT and Bard, has unleashed a wave of speculation on the future direction of these new technologies and how they will affect the workforce. With the stakes high but with large uncertainties as to how these technologies will evolve, there is a broad spectrum of viewpoints on their impact on future jobs.
Some commentators see AI as replacing large swathes (if not all) of human beings’ work activities as machine predictions become more accurate, productive and cheap. Others see the fundamental algorithmic nature of the machine-based learning that lies behind AI as a serious limitation to its effectiveness in decision-making, since it is devoid of the human insight, compassion and emotions that are often critical parts of optimal decision inputs and processes.
From our perspective neither of these views is likely to prevail. At their core, AI technologies are just tools, providing predictions based on pattern recognition within the historical data fed into them. In the proper context, these tools have the potential to be very powerful, saving time across a myriad of current business and other processes.
Already advancements in AI have empowered systems to formulate preliminary project blueprints, diagnose potential cancers through prior X-ray analysis, and execute strategic gameplay based on predetermined rules. Looking ahead, the horizon of possibilities is expansive, encompassing the development of personalised learning in education, predictive mental health interventions, smart urban infrastructure, among many other uses.
As they have for more than a century, machines and humans will continue to work together, with the relationship evolving over time as they each undertake whatever they are best capable of doing.
However, these technologies come with constraints. Their predictions can be misleading if they are based on biased or partial training data, and they can be programmed to spread deliberately deceptive information. Moreover, as with all machines, AI has no consciousness of the answers it generates (and never will), including how an appropriate response can vary by context.
As a consequence, many of us see humans remaining as a very significant part of the productive process when AI technologies are used, albeit with new jobs created as AI undertakes some of the roles currently done within existing jobs.
One can envision an approximate division of labour in which machines initially do the routine, time-consuming tasks based on past data and established processes – making a first pass at synthesising past information, devising lesson plans, marking student tests, drafting computer code, tailoring individual medical plans and so forth – with the future ‘knowledge workers’ then overseeing this machine-based output, ready to amend and correct it as appropriate to the context.
AI Oversight: the new job of the future
This new oversight role for humans will be critical. In practice, we are already seeing a low tolerance for mistakes in machine output, even when machines’ error rates are far lower than those of humans. Witness the attention paid to ChatGPT dialogues that go off track, the (few) crashes involving autonomous vehicles, and the focus on AI-fabricated output (termed ‘hallucinations’).
Companies that treat AI-generated predictions and output as authoritative, taking them at face value, will be very vulnerable to negative sentiment from their customers. For this reason, a new employment category – ‘AI Oversight’ – could be among the fastest growing in the coming decades. The emergence of an AI Oversight role in the workplace is poised to create a multitude of employment opportunities, focused on monitoring, guiding and ensuring that AI systems function as intended while adhering to ethical, legal and societal standards.
We are still a long way from adequately preparing for this future human role. We will need new training in schools, universities and TAFE courses so students can become familiar with using emerging AI technologies. There is a need to uplift maths skills so workers can better understand the strengths and challenges of the machines’ probability-based predictions. And a need to strengthen the capabilities for which humans will remain the most adept within decision-making activities – critical thinking and social skills, teamwork, communications and so forth. There is a sense that we are only at the beginning of a long journey to consider how these human skills and capabilities will successfully work with – rather than against – the new machines.
There is also a broader challenge in allocating future work between the AI-enabled machines and humans. Many of the areas where we will want humans to remain firmly in the driver’s seat – tasks with a heavy focus on nuance, context, cost-benefit trade-offs, inclusiveness and so forth – involve judgment that decision-makers currently learn through extensive prior experience in dealing with precisely the aspects that are now being passed to the machines. While it is very early days, society has not yet thought through how (and whether) humans can, in fact, make effective higher-order decisions if they have not previously grappled with the more standard operational and often repetitive aspects that underpin such decision-making.
For instance, some see cars and other vehicles of the future being driven by the new machines using GPS and other software, but with humans ready to take charge of the vehicle the moment an unusual and high-risk situation occurs. But how will a human in an AI Oversight role gain that intuitive experience without having completed thousands of hours of routine driving? Or how will future humans provide journalistic and other insights into economic and political events, debates and controversies, or create new fiction and music, if the basic summarising processes through which these skills have historically been learned become the main purview of the new machines’ output?
No one yet has answers to these tough questions. But there seems no doubt that we are on the cusp of a major change in the way the AI machines interact with humans in the workplace. The sooner we start to think through and prepare for that interaction and its complexities, the better.
David Orsmond is a Professor of Economics and the Director of Policy and Communications at the Centre for Applied Artificial Intelligence at Macquarie University.
Amin Beheshti is a Professor of Data Science and the Director of the Centre for Applied Artificial Intelligence at Macquarie University.