Artificial intelligence (AI) technologies took a big step forward last year. In November, the US-based company OpenAI released ChatGPT, an AI model trained on a massive text-based dataset in order to generate human-like responses to requests from a user.
ChatGPT’s answers go well beyond the standard Wikipedia-like general knowledge material that can be sourced from the internet. It can provide well-written responses to questions that require critical analysis, and write them in any format a user specifies, such as a 250-word discussion of who the best US president was, written as a poem.
It can also answer quantitative and multiple-choice questions across many disciplines. DALL-E, another generative AI released just a few months earlier, produces innovative and original artwork from a user’s specified themes and details in a similarly impressive manner.
For many, these releases provided for the first time a very visible insight into the extraordinary progress of AI technologies over the past decade. Drawing on large quantities of historical data and images, AI systems have in just a few years become better and better at predicting outcomes and solving problems in ways that mimic human knowledge and intelligence.
Indeed, to many watching ChatGPT produce in seconds output that would otherwise take weeks of synthesising material from the web and elsewhere, it looks as though the predictions of science fiction have arrived, such as the HAL 9000 computer from 2001: A Space Odyssey.
Many commentators are deeply worried about the potential of this technology to disrupt traditional activities, with speculation about the future of educational assessments and even recent efforts by the NSW and Queensland school systems to wind back the clock and ban ChatGPT from school browsers.
Hospitals, classrooms and IT
But lost in this immediate reaction is the huge opportunity provided by big technological advances. New technology by definition disrupts current ways of doing things, but in doing so it provides the means to increase the output of goods and services, or to reduce the cost of producing them and hence their price.
Indeed, inventions such as the computer, electricity and the steam engine have historically been the central reason for the massive advance in living standards dating back to the Industrial Revolution more than 200 years ago.
At the Centre for Applied AI at Macquarie University, we have been working with companies and government agencies across sectors such as banking, healthcare and education to explore how these generative AI technologies can streamline current business practices.
The opportunity for ChatGPT to uncover innovative and efficient ways of doing things seems huge. ChatGPT could, for instance, be used to further automate processes in banking, with a virtual assistant providing individualised answers to customer inquiries rather than the generic text used at present. In healthcare, it could help with routine tasks such as drafting patient care plans and discharge summaries tailored to individual needs.
In education, ChatGPT can be used to create draft lesson plans, quizzes, exams and curriculum documents that teachers can then improve upon for their audience. It could also provide individualised feedback to students on their performance and analyse student data to identify areas where they need further support.
The opportunities in information technology (IT) are also extensive, from assisting software engineering teams with code generation and debugging to converting text-based customer emails into formal documents.
Human oversight crucial
This brings us to the real issue to address in managing AI: collectively working out how to best use these technologies to meet our (human) goals. AI machines are essentially just prediction technologies based on prior information. They require human oversight, both to assess the response quality and to decide how to use that input in our decision-making and choose the best next steps.
While ChatGPT and other AI technologies are exceptionally powerful in some areas, they come up short in others. AI technologies do not predict well in situations where there is little historical data to draw on. They make predictions devoid of context, such as in situations where empathy and inclusiveness need to be key parts of a decision. And their predictions reflect the data they were trained on, including the historical biases and inaccuracies humans embedded in that data, even if social norms have since moved on. To a large extent, AI advances are revealing just what is unique to human capabilities and our societies today that cannot be well replicated by a machine.
There are of course uncertainties about where ChatGPT and generative AI more generally will go from here as they improve over time, and what human capabilities they start to replicate. No one knows the exact form of the future and how the current ways of doing things will be affected by these technologies.
It will of course be essential to ensure that AI advances in a responsible, ethical and explainable way. But the broad canvas from history is clear: innovation and ingenuity are the keys to driving a continual expansion in our living standards.
As the capabilities of these technologies advance, we need the courage to believe that we – as humans – can write our own future success in ways that foster the goals of our society, and act accordingly.
David Orsmond is a Professor of Economics and the Director of Policy and Communications at the Centre for Applied Artificial Intelligence at Macquarie University.
Amin Beheshti is a Professor of Data Science and the Director of the Centre for Applied Artificial Intelligence at Macquarie University.
Babak Abedin is the Head of the Department of Actuarial Studies & Business Analytics and Program Leader of Responsible AI at the Centre for Applied Artificial Intelligence at Macquarie University.
The Centre for Applied Artificial Intelligence at Macquarie University helps organisations put Artificial Intelligence and Data Science at the centre of their capabilities, redefining how they create, capture and share value.