How will AI impact the future of education?

  • Future leaders are going to need to understand AI in order to create and maintain policies and use it correctly. This is where we look to education leaders.
  • AACSB’s 2022 business trends report stated that upcoming technological advancements, such as VR, would make waves in universities and help them to become more diverse.
  • For the schools that get it right, AI can be a valuable learning tool, professors reveal.

The fate of the world has been up for debate ever since the launch of ChatGPT almost two years ago. It seems that every time I open a news or social media app there is another article about the doom and gloom the bot will bring. And while I may laugh, the fact is that ChatGPT could change education in many ways, potentially ways we can't yet imagine, considering how fast it is progressing.

The more Artificial Intelligence (AI) develops, the more questions people have. Is it going to affect jobs? Which jobs, and how? A report from Goldman Sachs claimed that AI could expose the equivalent of an astounding 300 million full-time jobs to automation.

In terms of which jobs are most likely to be affected, that answer seems to change a lot. But haven’t we already seen jobs replaced by technology? How often do you try to contact customer service and find yourself calling your details out to a bot on the other end of the phone? Or, worse, when you have to ‘chat’ to a bot via a text-like system.

The many news articles about ChatGPT all had one thing in common: doom and gloom. Personally, these stories remind me of two things: the Charlie and the Chocolate Factory scene where everyone was dismissed from the toothpaste factory because robots could now do their jobs, and the Black Mirror Christmas special where a woman becomes an Alexa-like object.

Neither option sounds too attractive!

OpenAI’s chatbot, ChatGPT, can provide more than just a simple answer to a question; it gives the impression that it is really thinking, creating and potentially even empathizing. The site has grown rapidly, with 2.5 billion visits in the last three months alone. It’s worth remembering that ChatGPT hasn’t even reached its second birthday yet, so we really don’t know how big it is going to be.

Since ChatGPT’s launch, companies such as Microsoft and Google have released their own versions of the bot in order to keep up with their new competition. It is safe to say that since the arrival of ChatGPT, the internet is no longer the same.

Addressing AI safety concerns

With the advancement of AI capabilities, one of the biggest concerns has been the dangers the technology could present.

AI can be used to deliberately deceive. Almost every day there is a different scam in the news, such as the recent wave of AI-generated scam calls. More worryingly, the “deepfake” images that have been going viral present a whole new threat. Some have been used as propaganda, such as the AI-generated images of Kamala Harris and Taylor Swift shared by Donald Trump. Others are being used in even more sinister ways, such as deepfake porn. There is no doubt that AI and ChatGPT bring a vast range of dangerous and negative developments as well as positive ones.

The CEO of OpenAI, Sam Altman, openly admits that regulation of AI is vital. He spoke to the US Congress in May last year about his creation and the ways he believes it should be used and handled. He advocated for an independent agency to oversee all AI models before they are released, and wants the most powerful models to adhere to licensing, testing and safety requirements.

Every country will do things differently. Businesses will need to work closely with governments, and will therefore look to business schools and their graduates for the expertise to do so.

Since Altman’s pleas, and the ever-growing evidence to support his worries, the European Union has created its first-ever legal framework on AI, which addresses the risks of the technology and positions Europe to play a leading role globally. But what does this really mean?

The law, known as the AI Act, places restrictions on the technology’s riskiest uses. It curtails the use of facial recognition software and requires the creators of AI systems such as ChatGPT to disclose more about the data used to build their programs. The final version of the law was published in the EU Official Journal in July of this year, putting the EU further ahead in the process of regulating AI than its global counterparts.

Policymakers from all around the world are now racing to control the evolving technology, one that is growing more rapidly than any of its predecessors. In the US, the White House has released policy ideas that include rules for testing AI systems before they are made publicly available. Governments near and far are also hoping to take some control over how the makers of AI use data, and how privacy laws will be enforced.

Part of the problem when addressing safety and responsibility in the creation and use of AI is that the technology changes so rapidly that regulation can become outdated almost as soon as it is written.

Naturally, the regulations are facing scrutiny from some industries, with one tech group claiming that if they are too broad, they could stifle further innovation in AI. And whilst Altman has been asking for regulation, he also believes the EU’s rules might be difficult to comply with.

Ethics in AI is a complex issue, and Professor Zorina Alliata from the Open Institute of Technology (OPIT) makes the point that ethics in AI is still developing. “Generative AI will create new content based on chunks of text it finds in its training data, without an understanding of what the content means. It could repeat something it learned from one Reddit user ten years ago that could be factually incorrect. Is that piece of information unbiased and fair?”

Regulation in AI starts from the bottom, and from the people responsible for it, says Alliata. “If you look around the table and see the same type of guys who went to the same schools, you will get exactly one original idea from them. If you add different genders, different ages, different tenures, different backgrounds, then you will get ten innovative ideas for your product, and you will have addressed biases you’ve never even thought of.”

In terms of safety and responsibility when using AI, there is a long way to go and many more debates to be had. For now, we must look to education to help us understand it. Future leaders are going to need to understand AI in order to create and maintain policies, and to use it in the right areas. This is where we look to education leaders. They are shaping the minds of the next generation of leaders: they should use AI, teach it and engage with it. The sooner it is utilized, the better. That way, regulation can be based on fact and experience rather than ‘what ifs’.

How can AI affect education?

Higher education institutions are doing all that they can to learn about AI in order to prepare their students for the world of business. Similarly, companies are looking to business schools for answers. Right now, it is all about learning and adapting; after all, in the grand scheme of things, ChatGPT is still in its early phases.

In fact, AACSB’s 2022 business trends report stated that upcoming technological advancements, such as VR, would make waves in universities and help them to become more diverse. The benefits of introducing AI into the classroom seemed huge. Since then, conversations around AI tools have become less about what AI can do for us and more about what it can do to us. AI tools have surpassed everyone’s expectations, with capabilities ranging from defeating you at chess to acing the GMAT. So, where do education leaders stand on the topic?

To avoid disaster, schools should look at how they are deploying human faculty. Do they have the right knowledge? Can they adapt, and how quickly? Getting ahead of AI will be the most effective antidote to the media’s fearmongering.

For Sanjay Sarma, the CEO, President and Dean of Asia School of Business, the best option is to take control now. He states that AI and ChatGPT have the potential to “make individuals superhuman, but much like the domestication of the horse, it is all about those that learn to ride.” Therefore, if we adapt now and learn how to use these tools within education, it does not have to be scary. He believes that this is the best time for an educational revolution. “In a post-GPT world, the system is capable of doing well. It is human instinct that remains. So, classroom learning needs to be revolutionized.”

His sentiments were echoed by Professor Reza Etemad-Sajadi from EHL Hospitality Business School, who said “It would be a mistake to see it as a threat and, regardless, we have no choice. We will have to adapt to this kind of technology in the future.” By fearing AI, we are only delaying the inevitable. It is here to stay, and the sooner we adapt and learn how to use it the better the outcome will be.

For Phanish Puranam, Professor of Strategy and the Roland Berger Chaired Professor of Strategy and Organization Design at INSEAD, the future of AI in business schools is bright. “As I tell my students, they should worry less about ChatGPT taking their jobs (at least today), and more about somebody who knows how to use it taking their jobs!” However, he warns that if we don’t adapt and change with AI and ChatGPT, we will end up “losing human-centricity in organizations of the future.”

The more we adapt to using AI in education, the more we can prepare to use it in the future, which will forestall our worst fear of AI taking over. As Puranam puts it: “I am a lot more optimistic for business schools than for societal impact at large.”

Can AI be a positive teaching tool?

As we know, AI bots will answer almost any question you ask them. In doing so, they provide a learning tool for those who want to use them to explain things they struggle to understand. Russell Miller, Director of Learning Solutions and Innovation at Imperial College Business School Executive Education, believes that students can utilize this for good.

He says: “Brainstorming/ideation may be drastically improved by incorporating wild and unorthodox ideas from generative AI that might inspire people to come up with interesting new products and services.”

Creativity in education, particularly business education, is the way forward. If we can make use of AI to help us become more creative, society will improve. The world of business education needs to keep up with technological advancements in order to progress, and to prepare students for the world they are going to enter.

For now, humans must remain in the driving seat. We can’t allow AI to lead us; we must lead it. However, we need to remain capable of thinking critically. According to Professor Francisco Veloso, Dean of INSEAD, “The already important skill of critical thinking will become much more salient, a must in terms of education.” Like his peers, Professor Veloso can appreciate all of the good that AI will bring, but he is right to insist that critical thinking be exercised alongside it.

As we approach ChatGPT’s second birthday, it is worth looking back at how far we have come in that time. The conversations now have a more positive spin, and are full of wonder. We are here now; we can’t go back. There is no choice but to adapt to AI, because it can’t be undone.

While it took everyone by surprise, maybe we should be asking ‘what’s next?’ For the schools that get it right, the sky truly is the limit. With the right attitude, the right training, and the right research, this very big change could perhaps be the making of us.

By Georgina Tierney

Originally published by BlueSky Thinking.