And with new educational design comes greater opportunity for a new, more diverse generation of aspiring business leaders, levelling the playing field. “ChatGPT is enabling people to interact with machines in a more natural and conversational manner. This is helping them to access information which was earlier limited to only skilled information seekers,” says Gaurav Gupta, Associate Professor at NEOMA Business School.
“For example, it has allowed people with limited language and browsing skills to seek and find exotic information.” Not only is a wider array of information available, but ever-smarter AI is enabling students to begin querying what they find. “In classrooms, it has started to become an indispensable tool for students, allowing them to often challenge as well as cooperatively learn from this text companion along with the teacher,” he continues.
With such an educational revolution comes a sense of responsibility. “By embracing ChatGPT and other AI models as a valuable tool for learning and innovation, business schools have an opportunity to ensure students stay at the forefront of the rapidly evolving technological landscape,” affirms Trinity’s Dr Na Fu. Etemad-sajadi at EHL agrees.
“Curricula and programs must be ready to adapt to the ways in which various professions evolve over time because AI will transform the skills that recruiters will be looking for in the future,” he says. Unsurprisingly, Fu and Etemad-sajadi are not alone in their way of thinking. “Students must continue to develop their expected competencies,” says Rodríguez Engelmann. “Once they enter the workforce, they need to be valued because of their new competencies to solve complex problems, not just for what they know.”
And responsibility extends beyond a duty to keep skills development up-to-date.
“As an AI model, ChatGPT is constantly evolving and its potential impact on society is still unknown,” says Fu. “It is important to use these tools ethically and responsibly, and to recognize that it is ultimately up to humans to determine how AI is used. By using AI tools in responsible and innovative ways, we can maximize its positive impact and mitigate its potential negative consequences.”
The way forward, according to Phanish Puranam, Professor of Strategy and the Roland Berger Chaired Professor of Strategy and Organisation Design at INSEAD, lies in effective collaboration. He notes the potential for new AI to boost our capabilities, despite the many ways in which it is beginning to encroach on what were, perhaps until now, exclusively human qualities.
“Now, generative AI is demonstrating that it can pass the Turing test for creativity in practice as well. This means it will be increasingly hard to distinguish human-generated from machine-generated creative content, which, in turn, will raise the bar for human creativity,” he suggests. “The ‘hacks’ in any creative field are undoubtedly in trouble, but some users will amplify their creativity.”
Such amplification may already be underway. Thomas Gauthier, Professor of Strategy and Organisation at emlyon business school, sees a future in which business schools can encourage students to use the significant advancements in AI to better themselves. “On May 11th, 1997, Garry Kasparov became the first human World Chess Champion to lose to an AI – IBM’s Deep Blue. Did Kasparov quit playing chess? He didn’t.”
“One year later, he convened and participated in the world’s first ‘Centaur Chess’ game,” Gauthier recounts, “in which a human player and an AI teamed up. Business school students, and faculty members, may well turn into Centaurs too. Not raging against the machine, but rather collaborating with it to engage in unprecedented learning experiences.”
But, as always, there are cautions to be raised. “At the same time, developing a dependence on these technologies before building one’s own creative thinking muscles can stunt development,” warns INSEAD’s Puranam. Not only are students at risk here, but institutions too.
“In particular, the potential impact of ChatGPT being misused by students could have serious consequences for the reputation of business schools whose graduates bring less value to a hiring organisation than is commensurate with the standing of the school,” warns Russell Miller, Director of Learning Solutions at Imperial College Business School.
Gupta agrees: “On the other hand, the aim of such chatbots to provide the ‘one right answer for each question’ will curtail divisive and outlying ideas and beliefs. Its overt attempts to serve information that is palatable to the greatest common denominator of its clientele will tend to curtail abnormal and alternative expressions. Imparting education that provokes curiosity and dissonance will become more difficult.”
But here, such vulnerabilities and the pursuit of accuracy might, conversely, bring things full circle. “Paradoxically, this would be a chance to return to the ‘moment of truth’, i.e., face-to-face contact between the professor and the student,” Dr Reza Etemad-sajadi says. “After focusing on digitalization for the past 20 years, this would be a return to the past where socialization becomes even more important. Old-fashioned paper and oral exams might make a comeback, which could be a good thing.”
But forewarned is forearmed. Whilst institutions grapple with how to tell the human from the machine, and prevent students from cutting corners, Imperial’s Russell Miller suggests that, in some ways, we’ve been here before, and there are lessons we can learn from the past. “It seems to me that the challenge is not dissimilar to that of the late 90s when the internet was beginning to get mainstream traction,” he says.
“Back then, as now, business schools were wrestling with issues of what the technology would mean in terms of plagiarism and other considerations (arguably) linked to a perceived loss of control. The answer to the question is that it will both help and hinder, and it’s the human element in all of this that will determine whether ChatGPT and similar tools will be a force for good in the education world.”
“The good news,” he continues, “is that by and large society has proven very good at adopting disruptive technologies.” Puranam, like Miller, offers a reassuring ray of optimism. “I am not worried about cheating in exams or stunting thinking skills – those are easy problems to fix if we put any thought into it,” he says. “As faculty, we have been trying various means to make our students savvy about AI, machine learning, and programming.
ChatGPT has finally pushed the point home to my students that they need to be on top of developments in AI – this is not a choice. As I tell my students, they should worry less about ChatGPT taking their jobs (at least today), and more about somebody who knows how to use it taking their job!” Such sentiment may ring true for educators as well. Those institutions that fail to harness the power of ChatGPT might well find themselves at a sudden and significant disadvantage. The genie can’t be put back in the bottle.
Originally published by Forbes.