AI-powered ChatGPT does 'Amazing Job,' passes Wharton Business School exam in study trials
Source: Sputnik News, 24 January 2023
ChatGPT, a new AI language chatbot developed by artificial intelligence (AI) research company OpenAI and released at the end of 2022, can already handle a range of natural language processing tasks despite being in its infancy, and experts are split over its uncanny potential.
ChatGPT, an artificial intelligence tool released by a research lab called OpenAI last year, has performed exceptionally well on the final exam of a typical master of business administration (MBA) core course, Operations Management, according to new research conducted by the Wharton School of the University of Pennsylvania.
The AI chatbot displayed a "remarkable ability to automate some of the skills of highly compensated knowledge workers in general and specifically the knowledge workers in the jobs held by MBA graduates including analysts, managers, and consultants," according to Wharton Professor Christian Terwiesch.
The language processing system, where GPT stands for "Generative Pre-trained Transformer," did "an amazing job at basic operations management and process analysis questions including those that are based on case studies," the research paper documenting how ChatGPT performed on the test stated.
The tool, capable of meaningful interaction with human users, also “performed well in the preparation of legal documents and some believe that the next generation of this technology might even be able to pass the bar exam.”
Terwiesch noted that ChatGPT "would have received a B to B- grade on the exam."
OpenAI, an artificial intelligence research and development company founded by Elon Musk, Sam Altman, and others in 2015, introduced ChatGPT in November 2022.
"The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests," reads the chatbot's description on OpenAI's website.
The language-generation AI tool, capable of churning out sophisticated text in response to prompts, has triggered concerns among educators. The chatbot's use in school settings is deemed controversial and potentially detrimental to learning.
“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” New York City Department of Education spokesperson Jenna Lyle told media.
The department earlier this month banned the use of ChatGPT on public school networks and devices.
In his research on ChatGPT, Terwiesch underscored that schools should revisit exam policies and "curriculum design focusing on collaboration between human and AI." The Wharton professor added:
“Prior to the introduction of calculators and other computing devices, many firms employed hundreds of employees whose task it was to manually perform mathematical operations such as multiplications or matrix inversions. Obviously, such tasks are now automated, and the value of the associated skills has dramatically decreased. In the same way any automation of the skills taught in our MBA programs could potentially reduce the value of an MBA education.”
The controversy around ChatGPT comes as huge strides in the use of AI across diverse aspects of human life have produced a plethora of ethical and societal dilemmas. More recently, news that a tech media site had been quietly publishing articles generated "using automation technology" since November 2022 set off a storm of indignation, with critics remarking that writing jobs appeared to have been taken over by artificial intelligence.
Two years ago, Microsoft sacked dozens of journalists to replace them with artificial intelligence software. At the time, one member of the culled workforce said: “I spend all my time reading about how automation and AI is going to take all our jobs, and here I am – AI has taken my job.”
The sacked journalists warned that using AI for such news-production tasks could land websites in trouble: AI-powered editing tools, unfamiliar with strict editorial guidelines, could let "inappropriate" stories through, they stated.
Evidence of this was not long in coming. Microsoft’s decision to opt for robot editors backfired when the AI system its news and search site employed confused two mixed-race members of a UK pop group, Little Mix.
The AI tools picked a story about Little Mix vocalist Jade Thirlwall’s brush with racism to appear on the site’s homepage, but illustrated it with a picture of Thirlwall’s bandmate Leigh-Anne Pinnock.
Indeed, AI products can churn out racist or otherwise offensive speech, as their algorithms pick up concealed biases in training data.
Tools like ChatGPT are “making massive statistical associations among words and phrases... When they start then generating new language, they rely on those associations to generate the language, which itself can be biased in racist, sexist and other ways," according to Melanie Mitchell, a professor at the Santa Fe Institute studying artificial intelligence.
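Mitchell's point can be illustrated with a toy example. The sketch below is nothing like ChatGPT's actual architecture (which is a large neural network, not a word counter); it is a minimal trigram counter, with a deliberately skewed four-sentence corpus invented for illustration, showing how a purely statistical model reproduces whatever associations its training data contains.

```python
# Minimal sketch of "statistical association" in language data.
# The corpus below is fabricated and deliberately skewed, to show
# how a counting model mirrors bias in its training text.
from collections import Counter, defaultdict

corpus = [
    "the doctor said he would call",
    "the doctor said he was busy",
    "the nurse said she would call",
    "the nurse said she was busy",
]

# For each pair of consecutive words, count which word follows them.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        follows[(words[i], words[i + 1])][words[i + 2]] += 1

def most_likely_next(pair):
    """Return the word most strongly associated with this two-word context."""
    return follows[pair].most_common(1)[0][0]

# The model has no concept of gender; it simply echoes the skew in its data.
print(most_likely_next(("doctor", "said")))  # prints "he"
print(most_likely_next(("nurse", "said")))   # prints "she"
```

A model trained this way has learned nothing about doctors or nurses, only that certain words co-occur; a skewed corpus therefore yields skewed output, which is the mechanism Mitchell describes, at vastly smaller scale.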