If you haven’t already heard of ChatGPT, it probably won’t be long before you do.
The artificial intelligence chatbot from San Francisco-based OpenAI, which Elon Musk co-founded, has been making waves across the internet with its writing ability and its responses to requests.
Although it has impressed many with its abilities, not least Mr Musk — who described it as “scary good” — it has also raised concerns, particularly in the education sector.
Could it be about to knock Google off its perch as the go-to place for internet answers?
What is OpenAI?
It is a research company that says its mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.
It describes AGI as “highly autonomous systems that outperform humans at most economically valuable work”.
Mr Musk, the owner of Twitter, chief executive of electric car maker Tesla, founder and chief executive of space company SpaceX and co-founder of neurotechnology company Neuralink, left OpenAI in 2018, after disagreements over its direction.
“We have trained a model called ChatGPT, which interacts in a conversational way,” the company said.
“The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests.”
On January 23, technology company Microsoft announced the third phase of its long-term partnership with OpenAI through a new multiyear, multibillion-dollar investment.
Microsoft previously invested in the company in 2019 and 2021.
“We formed our partnership with OpenAI around a shared ambition to responsibly advance cutting-edge AI research and democratise AI as a new technology platform,” said Satya Nadella, chairman and chief executive of Microsoft.
“In this next phase of our partnership, developers and organisations across industries will have access to the best AI infrastructure, models and toolchain with Azure to build and run their applications.”
What can ChatGPT be used for?
People have been trying it out on a range of tasks, from essay and poetry writing to explaining scientific concepts to job applications, with the results being posted on social media.
It can even offer possible solutions to errors in computer code.
“Its answer to the question ‘what to do if someone has a heart attack’ was incredibly clear and relevant,” Claude de Loupy, head of Syllabs, a French company specialised in automatic text generation, told AFP.
“When you start asking very specific questions, ChatGPT’s response can be off the mark”, but its overall performance remains “really impressive”, with a “high linguistic level”, he said.
Some users have asked whether it could be used journalistically.
I asked it to write a generic article on Dubai and it immediately generated about 250 words of text, which ended with: “Overall, Dubai is a fascinating destination that offers something for everyone, from the thrill-seekers to the shopaholics to those seeking a taste of Middle Eastern culture.”
However, it can pick up misinformation and present it as fact, and there are concerns that it lacks nuance and may be used for harmful requests.
“While we have made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behaviour,” OpenAI said.
ChatGPT and other artificial intelligence tools are being used by about a third of professionals to help them with their jobs, a survey by Fishbowl, a social platform owned by employer review site Glassdoor, has revealed.
Marketing and advertising professionals have the highest adoption of AI, with 37 per cent saying they use it, while 35 per cent of workers in technology and 30 per cent in consulting gave a positive response.
Did it pass an MBA exam at Wharton?
Quite the performer, isn’t it? It was put through its paces with the final exam of a typical MBA core course, Operations Management.
Mack Institute for Innovation Management co-director Christian Terwiesch published his findings, saying that ChatGPT did an amazing job at basic operations management and process analysis questions, including those that are based on case studies.
Not only were the answers correct, the explanations were excellent, he said, though he added that it made surprising mistakes in relatively simple calculations, which could be “massive in magnitude”.
“ChatGPT3 is remarkably good at modifying its answers in response to human hints,” he said.
“In other words, in the instances where it initially failed to match the problem with the right solution method, Chat GPT3 was able to correct itself after receiving an appropriate hint from a human expert. Considering this performance, Chat GPT3 would have received a B to B- grade on the exam.”
Has it been banned in New York City schools?
Yes, amid concerns about the safety and accuracy of the content produced.
New York City schools said the technology will be banned across the district, although individual schools or sites will be able to request access to support teaching about cutting-edge technology.
“Due to concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content, access to ChatGPT is restricted on New York City Public Schools’ networks and devices,” Jenna Lyle, a spokeswoman for the city’s Department of Education, said.
“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.”
Teachers around the world are naturally concerned that pupils will copy and paste from ChatGPT and produce auto-generated work.
Pupils can, however, still access it from home or on their phones.
OpenAI officials say they are working on ways to identify text generated by the bot.
“We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system,” the company said.
How does ChatGPT work?
When I asked ChatGPT itself how it functions, it wasn’t able to explain.
“I’m sorry, but I am not familiar with ChatGPT”, it responded. “I am a language model trained by OpenAI, and I don’t have the ability to browse the internet or learn about other AI models. I am only able to provide information based on what I have been trained on and what I can generate from that information.”
In practice, it is a language model trained on a massive sample of text from the internet, which it uses to predict the most relevant response to your query.
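For readers curious about the underlying idea, the sketch below is a vastly simplified, illustrative toy — not ChatGPT’s actual architecture, which is a large neural network. It shows only the core statistical principle: learn from training text which words tend to follow which, then predict a likely continuation. The training sentences and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# Toy illustration: learn next-word frequencies from a tiny corpus,
# then generate text by repeatedly picking the most likely next word.
training_text = (
    "dubai is a fascinating destination . "
    "dubai is a city of contrasts . "
    "dubai is home to the tallest building ."
)

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

def generate(start, length=5):
    """Greedily extend `start` by predicting one word at a time."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("dubai"))  # -> "dubai is a fascinating destination ."
```

Real systems such as ChatGPT replace these simple frequency counts with billions of learned neural-network parameters and consider far more context than one preceding word, but the prediction-from-training-data principle is the same.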
OpenAI co-founder and chief executive Sam Altman said on Twitter that this was an “early demo of what is possible”.
“Soon you will be able to have helpful assistants that talk to you, answer questions and give advice,” he tweeted.
“Later, you can have something that goes off and does tasks for you. Eventually, you can have something that goes off and discovers new knowledge for you.”
What has Elon Musk said?
He described ChatGPT as “scary good” in a tweet and said: “We are not far from dangerously strong AI.”
He then tweeted on December 4 that he had learnt “that OpenAI had access to Twitter database for training. I put that on pause for now. Need to understand more about governance structure and revenue plans going forward. OpenAI was started as open source & non-profit. Neither is still true.”
How much is OpenAI worth?
The company is in talks to sell shares in a tender offering valuing it at about $29 billion, The Wall Street Journal reported last week.
Venture capital firms Thrive Capital and Founders Fund are in discussions to invest in the deal, which would include the sale of at least $300 million of shares from existing investors, such as employees, the WSJ report said.
The transaction would almost double the company’s valuation from a tender offer in 2021 and would make it one of the most valuable US start-ups on paper, despite having little revenue, it added.
The company makes money by charging developers to license its technology.
Chatbots making headlines
Google fired senior software engineer Blake Lemoine in July last year after he claimed that the company’s conversational chatbot had become sentient.
He claimed that Google’s Language Model for Dialogue Applications (LaMDA), a system for building chatbots, had come to life and had been able to perceive or feel things.
Google said Mr Lemoine had breached company policy regarding confidential matters and described his claims as “wholly unfounded”.
– This article was first published on December 5, 2022
Updated: January 25, 2023, 5:58 AM