‘Customizable’ Upgrade Could Result in More Divisive ChatGPT Answers

Feel like ChatGPT is too tame with its answers? The program’s creator, OpenAI, is working on an upgrade that could unlock more personality and controversial takes from the popular chatbot. 

“We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” the San Francisco company said in a blog post.

OpenAI announced the upcoming upgrade in response to concerns that ChatGPT was programmed with a bias on politically and culturally sensitive topics. For example, users have shown that ChatGPT will write a positive poem about current US President Joe Biden, but not about his Republican rival, Donald Trump. 

To resolve the bias concerns, OpenAI is working on an upgrade that would give ChatGPT greater freedom to respond to a user’s query. “This will mean allowing system outputs that other people (ourselves included) may strongly disagree with,” the company said. 

ChatGPT will try to maintain political neutrality and objectivity on topics like Donald Trump.
(Credit: OpenAI)

The news is already sparking concern that a customizable ChatGPT could end up promoting controversial ideologies or taking sides in the US culture wars.

“Our hope was that OpenAI would open up their moderation policies to the public and live by them, centering harmed communities’ voices, and striving to prevent harm. Instead, they appear to be doing THE EXACT OPPOSITE,” tweeted Liz O’Sullivan, a member of the National AI Advisory Committee.

However, the customization upgrade will still contain guardrails to stop ChatGPT from engaging in potentially malicious behavior. OpenAI also wants to prevent the chatbot from becoming a “sycophantic” AI that’ll “mindlessly amplify people’s existing beliefs.”

“There will therefore always be some bounds on system behavior. The challenge is defining what those bounds are,” the company said. “If we try to make all of these determinations on our own, or if we try to develop a single, monolithic AI system, we will be failing in the commitment we make in our Charter to ‘avoid undue concentration of power.’”

As a result, OpenAI plans to take input from the public on how to steer ChatGPT’s development. The process could produce several versions of ChatGPT co-existing alongside one another, as one of the company’s graphics shows.

(Credit: OpenAI)

But for now, efforts to gather public input remain in the early stages. “We are also exploring partnerships with external organizations to conduct third-party audits of our safety and policy efforts,” the company added. 

The blog post from OpenAI also tries to offer some transparency about why ChatGPT can exhibit bias on sensitive political topics and cultural issues. The behavior isn’t deliberate. Unlike a database, which returns uniform answers, ChatGPT operates as a large language model trained on libraries of internet data, including news articles, books, and social media posts. It then tries to autocomplete a human-like response to every query.


“Since we cannot predict all the possible inputs that future users may put into our system, we do not write detailed instructions for every input that ChatGPT will encounter,” the company said. “Instead, we outline a few categories in the guidelines that our (human) reviewers use to review and rate possible model outputs for a range of example inputs.”

The goal of the human reviewers is to fine-tune ChatGPT to generate more accurate responses to narrow questions. However, the fine-tuning process remains “imperfect” because ChatGPT can “generalize” the feedback from a human reviewer and apply it to a wide range of questions from a user, the company said. 

“Towards that end, we are investing in research and engineering to reduce both glaring and subtle biases in how ChatGPT responds to different inputs,” OpenAI added. “In some cases ChatGPT currently refuses outputs that it shouldn’t, and in some cases, it doesn’t refuse when it should. We believe that improvement in both respects is possible.”

To offer more transparency, the company published a three-page snapshot of the guidelines OpenAI has given to human reviewers on how to fine-tune ChatGPT. “Our guidelines are explicit that reviewers should not favor any political group. Biases that nevertheless may emerge from the process described above are bugs, not features,” the company said. 

OpenAI also plans on publishing aggregated demographic data on the human reviewers the company has been using to polish ChatGPT.
