ChatGPT’s newest model, GPT-4, is advanced enough to pass a bar exam with a score in the top 10% of test takers, compared with the previous version, which scored in the bottom 10%, OpenAI announced this week.
In casual conversation, GPT-4 behaves much like the previous version, GPT-3.5, with only subtle differences. The gap between the two shows up in tasks that come with nuanced instructions, where GPT-4 proved more reliable and better able to use its creativity than GPT-3.5.
The new version was given real versions of a variety of exams, including a bar exam, the GRE, medical knowledge tests, and high school AP exams. In most of these exams, GPT-4 placed in the 80th percentile or higher. The exams in which it placed low were the writing section of the GRE, the USNCO Local Section Exam, Codeforces rating, and AP Calculus BC.
ChatGPT can also succeed on benchmarks for machine learning models
GPT-4 was also tested on a number of benchmarks designed for machine learning models and outperformed GPT-3.5 on every one of them. Because these tests are usually written in English, some were translated into other languages; GPT-4 did better than GPT-3.5 in 24 of 26 languages.
The emergence of ChatGPT, which can mimic natural human language based on prompts, has raised concerns about cheating in the education system. Teachers worry that students will now use ChatGPT to write their papers for them.
Perhaps the fact that GPT-4 performed poorly on the writing section of the GRE will reassure teachers that, despite its more advanced abilities, it is still not good enough to fool them.
Another new feature in GPT-4 is visual input: photos can now be included as part of the prompt that shapes the AI’s output.
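As a rough sketch of what such a multimodal prompt might look like, the snippet below builds a hypothetical request payload modeled on OpenAI's chat-style API, mixing text and an image reference in one user message. The model name, image URL, and exact field layout are illustrative assumptions, not details from the article; no request is actually sent.

```python
# Hypothetical sketch of a multimodal (text + image) chat request payload,
# modeled on a chat-style API. The model name and image URL are placeholder
# assumptions, not details taken from the article.
payload = {
    "model": "gpt-4",  # assumed model identifier
    "messages": [
        {
            "role": "user",
            # A single user message can carry both text and an image reference.
            "content": [
                {"type": "text", "text": "What is shown in this photo?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
}

# An actual call would POST this payload to the provider's chat endpoint;
# here we only inspect its structure.
print(payload["messages"][0]["content"][1]["type"])
```

The point of the sketch is simply that the image is passed alongside the text inside the same prompt, rather than through a separate channel.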