In a recent article, tech journalist Geoffrey A. Fowler shares his experience testing GPT-4, the latest OpenAI model behind ChatGPT. One of the most talked-about tech launches of the year, GPT-4 has been highly anticipated for its advanced language capabilities and its potential for a wide range of commercial applications. But while the AI's ability to solve logic puzzles is impressive, Fowler is quick to point out that this is not a sign that AI is suddenly as smart as a human.
During his tests, Fowler fed the AI a series of questions from the logical reasoning portion of the LSAT, the test used for law school admissions. GPT-4 handled them like a competent law student, but it stumbled on a separate task: crafting an opening paragraph in Fowler's own writing style, a reminder of the technology's limitations.
Despite these limitations, GPT-4 has already made its way into commercial products: Duolingo and Khan Academy are using it for language teaching and tutoring, and Microsoft has revealed that a version of GPT-4 has powered its Bing chatbot since February.
Fowler raises important questions about how the AI's new strengths and weaknesses may affect work, education, and even human relationships. GPT-4 may not be "as smart as a lawyer" yet, but its arrival adds urgency to the ongoing debate over how AI can be used responsibly and ethically.