Artificial intelligence is beginning to disappoint. Its performance is dropping, many users complain, and a study from two prestigious American universities, Stanford University and the University of California, Berkeley, lends support to the complaint. Three researchers examined whether the performance of OpenAI's large language models, in particular GPT-3.5 and GPT-4, stays consistent over time.
Lingjiao Chen, Matei Zaharia, and James Zou tested the March and June 2023 versions on tasks such as solving math problems, answering sensitive questions, generating code, and visual reasoning. The result: the supposedly superintelligent AI collapsed on mathematics. In short, GPT-4 struggles above all to identify prime numbers, with its accuracy plunging from 97.6% in March to just 2.4% in June, in effect falling to all-too-human levels of precision.
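To see where figures like 97.6% and 2.4% come from, here is a minimal illustrative sketch (not the study's actual evaluation code): the model is asked "Is N prime?" for a list of numbers, each yes/no answer is compared against an exact primality check, and accuracy is the fraction of matching answers. The numbers and the model answers below are hypothetical stand-ins.

```python
# Illustrative sketch of an accuracy calculation for a prime-identification task.
# This is NOT the researchers' actual test harness; numbers and answers are made up.

def is_prime(n: int) -> bool:
    """Ground-truth primality check by trial division."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def accuracy(numbers: list[int], model_answers: list[bool]) -> float:
    """Fraction of yes/no answers that agree with the ground truth."""
    correct = sum(ans == is_prime(n) for n, ans in zip(numbers, model_answers))
    return correct / len(numbers)

# Hypothetical run: five questions, the model gets four right -> 80% accuracy.
numbers = [7, 8, 13, 21, 29]
model_answers = [True, False, True, True, True]  # stand-in for model output
print(f"accuracy: {accuracy(numbers, model_answers):.1%}")
```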
Among the theories for the regression: an attempt to save computing power and speed up the system's responses, but also a move by the company to charge users for extra capacity. "We didn't make GPT-4 dumber; when you use it more heavily, you start noticing issues you didn't see before," Peter Welinder, a vice president at OpenAI, the company behind ChatGPT, wrote in a tweet. Arvind Narayanan, a professor of computer science at Princeton University, believes the study's results do not conclusively show a decline in GPT-4's performance and may simply be consistent with adjustments OpenAI has made to the models.
© Reproduction Reserved