
Apple claims ReALM works better than ChatGPT


Apple researchers have released a preprint paper on the company’s ReALM large language model, claiming it can “substantially outperform” OpenAI’s GPT-4 on particular benchmarks. ReALM is said to understand and handle different kinds of context, potentially allowing users to query the language model about elements on the screen or running in the background.

“Reference resolution is a linguistic problem of understanding what a particular expression is referring to… But a chatbot like ChatGPT may sometimes struggle to understand exactly what you are referring to,” the paper explains.

Apple aims to enable chatbots to understand what is being referred to even when it isn’t explicitly stated, a capability it considers crucial for a hands-free screen experience.

“We demonstrate large improvements over an existing system… our larger models substantially outperforming [GPT-4],” the researchers wrote.

ReALM’s goal is to understand and identify three kinds of entities: on-screen entities, conversational entities, and background entities. On-screen entities refer to things displayed on the user’s screen, conversational entities are relevant to the conversation, and background entities are other relevant elements not displayed.
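The broad idea — representing entities from all three sources as text so a language model can resolve a user’s reference against them — can be illustrated with a minimal sketch. The names, structures, and keyword-matching resolver below are invented for illustration and are not Apple’s actual implementation:

```python
from dataclasses import dataclass

# Illustrative labels for the three entity categories described in the paper.
ON_SCREEN, CONVERSATIONAL, BACKGROUND = "on-screen", "conversational", "background"

@dataclass
class Entity:
    kind: str   # one of the three categories above
    label: str  # human-readable description of the entity

def build_context(entities):
    """Serialize entities into a flat textual context a language model could read."""
    return "\n".join(f"[{e.kind}] {e.label}" for e in entities)

def resolve_reference(query, entities):
    """Toy resolver: return the first entity whose label shares a word with the
    query. A stand-in for the model's learned reference resolution."""
    query_words = set(query.lower().split())
    for e in entities:
        if query_words & set(e.label.lower().split()):
            return e
    return None

entities = [
    Entity(ON_SCREEN, "pharmacy phone number shown on screen"),
    Entity(CONVERSATIONAL, "restaurant mentioned earlier in the chat"),
    Entity(BACKGROUND, "song playing in the background"),
]

match = resolve_reference("call the pharmacy", entities)
print(build_context(entities))
print(match.label)  # → "pharmacy phone number shown on screen"
```

In this toy setup, the query “call the pharmacy” resolves to the on-screen entity even though the user never names it explicitly — the kind of ambiguity the paper describes ReALM handling with a learned model rather than word overlap.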

“We also benchmark against GPT-3.5 and GPT-4, with our smallest model achieving performance comparable to that of GPT-4, and our larger models substantially outperforming it,” the researchers stated.

Notably, while ReALM outperforms GPT-4 on this specific benchmark, that does not mean ReALM is the better model overall.

“While we believe it might be possible to further improve results… we leave this to future work,” the researchers added.

Apple’s papers could be seen as teasers of features that may be included in its software offerings like iOS and macOS.

“While it is still early to predict anything, these papers could be thought of as an early teaser of features that the company plans to include in its software offerings like iOS and macOS,” the report concluded.

Tags: AI Language Model, Apple, ChatGPT, ReALM