Can LLMs reason logically? If not, how can we teach them?

Image: How can we teach language models logical reasoning? — Training on an artificial corpus based on formal logic (in English)

Recent large language models (LLMs) have been shown to solve a wide range of tasks skillfully, foreshadowing the realization of artificial intelligence (AI) as “a machine that thinks like humans” [1]. To realize such AI, two elements have long been considered important: knowledge and reasoning [2-7]. In the context of natural language processing, “knowledge” refers to a collection of facts about the world, such as “things with mass generate a gravitational field” and “the Earth has mass.” “Reasoning,” on the other hand, is a form of thinking that combines multiple pieces of knowledge according to certain rules to derive new knowledge. For example, by applying the reasoning rule ∀x (F(x) → G(x)), F(a) ⇒ G(a) (F = “has mass”, G = “generates a gravitational field”, a = “Earth”) to the aforementioned knowledge, we can derive the new knowledge that “the Earth generates a gravitational field.”
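To make the rule concrete, here is a minimal sketch (not from the article) of how ∀x (F(x) → G(x)) and F(a) combine to yield G(a). The predicate and entity names are illustrative only.

```python
# Minimal sketch of universal instantiation + modus ponens:
# from the rule ∀x (F(x) → G(x)) and the fact F(a), derive G(a).
# All names below are hypothetical, chosen to mirror the running example.

# Knowledge base: one universal rule and one atomic fact.
universal_rule = ("has_mass", "generates_gravitational_field")  # F(x) -> G(x)
facts = {("has_mass", "Earth")}                                  # F(a)

def apply_rule(rule, facts):
    """Return every G(a) that follows from F(x)->G(x) and a known fact F(a)."""
    premise, conclusion = rule
    return {(conclusion, entity) for (pred, entity) in facts if pred == premise}

new_knowledge = apply_rule(universal_rule, facts)
print(new_knowledge)  # {('generates_gravitational_field', 'Earth')}
```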

