Understanding

For an intelligent agent to complete a task we assign to it, it needs an understanding of the world. That is, it needs a model of how the world works (a World Model).

Do large language models "understand" language? Do they understand the meaning that language carries? Do they understand the various concepts found in domains such as education and IT operations? If we want to trace an anomaly, do they understand what an "anomaly" is? What "tracing" is? Do they understand the logic of tracing?

Course Materials

Papers

Papers by Yejin Choi

Generation vs understanding (Yejin Choi)

Papers recommended by a University of Washington course

1: Do neural language models understand language?

What does it mean to understand language? Can models learn language without embodiment? Can we (even) separate form and meaning? Does it matter whether language models do or don't?

  1. Language models as agent models (Andreas, 2022)
  2. To Dissect an Octopus: Making Sense of the Form/Meaning Debate (Blog, Julian Michael, 2020)
  3. Is it possible for language models to achieve language understanding? (Blog, Christopher Potts, 2020)
  4. Implicit Representations of Meaning in Neural Language Models (Li, 2021)
  5. How could we know if Large Language Models understand language? (Blog, Sean Trott, 2022)
  6. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender et al., 2021)
  7. Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data (Emily M. Bender, Alexander Koller, 2020)
  8. Mapping Language Models to Grounded Conceptual Spaces (Patel, 2022)
  9. Dissociating language and thought in large language models: a cognitive perspective (Mahowald et al., 2023)

LLM Agent 论文

Papers on LLMs' ability to understand humans, as cited in Fudan University's survey of LLM agents

High-quality generation

Deep understanding

