Do Language Models Know Language?

Authors

  • Gerald Penn, Professor of Computer Science, University of Toronto, USA

DOI:

https://doi.org/10.47363/JAICC/ICMLAIDS2026/2026(5)3

Keywords:

Language, Models

Abstract

Triumphalist portraits of large language models (LLMs) boast that they have mastered a level of language understanding that natural language processing (NLP) researchers laboured for years to attain using complex architectures composed of diverse component models, each requiring large amounts of training data.

Have they? How do we know? And, if so, how do they manage it? In this talk, we will examine some recent results on LLMs that cast doubt upon these claims, while affirming the utility of LLMs in present-day NLP.

Author Biography

  • Gerald Penn, Professor of Computer Science, University of Toronto, USA


Published

2026-03-21

How to Cite

Do Language Models Know Language?. (2026). Journal of Artificial Intelligence & Cloud Computing, 5(2), 1-1. https://doi.org/10.47363/JAICC/ICMLAIDS2026/2026(5)3
