
Image: Vallabh Soni (generated with AI)
Language models (LMs) are known to suffer from a variety of linguistic infelicities, such as hallucinations, inconsistencies, and problems with continuity and coherence. Some of these difficulties suggest that LMs have no idea what they are saying when they “speak”. And yet LMs often also behave in ways that suggest they do know what they are about, while humans sometimes behave in ways resembling LMs in their bad moments.
What should we make of all this? Are LMs really no different from humans, or is there something else going on here? In this presentation, Nuhu suggests that human and LM linguistic infelicities are not of a kind and that there is a deep difference between human speakers and LMs: the former have, while the latter lack, metalinguistic agency, and it is this difference that accounts for LM infelicities.
Nuhu Osman Attah is a Postdoctoral Research Fellow at the Australian National University, working primarily with Colin Klein (and secondarily with Andrew Barron) on the Australian Research Council Discovery Project 'Finding equivalence between natural and artificial intelligences'. He completed his PhD in 2024 in the Department of History and Philosophy of Science at the University of Pittsburgh, with a dissertation on conceptual and cognitive-scientific issues surrounding Large Language Models.
Speakers
- Nuhu Osman Attah (ANU)
Contact
- Alexandre Duval