Abstract
The latest advances in generative AI are based on a technique called "Chain of Thought" (CoT), in which the machine "thinks" before responding. This was originally a prompting strategy used in queries addressed to classic Large Language Models (LLMs): instead of asking the model directly and solely for the result, the query asks it to produce the reasoning leading to the result before the result itself (Wei 2023). This approach has since been internalized, with LLMs such as OpenAI's o1 and o3 or DeepSeek's R1 natively integrating CoT.
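As a purely illustrative aside, the contrast between the two kinds of queries can be sketched as follows; the example question and wording are hypothetical and not drawn from the abstract, and serve only to show a direct prompt versus a CoT-eliciting prompt.

```python
# Minimal sketch (illustrative only) of the prompting contrast described above.
# The example question is hypothetical, not taken from the source.

# Direct query: the model is asked for the result alone.
direct_prompt = (
    "Q: A train travels 60 km in 45 minutes. What is its average speed in km/h?\n"
    "A:"
)

# Chain-of-Thought query: the model is asked to produce its reasoning
# before stating the result.
cot_prompt = (
    "Q: A train travels 60 km in 45 minutes. What is its average speed in km/h?\n"
    "A: Let's think step by step."
)

if __name__ == "__main__":
    # Either string would be sent to an LLM; only the CoT variant elicits
    # intermediate reasoning steps before the final answer.
    print(direct_prompt)
    print(cot_prompt)
```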
CoT represents a major technological advance, yielding significant performance improvements on complex tasks. Beyond this, in this talk I would like to question the significance of this technology for philosophical debates concerning the nature of the skills of generative AI models, debates that pit those who believe that such models possess cognitive skills in their own right against those who consider that they merely simulate such skills. More specifically, I will argue that these debates need to be settled on the basis of a functional analysis of LLMs, and that the functional analysis of LLMs natively integrating CoT provides new arguments against the simulationist thesis.