Abstract
The linguistic and other productions of certain artificial intelligences are becoming increasingly similar to those of humans. In this context, it often seems appropriate to attribute intentional states to these programs. Such an attribution of intentionality, however, raises particular difficulties for connectionist AIs. While the representations that serve as vehicles for the computational processes of classical AIs are easy to identify, this task is far more difficult for neural networks: information is generally taken to be represented in these programs as patterns of activation, in a distributed rather than local fashion. In our presentation, we will address two main issues: on the one hand, how representational vehicles can be identified in neural networks; and on the other, what kind of informational content can be attributed to these vehicles. This will lead us into the recent debate on the putative intentional states of Large Language Models.