Abstract
Recent music-generating AI systems can produce pieces in a variety of musical genres with unexpected levels of expressivity. An old and common criticism of algorithmically generated music targets its supposed inability to express human emotions. The extent to which the expressivity of pieces generated by deep generative models approaches that of performances recorded by humans remains a difficult empirical question. Assuming that a certain level of expressivity (even if not always deep or nuanced) is achieved by recent generative models, it is worth examining what remains of the inexpressivity objection. A common version of this objection holds that superficial appearances of musical expressivity in the output of generative models do not suffice to establish genuine musical expressivity. My paper explores different ways of articulating and evaluating this argument in the light of evidence concerning deep generative models and contemporary philosophical theories of musical expressivity. I argue that the most influential philosophical theories of musical expressivity support the view that AI-generated music exhibits genuine expressivity. A further lesson I propose to draw from recent examples of artificial musical expressivity is that the importance of expressivity as such to the value of music may have been exaggerated, relative to the importance of implementing expressivity as part of a compositional or interpretive project.