23
Feb
Patrick Conolly on "Artificial Implicature: Chatbots in Conversation"
Seminar
16:00 to 18:00
Colégio Almada Negreiros and online

Open Seminar session with Patrick Conolly (University of Barcelona) on “Artificial Implicature: Chatbots in Conversation”. The seminar will be given in English and will take place on Friday, 23 February, at 16:00, in room SD of the Colégio Almada Negreiros, and online via Zoom (through this link). Everyone is welcome.

The Open Seminar is organized by E. Rast (ArgLAB/IFILNOVA). This seminar series aims to provide researchers with a platform to discuss work in progress, as well as problems in the philosophy of language, epistemology, argumentation, metaethics, and related areas. For administrative matters, please contact Erich Rast at erich@snafu.de.

Abstract

The problem I consider in this talk emerges from the tension between the design and architecture of chatbots on the one hand and their conversational aptitude on the other. Given the way LLM chatbots (such as ChatGPT, Bard and Claude) are designed and built, there seems to be no good reason to suppose they possess second-order capacities such as intention, belief or knowledge. Yet we have developed theories of conversation that make great use of the second-order capacities of speakers and their audiences to explain how aspects of conversation succeed. As we can all now bear witness, however, at the point of use chatbots appear capable of performing language tasks at a level close to that of humans. This creates a tension when we consider something like, for example, the classic Gricean theory of implicature. On a broad summary of this type of account, to utter p and implicate q requires the reflexive occurrence of an audience supposing the speaker believes that q, and the speaker believing that their audience can determine that they believe it when they utter p. Taken at face value, then, if a chatbot lacks the capacity for belief, it would not seem capable of either generating or comprehending implicatures, whether in the role of speaker or audience. However, on the surface at least, chatbots do seem capable of dealing with (some) implicatures, and this raises questions about how we should reconcile that appearance with what we think occurs in cases of implicature involving chatbots.