Trend Analysis · Linguistics & NLP
Pragmatics in Conversational AI: Can Chatbots Understand What We Really Mean?
Pragmatic competence, the ability to understand what speakers mean beyond what they literally say, remains one of the deepest challenges for conversational AI. Recent work evaluates chatbots against Gricean maxims and implicature theory.
By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.
When a dinner guest says "It's getting late," they typically mean "I want to leave," not merely that the clock shows a late hour. This gap between what is said and what is meant (the domain of pragmatics) represents perhaps the most fundamental challenge for conversational AI systems. While large language models have achieved impressive performance on tasks requiring syntactic and semantic competence, pragmatic competence (understanding implicature, indirect speech acts, presupposition, and conversational context) remains a frontier where AI systems regularly fail in ways that range from awkward to harmful. Grice's Cooperative Principle and its maxims (Quantity, Quality, Relation, Manner), along with speech act theory, provide the theoretical framework for evaluating whether AI systems truly participate in conversation or merely simulate participation.
Why It Matters
Conversational AI systems are deployed in contexts where pragmatic failure has real consequences. A healthcare chatbot that takes "I'm fine" literally when a patient is being stoic could miss critical symptoms. A customer service bot that responds to "Can you transfer me to a human?" by answering "Yes, I can" without actually transferring violates the pragmatics of indirect requests. An emotional companion chatbot that fails to detect conversational escalation through increasingly distressed implicatures could exacerbate mental health crises. As conversational AI moves from information retrieval to genuine interaction, pragmatic competence becomes not optional but essential.
For linguistics, AI systems provide a unique test bed for pragmatic theory. If a system that processes only textual patterns can approximate pragmatic behavior, this constrains theories about what pragmatic competence requires. If it cannot, the specific failure modes reveal which aspects of pragmatic processing are irreducible to pattern matching and require genuine social cognition.
The Science
Evaluating Chatbots Against Speech Act Theory
Aziz (2025) provides a systematic evaluation of whether AI chatbots follow the principles of Speech Act Theory and Grice's Cooperative Principle. The study analyzes AI-generated conversations for compliance with each Gricean maxim and for appropriate performance of illocutionary acts (asserting, requesting, promising, apologizing). The findings reveal a consistent pattern: chatbots generally respect the maxims of Quality (they avoid stating things they do not have evidence for) and Manner (they are reasonably clear), but frequently violate Quantity (providing too much or too little information) and Relation (including irrelevant elaborations). For speech acts, chatbots perform direct speech acts competently but struggle with indirect speech acts where the surface form diverges from the intended function, such as "Could you close the window?" functioning as a request rather than a question about ability.
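The abstract does not specify how maxim compliance was scored, but the idea can be made concrete. Below is a minimal sketch of how Quantity and Relation checks *could* be operationalized automatically; the length threshold, the stop-word list, and the lexical-overlap relevance proxy are illustrative assumptions, not Aziz's protocol.

```python
# Toy heuristics for flagging Gricean maxim violations in a chatbot reply.
# Illustrative sketch only; thresholds and proxies are assumptions.

def check_quantity(query: str, reply: str, max_ratio: float = 8.0) -> bool:
    """Quantity heuristic: flag replies far longer than the query warrants."""
    return len(reply.split()) <= max_ratio * max(len(query.split()), 1)

def check_relation(query: str, reply: str, min_overlap: float = 0.1) -> bool:
    """Relation heuristic: crude relevance check via content-word overlap."""
    stop = {"the", "a", "an", "is", "are", "to", "of", "and", "you", "i", "can"}
    q = {w.lower().strip("?.,!") for w in query.split()} - stop
    r = {w.lower().strip("?.,!") for w in reply.split()} - stop
    return len(q & r) / max(len(q), 1) >= min_overlap

query = "Can you transfer me to a human?"
reply = ("Yes, I can. By the way, have you heard about our premium plan? "
         "It offers priority support, advanced analytics, and much more.")
print("Quantity ok:", check_quantity(query, reply))  # True: length is within bounds
print("Relation ok:", check_relation(query, reply))  # False: reply ignores the request
```

Even this crude pair of checks captures the failure pattern Aziz reports: the reply above is clear and truthful (Quality, Manner) yet drifts into an irrelevant elaboration (Relation).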
Conversational Implicature in Human-AI Interaction
Salman and Matrood (2025) examine how conversational implicature, the meaning that is implied but not explicitly stated, functions in human-AI interactions. Their analysis reveals that AI systems face particular difficulty with three types of implicature: scalar implicature (where "some students passed" implies "not all students passed"), particularized conversational implicature (meaning derived from specific context), and ironic implicature (where the implied meaning is opposite to the literal meaning). The study identifies a fundamental asymmetry: human users naturally produce implicatures when talking to AI, expecting the same pragmatic processing they receive from human interlocutors, but AI systems process these utterances primarily at the literal level. This asymmetry is a major source of miscommunication in human-AI dialogue.
Computational Modeling of Scalar Implicature
Li et al. (2024) develop a formal computational model of scalar implicature using Bayesian methods, implementing a small dialogue system that can derive scalar implicatures from first principles. Their approach treats scalar implicature as a probabilistic inference problem: given that a speaker chose a weaker term (e.g., "some") when a stronger term was available (e.g., "all"), the listener infers that the stronger term does not apply. The Bayesian framework quantifies this inference by modeling the speaker's choice as a function of the state of the world and communicative goals. While the system operates in a constrained domain, it demonstrates that principled computational pragmatics is achievable and produces more accurate interpretations than purely literal processing.
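The abstract does not give Li et al.'s exact formulation, but the Bayesian reasoning they describe is standardly captured by the Rational Speech Acts (RSA) recursion: a literal listener, a speaker who chooses utterances to be informative, and a pragmatic listener who inverts the speaker model. The sketch below, with an assumed three-state world and uniform prior, derives the "some, but not all" reading from first principles.

```python
# Minimal RSA-style derivation of scalar implicature. A generic textbook
# formulation, not necessarily Li et al.'s (2024) exact model; the
# three-world domain and uniform prior are assumptions.

WORLDS = ["none", "some_not_all", "all"]   # how many students passed
UTTERANCES = ["some", "all"]

# Literal semantics: "some" is true of some_not_all and all; "all" only of all.
MEANING = {
    "some": {"none": 0, "some_not_all": 1, "all": 1},
    "all":  {"none": 0, "some_not_all": 0, "all": 1},
}

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()} if total else d

def literal_listener(u):
    """L0(w | u) ∝ [[u]](w) · prior(w), with a uniform prior over worlds."""
    return normalize({w: MEANING[u][w] for w in WORLDS})

def pragmatic_speaker(w):
    """S1(u | w) ∝ L0(w | u): prefer utterances that pick out w precisely."""
    return normalize({u: literal_listener(u)[w] for u in UTTERANCES})

def pragmatic_listener(u):
    """L1(w | u) ∝ S1(u | w) · prior(w): invert the speaker model."""
    return normalize({w: pragmatic_speaker(w)[u] for w in WORLDS})

print(pragmatic_listener("some"))
# {'none': 0.0, 'some_not_all': 0.75, 'all': 0.25}: mass shifts to
# some_not_all, because a speaker who knew "all" would have said "all".
```

The key move is exactly the inference described above: "some" is literally compatible with "all," but the pragmatic listener reasons that an informative speaker would have used the stronger term if it applied.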
Sentiment in Implicature Processing
Li and Xu (2025) connect pragmatics to sentiment analysis by developing a computational pragmatics approach to detecting sentiment in conversational implicatures. Their key insight is that the sentiment of an utterance often resides in its implicature rather than its literal content: "That's an interesting proposal" can be genuinely positive or devastatingly dismissive depending on conversational context. The study formalizes the relationship between response sentiment and implicature type, showing that sentiment classification accuracy improves significantly when pragmatic context is modeled explicitly rather than relying solely on lexical sentiment indicators. This work bridges two NLP subfields, sentiment analysis and computational pragmatics, that have developed largely independently.
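As a purely hypothetical illustration of the general idea (not Li and Xu's model, whose details the abstract does not give), a classifier might condition a lexical polarity score on contextual cues that signal a dismissive implicature; the lexicon and cue list below are invented for the example.

```python
# Hypothetical sketch: the same utterance flips polarity depending on
# pragmatic context. Lexicon and dismissive cues are illustrative only.

LEXICON = {"interesting": 0.4, "great": 0.8, "terrible": -0.8}
DISMISSIVE_CUES = ("anyway", "moving on", "next item")  # assumed cue list

def literal_sentiment(utterance: str) -> float:
    words = [w.lower().strip(".,!?'") for w in utterance.split()]
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def pragmatic_sentiment(utterance: str, followup: str) -> float:
    """Flip the literal score when the next turn implicates dismissal."""
    score = literal_sentiment(utterance)
    if any(cue in followup.lower() for cue in DISMISSIVE_CUES):
        return -abs(score)   # the implicature overrides literal positivity
    return score

print(pragmatic_sentiment("That's an interesting proposal.", "Anyway, moving on."))  # -0.4
print(pragmatic_sentiment("That's an interesting proposal.", "Tell me more!"))       #  0.4
```

A lexicon-only system assigns both replies the same positive score; modeling the conversational context, however crudely, is what lets the dismissive reading surface.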
Pragmatic Competence in Current AI Systems
| Pragmatic Phenomenon | AI Capability | Failure Mode | Required Advance |
|---|---|---|---|
| Direct speech acts | Strong | Rare failures | Largely solved for common types |
| Indirect speech acts | Moderate | Literal interpretation of requests/questions | Context-dependent intent recognition |
| Scalar implicature | Low-moderate | Missing "some → not all" inferences | Formal pragmatic reasoning |
| Particularized implicature | Low | Context-blind processing | Rich situation modeling |
| Irony and sarcasm | Low | Literal interpretation | Stance and social context modeling |
| Presupposition | Moderate | Fails to accommodate or challenge | Common ground tracking |
| Politeness strategies | Moderate | Overly direct or formulaic | Cultural pragmatic competence |
What To Watch
The most promising direction is the integration of pragmatic theory into LLM training and evaluation, rather than hoping that pragmatic competence emerges as a byproduct of scale. Benchmark suites that test specific pragmatic phenomena (the BIG-Bench pragmatics tasks, the Pragmatic Understanding benchmarks) are enabling systematic measurement of progress. The development of theory-of-mind capabilities in AI (enabling systems to model what their interlocutor knows, believes, and intends) is a prerequisite for genuine pragmatic competence, as implicature computation fundamentally requires reasoning about the speaker's mental state. Whether current transformer architectures can support this kind of reasoning, or whether new architectures are needed, remains one of AI's most important open questions.
Discover related work using ORAA ResearchBrain.
References (4)
[1] Aziz, A.A. (2025). AI and Pragmatics: Do Chatbots Follow Speech Acts & Maxims? Wasit J. for Humanities, 21(3).
[2] Salman, Y. & Matrood, D. (2025). Conversational Implicature in Human-AI Interactions. FGR, 1(3).
[3] Li, X. & Xu, K. (2025). Sentiment Analysis of Conversational Implicature: A Computational Pragmatics Approach. Applied Artificial Intelligence, 39.
[4] Li, X., Yin, X., & Xu, K. (2024). A Model of Conversational Scalar Implicature in Computational Pragmatics. Proc. PRML 2024, IEEE.