Conference Information
ACL 2026: Annual Meeting of the Association for Computational Linguistics
https://2026.aclweb.org/
Submission deadline: 2026-01-05
Notification date: 2026-04-04
Conference date: 2026-07-02
Location: San Diego, California, USA
Edition: 64th
CCF: A   CORE: A*   QUALIS: A1

Call for Papers
Overview

ACL 2026 invites the submission of long and short papers featuring substantial, original, and unpublished research in all aspects of Computational Linguistics and Natural Language Processing. ACL 2026 aims for a diverse technical program: in addition to traditional research results, papers may contribute negative findings, survey an area, announce the creation of a new resource, argue a position, report novel linguistic insights derived using existing computational techniques, or reproduce, or fail to reproduce, previous results. As in recent years, some of the presentations at the conference will feature papers accepted by the Transactions of the ACL (TACL) and the Computational Linguistics (CL) journals.

Papers submitted to ACL 2026, but not selected for the main conference, will also automatically be considered for publication in the Findings of the Association for Computational Linguistics.

Paper Submission

Papers may be submitted to the ARR 2025 October cycle or the ARR 2026 January cycle. Papers that have already received reviews and a meta-review from earlier ARR cycles may be committed to ACL 2026 via the conference commitment site (not yet available). If you intend to commit to ACL 2026 and need an invitation letter for a visa, please fill out the visa request form as soon as possible. For additional queries, contact the visa chairs at acl-2026-visa-chairs@googlegroups.com.

Submission Topics

ACL 2026 aims to have a broad technical program. Relevant topics for the conference include, but are not limited to, the following areas:

    Safety and Alignment in LLMs
    AI/LLM Agents
    Human-AI Interaction/Cooperation
    Retrieval-Augmented Language Models
    Mathematical, Symbolic, and Logical Reasoning in NLP
    Computational Social Science, Cultural Analytics, and NLP for Social Good
    Code Models
    Interpretability and Analysis of Models for NLP
    LLM Efficiency
    Generalizability and Transfer
    Dialogue and Interactive Systems
    Discourse, Pragmatics, and Reasoning
    Low-resource Methods for NLP
    Ethics, Bias, and Fairness
    Natural Language Generation
    Information Extraction and Retrieval
    Linguistic theories, Cognitive Modeling and Psycholinguistics
    Machine Translation
    Multilinguality and Language Diversity
    Multimodality and Language Grounding to Vision, Robotics and Beyond
    Neurosymbolic approaches to NLP
    Phonology, Morphology and Word Segmentation
    Question Answering
    Resources and Evaluation
    Semantics: Lexical, Sentence-level Semantics, Textual Inference and Other areas
    Sentiment Analysis, Stylistic Analysis, and Argument Mining
    Speech Processing and Spoken Language Understanding
    Summarization
    Hierarchical Structure Prediction, Syntax, and Parsing
    NLP Applications
    Clinical and Biomedical Applications
    Financial Applications and Time Series
    Special Theme: Explainability of NLP Models

ACL 2026 Theme Track: Explainability of NLP

Following the success of the ACL 2020-2024 Theme tracks, we are happy to announce that ACL 2026 will have a new theme with the goal of reflecting and stimulating discussion about the current state of development of the field of NLP.

Explainability refers to the methods and techniques aimed at making the internal decision-making processes of complex NLP models, such as large language models, transparent and understandable to humans. It moves beyond treating models as “black boxes” whose predictions are accepted on faith, and instead seeks to uncover the reasoning behind specific outputs. Explainability is foundational to building trust, ensuring fairness, and facilitating responsible deployment. By revealing a model’s potential reliance on spurious correlations or societal biases, explainability allows developers to diagnose errors, improve model robustness, and provide accountability, which is especially critical in high-stakes domains like healthcare, finance, and law where understanding the “why” behind a decision is as crucial as the decision itself.
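
To make the idea concrete, below is a minimal sketch of one common input-attribution technique, gradient-x-input saliency over token embeddings, which scores how much each input token drove a classifier's prediction. It is illustrative only: the checkpoint name and example sentence are assumptions, and this is one technique among many, not anything prescribed by the call.

    # A minimal gradient-x-input saliency sketch (illustrative only; the
    # checkpoint below is an assumed off-the-shelf sentiment classifier).
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed checkpoint
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name).eval()

    enc = tokenizer("The plot is thin, but the acting is outstanding.",
                    return_tensors="pt")

    # Embed the tokens ourselves so we can take gradients w.r.t. the embeddings.
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
    embeds.requires_grad_(True)

    logits = model(inputs_embeds=embeds,
                   attention_mask=enc["attention_mask"]).logits
    pred = logits.argmax(dim=-1).item()
    logits[0, pred].backward()  # gradient of the winning class's logit

    # Gradient x input, summed over the embedding dimension, gives one
    # attribution score per token: a first answer to "why this output?"
    scores = (embeds.grad * embeds).sum(dim=-1).squeeze(0)
    for tok, s in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), scores):
        print(f"{tok:>12s}  {s.item():+.4f}")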

The theme track invites empirical and theoretical work as well as surveys and position papers reflecting on the Explainability of NLP Models. Possible topics of discussion include (but are not limited to) the following:

    How do explainability methods need to be adapted for different model architectures? Can we develop a unified framework to evaluate explanations across these architectures?
    How can we rigorously and quantitatively evaluate the quality of an explanation? What metrics can reliably measure the faithfulness (accuracy of the model’s reasoning) and plausibility (human-perceived reasonableness) of an explanation? (See the faithfulness-metric sketch after this list.)
    Can explanations be used to reliably detect when a model is making a biased prediction based on sensitive attributes? How can input-based explanations help mitigate social biases during model training?
    Can we use explanations to systematically find and fix problems in the training data itself, such as spurious correlations or annotation errors? How can explainability facilitate a human-in-the-loop process for iterative data refinement?
    Can we identify specific directions, mechanisms, patterns, or “knobs” within a model’s internal activations that control high-level behaviors like abstaining from unanswerable questions? Can we design models that are inherently more interpretable?
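
As a concrete handle on the faithfulness question above, the following sketch implements comprehensiveness, an ERASER-style erasure metric: mask the k tokens an explanation ranks highest and measure the drop in the predicted-class probability. The predict_proba callable and the toy word-counting "classifier" are assumptions for illustration, not part of any prescribed evaluation suite.

    # A sketch of "comprehensiveness", an ERASER-style faithfulness metric:
    # mask the k highest-attributed tokens and measure how far the
    # predicted-class probability falls. The predict_proba callable is an
    # assumed interface, not a specific library's API.
    import math
    from typing import Callable, Sequence

    def comprehensiveness(tokens: Sequence[str],
                          scores: Sequence[float],
                          predict_proba: Callable[[Sequence[str]], float],
                          k: int,
                          mask: str = "[MASK]") -> float:
        """Probability drop after masking the k highest-scoring tokens."""
        top = set(sorted(range(len(tokens)), key=lambda i: scores[i],
                         reverse=True)[:k])
        masked = [mask if i in top else t for i, t in enumerate(tokens)]
        return predict_proba(tokens) - predict_proba(masked)

    # Toy check: a "classifier" that just counts positive words. Masking the
    # token the explanation rates highest should cause a large probability drop.
    positive = {"outstanding", "superb"}
    toy = lambda toks: 1 / (1 + math.exp(-sum(t in positive for t in toks)))
    print(comprehensiveness(["the", "acting", "is", "outstanding"],
                            [0.0, 0.1, 0.0, 0.9], toy, k=1))  # ~0.23 drop

A faithful explanation earns a high comprehensiveness score (masking its top tokens hurts the prediction), while a plausible-but-unfaithful one may score near zero.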

Note that this track is distinct from the “Interpretability and Analysis of Models for NLP” area. Papers submitted to the special theme should focus on understanding the internal workings of the model.

The theme track submissions can be either long or short. We anticipate having a special session for this theme at the conference and a Thematic Paper Award in addition to other categories of awards.
Last updated by Dou Sun on 2025-10-28

Acceptance Rates

Year    Submitted    Accepted    Rate (%)
2025    8360         1699        20.3
2024    2008          405        20.2
2023    3872          910        23.5
2022    3378          701        20.8
2021    3350          710        21.2
2020    2248          571        25.4
2019    1740          447        25.7
2018    1045          256        24.5
2017     751          195        26.0
2016     825          231        28.0
2015     692          173        25.0
2014     571          112        19.6
2013     662          174        26.3
2012     572          147        25.7
2011     634          164        25.9
2010     638          160        25.1
2009     569          121        21.3
2008     470          119        25.3
2007     588          131        22.3
2006     630          147        23.3
2005     423           77        18.2
2004     348           88        25.3
2003     360           71        19.7
2002     256           66        25.8
2001     260           69        26.5
2000     267           70        26.2
1999     320           80        25.0
1998     550          137        24.9
1997     264           83        31.4
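
Each rate is simply accepted over submitted: for 2025, for example, 1699 / 8360 ≈ 20.3%.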

Best Papers

Year    Best Paper
2023    From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models
2023    What the DAAM: Interpreting Stable Diffusion Using Cross Attention
2023    Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest
2022    KinyaBERT: a Morphology-aware Kinyarwanda Language Model
2022    DiBiMT: A Novel Benchmark for Measuring Word Sense Disambiguation Biases in Machine Translation
2022    Requirements and Motivations of Low-Resource Speech Synthesis for Language Revitalization
2022    Learned Incremental Representations for Parsing
2021    Vocabulary Learning via Optimal Transport for Neural Machine Translation
2020    Beyond Accuracy: Behavioral Testing of NLP models with Checklist
2019    Bridging the Gap between Training and Inference for Neural Machine Translation
2019    OpenKiwi: An Open Source Framework for Quality Estimation
2019    Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts
2019    A Simple Theoretical Model of Importance for Summarization
2019    Do you know that Florence is packed with visitors? Evaluating state-of-the-art models of speaker commitment
2019    Zero-shot Word Sense Disambiguation using Sense Definition Embeddings
2019    We need to talk about standard splits
2019    Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems
2018    Finding syntax in human encephalography with beam search
2018    Learning to Ask Good Questions: Ranking Clarification Questions using Neural Expected Value of Perfect Information
2018    Let’s do it “again”: A First Computational Approach to Detecting Adverbial Presupposition Triggers
2018    Know What You Don’t Know: Unanswerable Questions for SQuAD
2018    ‘Lighter’ Can Still Be Dark: Modeling Comparative Color Descriptions
2017    Probabilistic Typology: Deep Generative Models of Vowel Inventories
2016    Finding Non-Arbitrary Form-Meaning Systematicity Using String-Metric Learning for Kernel Regression
2015    Learning Dynamic Feature Selection for Fast Sequential Prediction
2015    Improving Evaluation of Machine Translation Quality Estimation
2014    Fast and Robust Neural Network Joint Models for Statistical Machine Translation
2013    Grounded Language Learning from Video Described with Sentences
2012    Bayesian Symbol-Refined Tree Substitution Grammars for Syntactic Parsing
2012    String Re-writing Kernel
2011    Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections
2010    Beyond NomBank: A Study of Implicit Arguments for Nominal Predicates
2009    Concise Integer Linear Programming Formulations for Dependency Parsing
2009    K-Best A* Parsing
2009    Reinforcement Learning for Mapping Instructions to Actions
2008    A New String-to-Dependency Machine Translation Algorithm with a Target Dependency Language Model
2008    Forest Reranking: Discriminative Parsing with Non-Local Features
2007    Learning synchronous grammars for semantic parsing with lambda calculus
2006    Semantic taxonomy induction from heterogenous evidence
2005    A Hierarchical Phrase-Based Model for Statistical Machine Translation
2004    Finding Predominant Word Senses in Untagged Text
2003    Towards a Model of Face-to-Face Grounding
2003    Accurate Unlexicalized Parsing
2002    Discriminative Training and Maximum Entropy Models for Statistical Machine Translation
2001    Immediate-Head Parsing for Language Models
2001    Fast Decoding and Optimal Decoding for Machine Translation