Conference Information
ACL 2026: Annual Meeting of the Association for Computational Linguistics
https://2026.aclweb.org/
Submission Deadline:
2026-01-05
Notification Date:
2026-04-04
Conference Date:
2026-07-02
Location:
San Diego, California, USA
Edition:
64th
CCF: A   CORE: A*   QUALIS: A1

Call for Papers
Overview

ACL 2026 invites the submission of long and short papers featuring substantial, original, and unpublished research in all aspects of Computational Linguistics and Natural Language Processing. ACL 2026 has the goal of a diverse technical program: in addition to traditional research results, papers may contribute negative findings, survey an area, announce the creation of a new resource, argue a position, report novel linguistic insights derived using existing computational techniques, and reproduce, or fail to reproduce, previous results. As in recent years, some of the presentations at the conference will feature papers accepted by the Transactions of the ACL (TACL) and Computational Linguistics (CL) journals.

Papers submitted to ACL 2026 but not selected for the main conference will also automatically be considered for publication in the Findings of the Association for Computational Linguistics.

Paper Submission

Papers may be submitted to the ARR 2025 October and ARR 2026 January cycles. Papers that have already received reviews and a meta-review from earlier ARR cycles may be committed to ACL 2026 via the conference commitment site (not yet available). If you intend to commit to ACL 2026 and need an invitation letter for a visa, please fill out the visa request form as soon as possible. For additional queries, contact the visa chairs at acl-2026-visa-chairs@googlegroups.com.

Submission Topics

ACL 2026 aims to have a broad technical program. Relevant topics for the conference include, but are not limited to, the following areas (in alphabetical order):

    AI/LLM Agents
    Clinical and Biomedical Applications
    Code Models
    Computational Social Science, Cultural Analytics, and NLP for Social Good
    Dialogue and Interactive Systems
    Discourse, Pragmatics, and Reasoning
    Ethics, Bias, and Fairness
    Financial Applications and Time Series
    Generalizability and Transfer
    Hierarchical Structure Prediction, Syntax, and Parsing
    Human-AI Interaction/Cooperation
    Information Extraction and Retrieval
    Interpretability and Analysis of Models for NLP
    Linguistic Theories, Cognitive Modeling and Psycholinguistics
    LLM Efficiency
    Low-resource Methods for NLP
    Machine Translation
    Mathematical, Symbolic, and Logical Reasoning in NLP
    Multilinguality and Language Diversity
    Multimodality and Language Grounding to Vision, Robotics and Beyond
    Natural Language Generation
    Neurosymbolic Approaches to NLP
    NLP Applications
    Phonology, Morphology and Word Segmentation
    Question Answering
    Resources and Evaluation
    Retrieval-Augmented Language Models
    Safety and Alignment in LLMs
    Semantics: Lexical, Sentence-level Semantics, Textual Inference and Other Areas
    Sentiment Analysis, Stylistic Analysis, and Argument Mining
    Speech Processing and Spoken Language Understanding
    Summarization
    Special Theme: Explainability of NLP Models

ACL 2026 Theme Track: Explainability of NLP Models

Following the success of the ACL 2020-2024 theme tracks, we are happy to announce that ACL 2026 will have a new theme, with the goal of reflecting on and stimulating discussion about the current state of development of the field of NLP.

Explainability refers to the methods and techniques aimed at making the internal decision-making processes of complex NLP models, such as large language models, transparent and understandable to humans. It moves beyond treating models as “black boxes” whose predictions are accepted on faith, and instead seeks to uncover the reasoning behind specific outputs. Explainability is foundational to building trust, ensuring fairness, and facilitating responsible deployment. By revealing a model’s potential reliance on spurious correlations or societal biases, explainability allows developers to diagnose errors, improve model robustness, and provide accountability, which is especially critical in high-stakes domains like healthcare, finance, and law where understanding the “why” behind a decision is as crucial as the decision itself.

The theme track invites empirical and theoretical work as well as surveys and position papers reflecting on the Explainability of NLP Models. Possible topics of discussion include (but are not limited to) the following:

    How do explainability methods need to be adapted for different model architectures? Can we develop a unified framework to evaluate explanations across these architectures?
    How can we rigorously and quantitatively evaluate the quality of an explanation? What metrics can reliably measure the faithfulness (accuracy of the model’s reasoning) and plausibility (human-perceived reasonableness) of an explanation?
    Can explanations be used to reliably detect when a model is making a biased prediction based on sensitive attributes? How can input-based explanations help mitigate social biases during model training?
    Can we use explanations to systematically find and fix problems in the training data itself, such as spurious correlations or annotation errors? How can explainability facilitate a human-in-the-loop process for iterative data refinement?
    Can we identify specific directions, mechanisms, patterns, or “knobs” within a model’s internal activations that control high-level behaviors like abstaining from unanswerable questions? Can we design models that are inherently more interpretable?

Note that this track is distinct from the "Interpretability and Analysis of Models for NLP" track. Papers submitted to the special theme should focus on understanding the internal workings of the model.

Theme track submissions may be either long or short papers. We anticipate having a special session for this theme at the conference, as well as a Thematic Paper Award in addition to other categories of awards.
Last updated by Dou Sun on 2025-10-28
Acceptance Rate
Year    Submitted    Accepted    Acceptance Rate
2025    8360         1699        20.3%
2024    2008         405         20.2%
2014    571          112         19.6%
2013    662          174         26.3%
2012    572          147         25.7%