General purpose models: large language models and beyond 2026 collection | Digital Discovery

General purpose models: large language models and beyond 2026 collection

Submissions now open

Deadline: 06 August 2026
Guest Editors:

  • Kevin Maik Jablonka, Friedrich Schiller University Jena
  • N M Anoop Krishnan, Indian Institute of Technology Delhi
  • Francesca Grisoni, Eindhoven University of Technology

Contributions are welcome on both the theory and applications of general-purpose models (GPMs), LLMs and beyond. We define a GPM as a model pre-trained on a broad, heterogeneous corpus spanning multiple data modalities (e.g., text, images, graphs) or representations (e.g., common names, 3D coordinates, molecular images). GPMs can be applied to a wide spectrum of downstream tasks, spanning different objectives (classification, regression, generation, reasoning), input formats, and domains (from NLP to chemistry and vision), with little or no task-specific fine-tuning.

We are particularly interested in work that deepens our understanding of what enables broad capability and generalization, including rigorous benchmarking, careful experimental design, and principled analyses of model and agent behavior. We will consider methods ranging from near-term, practical systems to more conceptual advances, including architectures that move beyond today’s dominant transformer paradigm.

We encourage submissions on topics including, but by no means limited to:

  • Novel benchmarks and evaluation protocols for general-purpose capabilities (including robustness, generalization, and cross-domain transfer)
  • Careful ablation studies that yield actionable insight into what drives performance, scaling, and emergent behaviors
  • Novel training approaches, objectives, curricula, and data strategies (including alignment- and efficiency-oriented methods)
  • Agentic systems and setups, including well-controlled studies of tool use, planning, memory, autonomy, and safety/reliability under deployment constraints
  • Multimodal GPMs, spanning text, images, graphs, 3D/structured representations, and domain-specific modalities
  • Architectures beyond transformers, such as state-space models, diffusion-based text generation, and other emerging modeling paradigms

Digital Discovery

Impact factor: 5.6 (2024)

First decision time (all): 40 days

First decision time (peer): 46 days

Editor-in-chief: Alán Aspuru-Guzik

Open access: Gold

About this journal