General purpose models: large language models and beyond 2026 collection
Submissions now open
| Deadline: | 06 August 2026 |
|---|---|
| Guest Editors: | Kevin Maik Jablonka, Friedrich Schiller University Jena; N M Anoop Krishnan, Indian Institute of Technology Delhi; Francesca Grisoni, Eindhoven University of Technology |
Contributions are welcome in both the theory and applications of general-purpose models (GPMs) – LLMs and beyond. We define a GPM as a model pre-trained on a broad, heterogeneous corpus spanning multiple data modalities (e.g., text, images, graphs) or representations (e.g., common names, 3D coordinates, molecular images). GPMs can be applied to a wide spectrum of downstream tasks – spanning different objectives (classification, regression, generation, reasoning), input formats, and domains (from NLP to chemistry and vision) – with little or no task-specific fine-tuning. We are particularly interested in work that deepens our understanding of what enables broad capability and generalization, including rigorous benchmarking, careful experimental design, and principled analyses of model and agent behavior. We will consider methods ranging from near-term, practical systems to more conceptual advances, including architectures that move beyond today’s dominant transformer paradigm. We encourage submissions on topics including, but by no means limited to:
Digital Discovery

| Impact factor | 5.6 (2024) |
|---|---|
| First decision time (all manuscripts) | 40 days |
| First decision time (peer-reviewed only) | 46 days |
| Editor-in-chief | Alán Aspuru-Guzik |
| Open access | Gold |