With the availability of large amounts of language data online, cross-linked lexical resources (such as BabelNet, Predicate Matrix and UBY) and semantically annotated corpora (SemCor, OntoNotes, etc.), more and more NLP applications have started to exploit various semantic models. These models have been built on the basis of LSA, clustering, word embeddings, deep learning with neural networks, etc., as well as on abstract logical forms such as Minimal Recursion Semantics (MRS) and Abstract Meaning Representation (AMR).
Additionally, the Linguistic Linked Open Data (LLOD) cloud has been initiated, which interlinks linguistic data in order to improve NLP tasks. This cloud has been expanding enormously over the last four to five years. It includes corpora, lexicons, thesauri and knowledge bases of various kinds, organized around appropriate ontologies such as LEMON. The semantic models behind the data organization, as well as the representation of the semantic resources themselves, pose a challenge to the NLP community.
The NLP applications that rely extensively on the models discussed above include Machine Translation, Information Extraction, Question Answering, Text Simplification, etc.
The idea behind this Special Issue is to gather contributions on the creation, maintenance and usage of semantic models for specific NLP tasks. We are interested in both supervised and unsupervised approaches, as well as in models that are predefined in structured resources or extracted on the fly from unstructured data.
Topics of Interest Include but are not Limited to:
- Combining syntagmatic (corpus-based) and paradigmatic (lexicon-based) relations into semantic models
- Approaches to modeling semantic similarity and relatedness
- Design and application of distributional semantics models
- Graph-based semantic methods
- Shallow and deep semantic architectures, based on neural networks
- Integrating non-semantic linguistic knowledge into semantic models
- The applicability of the various logical semantic representations for NLP tasks
- Linking of linguistic resources through appropriate semantic models
The journal:
- Publishes original scientific papers in subject areas including artificial intelligence, linguistic modelling, computer communication technologies, information technologies in education, etc.
- Has an SJR rank and is indexed in Elsevier SCOPUS, Thomson Reuters Emerging Sources Citation Index and 20 other databases
- Is an Open Access Journal
Important Dates:
1 June 2017 - Deadline for submitting papers
31 July 2017 - Notification of acceptance
15 September 2017 - Deadline for final version of the papers
15 December 2017 - Printing the Special Issue
Papers of 10 to 15 pages in length, including references, are invited.
They should follow the authors’ instructions at the following link: http://www.cit.iit.bas.bg/cit_inst_authors.html.
Submissions should be sent to the guest editors:
- Kiril Simov, Institute of Information and Communication Technologies at Bulgarian Academy of Sciences (kivs at bultreebank.org)
- Petya Osenova, Institute of Information and Communication Technologies at Bulgarian Academy of Sciences and Sofia University “St. Kl. Ohridski” (petya at bultreebank.org)
Guest Editorial Board
- Galia Angelova, IICT-BAS
- Antonio Branco, University of Lisbon
- Francis Bond, Nanyang Technological University
- Gosse Bouma, University of Groningen
- Aljoscha Burchardt, DFKI
- Nicoletta Calzolari, Institute for Computational Linguistics “A. Zampolli”
- Ann Copestake, University of Cambridge
- Thierry Declerck, DFKI
- Christiane D. Fellbaum, Princeton University
- Sandra Kübler, Indiana University
- Alessandro Lenci, University of Pisa
- John McCrae, National University of Ireland Galway
- Ruslan Mitkov, University of Wolverhampton
- Preslav Nakov, Qatar Computing Research Institute, Qatar Foundation
- Maciej Piasecki, Wroclaw University of Science and Technology
- Piek Vossen, Vrije Universiteit Amsterdam
- Dekai Wu, Hong Kong University of Science & Technology
The Special Issue is supported by ADISS Lab Ltd.
The Special Issue is partially supported by the national project "Deep models of Semantic Knowledge" (DemoSem), DN02/12.