The 6th Workshop on Vision and Language (VL’17)
At EACL’17 in Valencia, Spain
Call for Papers
Computational vision-language integration is commonly taken to mean the process of associating visual information with corresponding linguistic information. Fragments of natural language, in the form of tags, captions, subtitles, surrounding text or audio, can aid the interpretation of image and video data by adding context or disambiguating visual appearance, and labelled images are essential for training object and activity classifiers. Conversely, visual data can help resolve challenges in language processing such as word sense disambiguation, language understanding, machine translation and speech recognition, while sign languages and gestures require visual interpretation.
Studying language and vision together can also provide new insight into cognition and universal representations of knowledge and meaning; accordingly, the focus of researchers in these areas is increasingly turning towards models for grounding language in action and perception. There is growing interest in models that can learn from, and exploit, multi-modal data, constructing semantic representations from both linguistic and visual or perceptual input.
The 6th Workshop on Vision and Language (VL’17) aims to address all the above, with a particular focus on the integrated modelling of vision and language. We welcome papers describing original research combining language and vision. To encourage the sharing of novel and emerging ideas we also welcome papers describing new datasets, grand challenges, open problems, benchmarks and work in progress as well as survey papers.
Topics of interest include, but are not limited to, the following (in alphabetical order):
- Computational modelling of human vision and language
- Computer graphics generation from text
- Cross-lingual image captioning
- Detection/Segmentation by referring expressions
- Human-computer interaction in virtual worlds
- Human-robot interaction
- Image and video description and summarisation
- Image and video labelling and annotation
- Image and video retrieval
- Language-driven animation
- Machine translation with visual enhancement
- Medical image processing
- Models of distributional semantics involving vision and language
- Multi-modal discourse analysis
- Multi-modal human-computer communication
- Multi-modal machine translation
- Multi-modal temporal and spatial semantics recognition and resolution
- Recognition of narratives in text and video
- Recognition of semantic roles and frames in text, images and video
- Retrieval models across different modalities
- Text-to-image generation
- Visual question answering / visual Turing challenge
- Visually grounded language understanding
- Visual storytelling
Accepted technical submissions will be presented at the workshop as oral presentations (20 minutes plus 5 minutes for questions); poster submissions will be presented as brief 'teaser' presentations, followed by a poster presentation during the workshop poster session. Authors of longer technical papers will have the option of additionally presenting their work in poster form.
Submissions should be up to 8 pages plus references for long papers, and up to 4 pages plus references for poster papers. Submissions should adhere to the EACL 2017 format (style files are available at http://eacl2017.org/index.php/calls/call-for-papers) and should be in PDF format.
Please make your submission via the workshop submission pages; a link will be provided in the second call.
Important Dates
- Nov 10, 2016: First Call for Workshop Papers
- Dec 9, 2016: Second Call for Workshop Papers
- Jan 16, 2017: Workshop Paper Due Date
- Feb 11, 2017: Notification of Acceptance
- Feb 21, 2017: Camera-ready papers due
- April 4, 2017: VL’17 Workshop
Programme Committee
- Raffaella Bernardi, University of Trento, Italy
- Darren Cosker, University of Bath, UK
- Aykut Erdem, Hacettepe University, Turkey
- Jacob Goldberger, Bar Ilan University, Israel
- Jordi Gonzalez, Autonomous University of Barcelona, Spain
- Frank Keller, University of Edinburgh, UK
- Douwe Kiela, University of Cambridge, UK
- Adrian Muscat, University of Malta, Malta
- Arnau Ramisa, IRI UPC Barcelona, Spain
- Carina Silberer, University of Edinburgh, UK
- Caroline Sporleder, Germany
- Josiah Wang, University of Sheffield, UK
- Further members t.b.c.
Organisers
- Anya Belz, University of Brighton, UK
- Katerina Pastra, Cognitive Systems Research Institute (CSRI), Athens, Greece
- Erkut Erdem, Hacettepe University, Turkey
- Krystian Mikolajczyk, Imperial College London, UK
This workshop is organised by European COST Action IC1307: The European Network on Integrating Vision and Language (iV&L Net).
The explosive growth of visual and textual data (both on the World Wide Web and held in private repositories by diverse institutions and companies) has led to urgent requirements in terms of search, processing and management of digital content. Solutions for providing access to or mining such data depend on bridging the semantic gap between vision and language, which in turn calls for expertise from two hitherto largely unconnected fields: Computer Vision (CV) and Natural Language Processing (NLP). The central goal of iV&L Net is to build a European CV/NLP research community, targeting four focus themes: (i) Integrated Modelling of Vision and Language for CV and NLP Tasks; (ii) Applications of Integrated Models; (iii) Automatic Generation of Image & Video Descriptions; and (iv) Semantic Image & Video Search. iV&L Net will organise annual conferences, technical meetings, partner visits, data/task benchmarking, and industry/end-user liaison. Europe has many of the world's leading CV and NLP researchers. Tapping into this expertise, and bringing to bear the collaboration, networking and community building enabled by COST Actions, iV&L Net will have substantial impact, in terms of advances in both theory/methodology and real-world technologies.