The 6th Workshop on Vision and Language (VL’17) at EACL 2017


At EACL 2017 in Valencia, Spain

First Call for Papers

Computational vision-language integration is commonly taken to mean the process of associating visual and corresponding linguistic pieces of information. Fragments of natural language, in the form of tags, captions, subtitles, surrounding text or audio, can aid the interpretation of image and video data by adding context or disambiguating visual appearance. Labeled images are essential for training object or activity classifiers. Visual data can help resolve challenges in language processing such as word sense disambiguation, language understanding, machine translation and speech recognition. Sign language and gestures are languages that require visual interpretation.
Studying language and vision together can also provide new insight into cognition and universal representations of knowledge and meaning; accordingly, the focus of researchers in these areas is increasingly turning towards models that ground language in action and perception. There is growing interest in models that are capable of learning from, and exploiting, multi-modal data by constructing semantic representations from both linguistic and visual or perceptual input.

The 6th Workshop on Vision and Language (VL’17) aims to address all the above, with a particular focus on the integrated modelling of vision and language. We welcome papers describing original research combining language and vision. To encourage the sharing of novel and emerging ideas we also welcome papers describing new datasets, grand challenges, open problems, benchmarks and work in progress as well as survey papers.

Topics of interest include (in alphabetical order), but are not limited to:
* Computational modelling of human vision and language
* Computer graphics generation from text
* Cross-lingual image captioning
* Detection/Segmentation by referring expressions
* Human-computer interaction in virtual worlds
* Human-robot interaction
* Image and video description and summarisation
* Image and video labelling and annotation
* Image and video retrieval
* Language-driven animation
* Machine translation with visual enhancement
* Medical image processing
* Models of distributional semantics involving vision and language
* Multi-modal discourse analysis
* Multi-modal human-computer communication
* Multi-modal machine translation
* Multi-modal temporal and spatial semantics recognition and resolution
* Recognition of narratives in text and video
* Recognition of semantic roles and frames in text, images and video
* Retrieval models across different modalities
* Text-to-image generation
* Visual question answering / visual Turing challenge
* Visually grounded language understanding
* Visual storytelling

Accepted technical submissions will be presented at the workshop as 20+5min oral presentations; poster submissions will be presented in the form of brief ‘teaser’ presentations, followed by a poster presentation during the workshop poster session. Authors of longer technical papers will have the option of additionally presenting their work in poster form.

Paper Submission

Submissions should be up to 8 pages long plus references for long papers, and up to 4 pages long plus references for poster papers. Submissions should adhere to the EACL 2017 format (style files available) and should be in PDF format.

Please make your submission via the workshop submission pages; a link will be provided in the second call.

Important Dates

Nov 10, 2016: First Call for Workshop Papers
Dec 9, 2016: Second Call for Workshop Papers
Jan 16, 2017: Workshop Paper Due Date
Feb 11, 2017: Notification of Acceptance
Feb 21, 2017: Camera-ready papers due
April 4, 2017: VL’17 Workshop

Programme Committee

Raffaella Bernardi, University of Trento, Italy
Darren Cosker, University of Bath, UK
Aykut Erdem, Hacettepe University, Turkey
Jacob Goldberger, Bar Ilan University, Israel
Jordi Gonzalez, Autonomous University of Barcelona, Spain
Frank Keller, University of Edinburgh, UK
Douwe Kiela, University of Cambridge, UK
Adrian Muscat, University of Malta, Malta
Arnau Ramisa, IRI UPC Barcelona, Spain
Carina Silberer, University of Edinburgh, UK
Caroline Sporleder, Germany
Josiah Wang, University of Sheffield, UK
Further members t.b.c.


Organisers

Anya Belz, University of Brighton, UK
Katerina Pastra, Cognitive Systems Research Institute (CSRI), Athens, Greece
Erkut Erdem, Hacettepe University, Turkey
Krystian Mikolajczyk, Imperial College London, UK


This workshop is organised by the European COST Action IC1307: The European Network on Integrating Vision and Language (iV&L Net).