
Institute of Formal and Applied Linguistics

at Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic



Publication


Year 2018
Type poster/abstract/demo
Status published
Language English
Author(s) Rosa, Rudolf; Libovický, Jindřich; Musil, Tomáš; Mareček, David
Title Looking for linguistic structures in neural networks
Czech title Hledání lingvistických struktur v neuronových sítích
Publisher's city and country Genova, Italy
Month July
How published online
Supported by 2018-2020 GA18-02196S (Linguistic Structure Representation in Neural Networks)
Czech abstract We present our LSD project, in which we try to find out whether neural networks work with representations similar to classical linguistic structures.
English abstract In recent years, deep neural networks have achieved and surpassed state-of-the-art results on many tasks, including many natural language processing problems such as machine translation, sentiment analysis, and image captioning.
Traditionally, solving these tasks relied on many intermediary steps, internally using explicit linguistic annotations and representations such as part-of-speech tags, syntactic structures, and semantic labels. These smaller substeps were thought of as useful or even necessary for solving the larger and more complex tasks.
However, deep neural networks have made it possible to use end-to-end learning, where the network directly learns to produce the desired outputs from the inputs, without any explicit internal intermediary representations.
Nevertheless, the networks are structured in such a way that we can still think of them as using some intermediary representations of the inputs, although these are learned only implicitly. Some of the representations can be directly linked to certain parts of the input -- such as word embeddings corresponding to individual words -- while others are linked to the inputs only loosely, due to recurrent units, attention, etc.
In our project, we are interested in investigating these internal representations, trying to see what information they capture, how they are structured, and what meaning we can assign to them. More specifically, we are currently trying to reliably determine to what extent neural networks seem to capture some basic linguistic notions, such as part of speech, in their various components -- encoder word embeddings, decoder word embeddings, encoder hidden states... We are also interested in how this depends on the task for which the network is trained -- language modelling (word2vec), machine translation, sentiment analysis... Ultimately, we are interested in the somewhat philosophical question of whether neural networks seem to understand language, or at least capture the meanings of sentences.
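The abstract mentions checking which network components capture part of speech. As an illustration only (the publication itself does not specify the method), the following Python sketch shows one common way such a check is done: a simple diagnostic classifier is trained to predict POS tags from fixed embedding vectors and compared against a shuffled-label control. The embedding matrix, tag set, and dimensions are hypothetical placeholders.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical placeholder data: in practice, the rows of `embeddings`
    # would come from a trained component (e.g. encoder word embeddings of an
    # NMT model, or word2vec vectors), and `pos_tags` from an annotated corpus.
    embeddings = rng.normal(size=(5000, 300))   # 5000 words, 300 dimensions
    pos_tags = rng.integers(0, 4, size=5000)    # 0=NOUN, 1=VERB, 2=ADJ, 3=OTHER

    X_train, X_test, y_train, y_test = train_test_split(
        embeddings, pos_tags, test_size=0.2, random_state=0)

    # Diagnostic classifier ("probe"): if POS is linearly encoded in the
    # embeddings, the probe should clearly beat the control below.
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X_train, y_train)
    print("probe accuracy:  ", accuracy_score(y_test, probe.predict(X_test)))

    # Control probe trained on shuffled labels: estimates how much accuracy
    # is reachable without any real POS signal in the embeddings.
    control = LogisticRegression(max_iter=1000)
    control.fit(X_train, rng.permutation(y_train))
    print("control accuracy:", accuracy_score(y_test, control.predict(X_test)))

With real embeddings, a clear gap between the probe and the control accuracies would suggest that the inspected component encodes POS information; with the random placeholders above, the two scores stay near chance.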
Specialization computer science ("informatika")
Confidentiality default – not confidential
Event DeepLearn summer school Open session
Open access yes

Slides (public): presentation-Rosa.pdf (application/pdf)

