Corpus ID: 246035722

Extending the Vocabulary of Fictional Languages using Neural Networks

@article{Zacharias2022ExtendingTV,
  title={Extending the Vocabulary of Fictional Languages using Neural Networks},
  author={Thomas Zacharias and Ashutosh Taklikar and Raja Giryes},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.07288}
}
Fictional languages have become increasingly popular in recent years, appearing in novels, movies, TV shows, comics, and video games. While some of these fictional languages have a complete vocabulary, most do not. We propose a deep learning solution to this problem. Using style transfer and machine translation tools, we generate new words for a given target fictional language while maintaining the style of its creator, thereby extending the language's vocabulary.
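The abstract's pipeline is neural, but the core word-generation idea — sampling new character sequences that follow the statistics of an existing lexicon — can be illustrated with a toy character-bigram model. This is a deliberately simplified, non-neural stand-in for the paper's method, and the lexicon words below are invented examples, not from the paper:

```python
import random

def train_bigrams(lexicon):
    """Collect character-bigram transitions from a toy lexicon,
    a crude stand-in for the style a neural model would learn."""
    transitions = {}
    for word in lexicon:
        padded = "^" + word + "$"  # ^ marks start of word, $ marks end
        for a, b in zip(padded, padded[1:]):
            transitions.setdefault(a, []).append(b)
    return transitions

def generate_word(transitions, rng, max_len=10):
    """Sample a new word character by character until the end
    marker is drawn or max_len is reached."""
    ch, out = "^", []
    while len(out) < max_len:
        ch = rng.choice(transitions[ch])
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

# Hypothetical Klingon-flavored toy lexicon (illustrative only).
lexicon = ["tlhingan", "qapla", "ghobe"]
model = train_bigrams(lexicon)
print(generate_word(model, random.Random(0)))
```

A real system would replace the bigram table with a trained character-level sequence model, so that generated words capture longer-range stylistic patterns of the target language rather than only adjacent-character statistics.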

References

Showing 1–10 of 31 references
Shakespearizing Modern Language Using Copy-Enriched Sequence to Sequence Models
TLDR
This paper explores automated methods for transforming text from modern English to Shakespearean English, using an end-to-end trainable neural model with a pointer mechanism to enable copy actions and pre-trained word embeddings.
Harnessing Pre-Trained Neural Networks with Rules for Formality Style Transfer
TLDR
This work studies how to harness rules in a state-of-the-art neural network that is typically pre-trained on massive corpora, and achieves a new state of the art on benchmark datasets.
Toward Controlled Generation of Text
TLDR
A new neural generative model is proposed that combines variational autoencoders and holistic attribute discriminators for effective imposition of semantic structures in generic generation and manipulation of text.
Constructed languages in the classroom
Constructed languages (purposefully invented languages like Esperanto and Klingon) have long captured the human imagination. They can also be used as pedagogical tools in the linguistics classroom.
Language Style Transfer from Sentences with Arbitrary Unknown Styles
TLDR
The effectiveness of the model is validated in three tasks: sentiment modification of restaurant reviews, dialog response revision with a romantic style, and sentence rewriting with a Shakespearean style.
Evaluating prose style transfer with the Bible
TLDR
This work identifies a high-quality source of aligned, stylistically distinct text in different versions of the Bible, and provides a standardized split, into training, development and testing data, of the public domain versions in their corpus.
Style Transfer from Non-Parallel Text by Cross-Alignment
TLDR
This paper proposes a method that leverages refined alignment of latent representations to perform style transfer on the basis of non-parallel text, and demonstrates the effectiveness of this cross-alignment method on three tasks: sentiment modification, decipherment of word substitution ciphers, and recovery of word order.
Learning to Generate Reviews and Discovering Sentiment
TLDR
The properties of byte-level recurrent language models are explored and a single unit which performs sentiment analysis is found which achieves state of the art on the binary subset of the Stanford Sentiment Treebank.
Sequence to Sequence Learning with Neural Networks
TLDR
This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions about sequence structure. It finds that reversing the order of the words in all source sentences markedly improved the LSTM's performance, because doing so introduced many short-term dependencies between the source and target sentences, making the optimization problem easier.
Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation
TLDR
Qualitatively, the proposed RNN Encoder–Decoder model learns a semantically and syntactically meaningful representation of linguistic phrases.