• Publications
Optimizer Benchmarking Needs to Account for Hyperparameter Tuning
TLDR
Evaluating a variety of optimizers on an extensive set of standard datasets and architectures, the authors find that Adam is the most practical choice, particularly in low-budget tuning scenarios.
Plug and Play Autoencoders for Conditional Text Generation
TLDR
Evaluations on style transfer tasks both with and without sequence-to-sequence supervision show that the proposed plug and play Emb2Emb method performs better than or comparably to strong baselines while being up to four times faster.
CBOW Is Not All You Need: Combining CBOW with the Compositional Matrix Space Model
TLDR
A hybrid model that combines the strengths of CBOW and CMOW is proposed, which retains CBOW's strong ability to memorize word content while at the same time substantially improving its ability to encode other linguistic information by 8%.
Learning Entailment-Based Sentence Embeddings from Natural Language Inference
TLDR
This work proposes a simple interaction layer based on predefined entailment and contradiction scores applied directly to the sentence embeddings, which achieves results on natural language inference competitive with MLP-based models and directly represents the information needed for textual entailment.
Using Deep Learning for Title-Based Semantic Subject Indexing to Reach Competitive Performance to Full-Text
TLDR
This paper investigates how models trained on increasing amounts of title training data compare to models trained on a constant number of full-texts, and develops three strong deep learning classifiers whose performance is evaluated on the two datasets.
Multi-Modal Adversarial Autoencoders for Recommendations of Citations and Subject Labels
TLDR
It is demonstrated that adversarial regularization consistently improves the performance of autoencoders for recommendation, and it is crucial to consider the semantics of item co-occurrence for the choice of an appropriate model when facing a new recommendation task.
Using Titles vs. Full-text as Source for Automated Semantic Document Annotation
TLDR
Across three of the four datasets, the performance of the classifications using only titles reaches over 90% of the quality compared to the performance when using the full-text.
On the Tunability of Optimizers in Deep Learning
TLDR
Evaluating a variety of optimizers on an extensive set of standard datasets and architectures, it is found that Adam is the most tunable for the majority of problems, especially with a low budget for hyperparameter tuning.
Using Adversarial Autoencoders for Multi-Modal Automatic Playlist Continuation
TLDR
It is shown how multiple input modalities, such as the playlist titles as well as track titles, artists and albums, can be incorporated in the playlist continuation task.
Comparing Titles vs. Full-text for Multi-Label Classification of Scientific Papers and News Articles
TLDR
It is demonstrated that classifications using only the documents' titles can be very good and very close to the classification results using full-text, and that the best methods on titles even outperform several state-of-the-art methods on full-text.
...