Corpus ID: 19647037

Neural Joke Generation

@inproceedings{Ren2017NeuralJG,
  title={Neural Joke Generation},
  author={He Ren and Quan Yang},
  year={2017}
}
Humor generation is a very hard problem in the area of computational humor. In this paper, we present a joke generation model based on neural networks. The model can generate a short joke relevant to the topic that the user specifies. Inspired by the architectures of neural machine translation and neural image captioning, we use an encoder to represent the user-provided topic information and an RNN decoder for joke generation. We trained the model on short jokes of Conan O'Brien with the help of…
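The abstract describes an encoder over the user-provided topic and an RNN decoder that emits the joke. Below is a minimal PyTorch sketch of that kind of topic-conditioned encoder-decoder; the class name, layer sizes, and the bag-of-words topic encoding are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (PyTorch) of a topic-conditioned encoder-decoder joke generator,
# loosely following the abstract; sizes and names are hypothetical.
import torch
import torch.nn as nn

class TopicJokeGenerator(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)    # could be initialized from GloVe
        self.encoder = nn.Linear(emb_dim, hidden_dim)     # encodes the averaged topic embedding
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, topic_ids, joke_ids):
        # topic_ids: (batch, n_topic_words); joke_ids: (batch, joke_len)
        topic_vec = self.embed(topic_ids).mean(dim=1)          # simple bag-of-words topic encoding
        h0 = torch.tanh(self.encoder(topic_vec)).unsqueeze(0)  # (1, batch, hidden_dim)
        dec_out, _ = self.decoder(self.embed(joke_ids), h0)
        return self.out(dec_out)                               # per-step vocabulary logits
```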
JokeR: A Recurrent Joker
Generating jokes is a challenging and understudied task of Natural Language Processing. A computer that intends to generate jokes and succeeds in doing so could be deemed artificially intelligent. We present a…
Knowledge Amalgam: Generating Jokes and Quotes Together
This paper presents a controlled Long Short-Term Memory (LSTM) architecture that is trained on categorical data, such as jokes and quotes together, by passing the category as an input along with the sequence of words.
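As a rough illustration of the "category as an extra input" idea, the sketch below concatenates a category embedding (e.g. joke vs. quote) to every word embedding before an LSTM language model; the names and dimensions are hypothetical rather than the paper's actual architecture.

```python
# Minimal sketch of a "controlled" LSTM language model conditioned on a category id.
import torch
import torch.nn as nn

class CategoryLM(nn.Module):
    def __init__(self, vocab_size, n_categories, emb_dim=128, cat_dim=16, hidden_dim=256):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, emb_dim)
        self.cat_embed = nn.Embedding(n_categories, cat_dim)
        self.lstm = nn.LSTM(emb_dim + cat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids, category_id):
        # word_ids: (batch, seq_len); category_id: (batch,)
        words = self.word_embed(word_ids)
        cats = self.cat_embed(category_id).unsqueeze(1).expand(-1, word_ids.size(1), -1)
        hidden, _ = self.lstm(torch.cat([words, cats], dim=-1))  # category appended at every step
        return self.out(hidden)                                  # next-word logits per position
```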
A Survey on Approaches to Computational Humor Generation
We provide a comprehensive overview of existing systems for the computational generation of verbal humor in the form of jokes and short humorous texts. Considering linguistic humor theories, we…
A Survey of the Usages of Deep Learning for Natural Language Processing
An introduction to the field and a brief overview of deep learning architectures and methods are provided, followed by a discussion of the current state of the art and recommendations for future research in the field.

References

Showing 1–10 of 14 references
Unsupervised joke generation from big data
This work presents a model that uses large amounts of unannotated data to generate "I like my X like I like my Y, Z" jokes, where X, Y, and Z are variables to be filled in; to the best of the authors' knowledge, this is the first fully unsupervised humor generation system.
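For concreteness, the fixed template that system fills is shown below; the real model scores (X, Y, Z) triples with co-occurrence statistics estimated from large unannotated corpora, so the toy scoring function here is only a hypothetical stand-in.

```python
# Illustrative sketch of the fixed joke template; the scoring function is a placeholder
# standing in for the paper's statistical measures over unannotated text.
def fill_template(x, y, z):
    return f"I like my {x} like I like my {y}, {z}."

def score(x, y, z, cooccurrence):
    # Hypothetical: prefer a Z strongly associated with both X and Y.
    return cooccurrence.get((x, z), 0) * cooccurrence.get((y, z), 0)

cooc = {("coffee", "dark"): 5, ("humor", "dark"): 7, ("coffee", "hot"): 9, ("humor", "hot"): 1}
best = max(["dark", "hot"], key=lambda z: score("coffee", "humor", z, cooc))
print(fill_template("coffee", "humor", best))  # -> "I like my coffee like I like my humor, dark."
```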
Show and tell: A neural image caption generator
This paper presents a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image.
Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation
Qualitatively, the proposed RNN Encoder-Decoder model learns a semantically and syntactically meaningful representation of linguistic phrases.
Sequence to Sequence Learning with Neural Networks
This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure, and finds that reversing the order of the words in all source sentences improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence which made the optimization problem easier.
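The source-reversal trick mentioned above is purely a preprocessing step; a tiny sketch, using made-up token pairs, is:

```python
# Reverse the source side of each training pair so early target words sit close to
# the source words they depend on. Illustrative preprocessing only, not the full LSTM pipeline.
def reverse_source(pairs):
    # pairs: list of (source_tokens, target_tokens)
    return [(list(reversed(src)), tgt) for src, tgt in pairs]

pairs = [(["je", "suis", "etudiant"], ["i", "am", "a", "student"])]
print(reverse_source(pairs))
# [(['etudiant', 'suis', 'je'], ['i', 'am', 'a', 'student'])]
```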
Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus
The 30M Factoid Question-Answer Corpus is presented, an enormous question-answer pair corpus produced by applying a novel neural network architecture to the knowledge base Freebase to transduce facts into natural language questions.
Generating Text with Recurrent Neural Networks
This work demonstrates the power of RNNs trained with the new Hessian-Free optimizer on character-level language modeling tasks, and introduces a new RNN variant that uses multiplicative connections which allow the current input character to determine the transition matrix from one hidden state vector to the next.
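A compact NumPy sketch of the multiplicative idea, in which the current input character selects a factored hidden-to-hidden transition, is given below; the dimensions and the exact factorization are illustrative rather than the paper's precise formulation.

```python
# Toy multiplicative RNN step: the one-hot input character gates factored transition weights.
import numpy as np

rng = np.random.default_rng(0)
H, V, F = 8, 5, 6                      # hidden size, vocab size, number of factors
W_hf, W_fx, W_fh = rng.normal(size=(H, F)), rng.normal(size=(F, V)), rng.normal(size=(F, H))
W_xh = rng.normal(size=(H, V))

def mrnn_step(h, x_onehot):
    f = (W_fx @ x_onehot) * (W_fh @ h)           # input-gated factors
    return np.tanh(W_hf @ f + W_xh @ x_onehot)   # new hidden state

h = np.zeros(H)
for char_id in [0, 3, 1]:                        # toy character sequence
    h = mrnn_step(h, np.eye(V)[char_id])
```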
Neural Machine Translation by Jointly Learning to Align and Translate
It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and it is proposed to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
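The soft-search amounts to computing a weight for every source position and taking a weighted sum of the encoder states; the sketch below uses simple dot-product scoring for brevity, whereas the paper uses an additive scoring network.

```python
# Minimal attention step: score encoder states against the decoder state,
# normalize the scores, and form a per-step context vector.
import torch
import torch.nn.functional as F

def attend(decoder_state, encoder_states):
    # decoder_state: (hidden,); encoder_states: (src_len, hidden)
    scores = encoder_states @ decoder_state   # dot-product scoring for brevity
    weights = F.softmax(scores, dim=0)        # one weight per source position
    return weights @ encoder_states           # context vector, shape (hidden,)

context = attend(torch.randn(16), torch.randn(7, 16))
print(context.shape)  # torch.Size([16])
```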
GloVe: Global Vectors for Word Representation
A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
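The model fits word and context vectors to the logarithm of co-occurrence counts under a weighted least-squares objective; a plain NumPy rendering of that loss, with illustrative array shapes, is:

```python
# GloVe-style weighted least-squares objective; X is a (dense, toy-sized) word-word
# co-occurrence matrix and the weighting function down-weights rare pairs.
import numpy as np

def glove_loss(W, W_ctx, b, b_ctx, X, x_max=100.0, alpha=0.75):
    loss = 0.0
    for i, j in zip(*np.nonzero(X)):
        weight = min((X[i, j] / x_max) ** alpha, 1.0)
        diff = W[i] @ W_ctx[j] + b[i] + b_ctx[j] - np.log(X[i, j])
        loss += weight * diff ** 2
    return loss
```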
Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network
A new part-of-speech tagger is presented that demonstrates the following ideas: explicit use of both preceding and following tag contexts via a dependency network representation, broad use of lexical features, and effective use of priors in conditional log-linear models.
Incorporating Copying Mechanism in Sequence-to-Sequence Learning
This paper incorporates copying into neural network-based sequence-to-sequence learning and proposes CopyNet, an encoder-decoder model that integrates the decoder's regular word-generation mode with a copying mechanism that can select sub-sequences of the input sequence and place them at appropriate positions in the output sequence.
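In spirit, the copying mechanism mixes a normal generation distribution with probability mass pointed at source positions; the sketch below is a pointer-generator-style simplification, not CopyNet's exact scoring.

```python
# Mix a generation distribution with attention mass copied from source tokens.
import numpy as np

def mix_copy(p_vocab, attn, src_ids, p_gen, vocab_size):
    # p_vocab: (vocab_size,) generation distribution; attn: (src_len,) attention weights
    p_copy = np.zeros(vocab_size)
    for pos, token_id in enumerate(src_ids):
        p_copy[token_id] += attn[pos]                 # copy probability mass per source token
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy   # final mixed distribution

p = mix_copy(np.full(10, 0.1), np.array([0.7, 0.3]), [4, 7], p_gen=0.6, vocab_size=10)
print(p.sum())  # 1.0
```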