Corpus ID: 19647037

Neural Joke Generation

@inproceedings{Ren2017NeuralJG,
  title={Neural Joke Generation},
  author={He Ren and Quan Yang},
  year={2017}
}
Humor generation is a very hard problem in the area of computational humor. In this paper, we present a joke generation model based on neural networks. The model can generate a short joke relevant to the topic that the user specifies. Inspired by the architecture of neural machine translation and neural image captioning, we use an encoder for representing user-provided topic information and an RNN decoder for joke generation. We trained the model on short jokes by Conan O’Brien with the help of…
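
To make the setup above concrete, here is a minimal PyTorch sketch of a topic-conditioned encoder/decoder in this spirit. The averaging encoder, module names, and sizes are illustrative assumptions rather than the authors' exact architecture.

```python
# Minimal sketch of a topic-conditioned encoder/decoder joke generator.
# All names, sizes, and the averaging encoder are assumptions for
# illustration, not the authors' exact architecture.
import torch
import torch.nn as nn

class TopicJokeGenerator(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Encoder: average the embeddings of the user-provided topic words
        # and project them to the decoder's initial hidden state.
        self.topic_to_hidden = nn.Linear(embed_dim, hidden_dim)
        # Decoder: an LSTM language model over the joke tokens.
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, topic_ids, joke_ids):
        topic_vec = self.embed(topic_ids).mean(dim=1)           # (B, E)
        h0 = torch.tanh(self.topic_to_hidden(topic_vec))[None]  # (1, B, H)
        c0 = torch.zeros_like(h0)
        dec_in = self.embed(joke_ids)                            # (B, T, E)
        dec_out, _ = self.decoder(dec_in, (h0, c0))
        return self.out(dec_out)                                 # (B, T, V)

# Training would minimize cross-entropy between the predicted next token
# and the actual next token of each joke, conditioned on its topic words.
model = TopicJokeGenerator(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (2, 3)),   # two topics, 3 words each
               torch.randint(0, 10_000, (2, 12)))  # two jokes, 12 tokens each
print(logits.shape)  # torch.Size([2, 12, 10000])
```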

JokeR: A Recurrent Joker

TLDR
A variety of word-level models are implemented to tackle parts of the joke-generation problem, namely text generation and joke classification; ideally, merging these steps will allow a model to write joke candidates that are then pruned by a well-trained classifier.
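
As a rough illustration of this generate-then-prune idea, the sketch below wires a candidate generator to a classifier-based filter; `generator` and `classifier` are hypothetical callables standing in for the trained models.

```python
# Hedged sketch of a two-stage joke pipeline: sample candidates from a
# generator, then keep only those a classifier scores as jokes.
# Both callables are hypothetical stand-ins, not the paper's components.
from typing import Callable, List

def generate_and_prune(topic: str,
                       generator: Callable[[str], str],
                       classifier: Callable[[str], float],
                       n_candidates: int = 20,
                       threshold: float = 0.5) -> List[str]:
    candidates = [generator(topic) for _ in range(n_candidates)]
    return [c for c in candidates if classifier(c) >= threshold]
```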

Knowledge Amalgam: Generating Jokes and Quotes Together

TLDR
This paper presents a controlled Long Short-Term Memory (LSTM) architecture that is trained on categorical data such as jokes and quotes together, passing the category as an input along with the sequence of words.
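
One plausible reading of "passing the category as an input along with the sequence of words" is to concatenate a learned category embedding to every word embedding before the LSTM; the sketch below assumes that reading and is not the paper's exact architecture.

```python
# Hedged sketch of a category-controlled LSTM language model: a learned
# category embedding (e.g. joke vs. quote) is concatenated to each word
# embedding at every timestep. Names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CategoryLSTM(nn.Module):
    def __init__(self, vocab_size, n_categories=2, embed_dim=128,
                 cat_dim=16, hidden_dim=256):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.cat_embed = nn.Embedding(n_categories, cat_dim)
        self.lstm = nn.LSTM(embed_dim + cat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids, category_ids):
        words = self.word_embed(word_ids)                       # (B, T, E)
        cats = self.cat_embed(category_ids)                     # (B, C)
        cats = cats.unsqueeze(1).expand(-1, words.size(1), -1)  # (B, T, C)
        h, _ = self.lstm(torch.cat([words, cats], dim=-1))
        return self.out(h)                                      # (B, T, V)

model = CategoryLSTM(vocab_size=5_000)
logits = model(torch.randint(0, 5_000, (4, 10)), torch.tensor([0, 1, 0, 1]))
print(logits.shape)  # torch.Size([4, 10, 5000])
```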

Can Language Models Make Fun? A Case Study in Chinese Comical Crosstalk

Language is the principal tool for human communication, and humor is one of its most attractive parts. Producing natural language like humans do using computers, a.k.a. Natural Language Generation…

Humor Generation and Detection in Code-Mixed Hindi-English

TLDR
Of the approaches experimented with, an attention-based bi-directional LSTM with parts of the text converted to word2vec embeddings gives the best results, generating 74.8% good jokes, and IndicBERT, used for detecting humor in code-mixed Hindi-English, outperforms other humor detection methods with an accuracy of 96.98%.

A Survey on Approaches to Computational Humor Generation

TLDR
A comprehensive overview of existing systems for the computational generation of verbal humor in the form of jokes and short humorous texts is given, and two evaluation criteria are proposed: humorousness and complexity.

A Comprehensive Survey of Natural Language Generation Advances from the Perspective of Digital Deception

TLDR
This work offers a broad overview of the field of NLG with respect to its potential for misuse and outlines a proposed high-level taxonomy of the central concepts that constitute NLG, including the methods used to develop generalised NLG systems, the means by which these systems are evaluated, and the popular NLG tasks and subtasks that exist.

It Isn't Sh!tposting, It's My CAT Posting

TLDR
A novel architecture, CATNet, that can generate humorous captions for a given input image: it uses a pretrained CNN model and applies an attention LSTM to its features to generate a caption.

A Survey of the Usages of Deep Learning for Natural Language Processing

TLDR
An introduction to the field and a quick overview of deep learning architectures and methods are provided, along with a discussion of the current state of the art and recommendations for future research in the field.

References

Showing 1–10 of 14 references

Unsupervised joke generation from big data

TLDR
This work presents a model that uses large amounts of unannotated data to generate "I like my X like I like my Y, Z" jokes, where X, Y, and Z are variables to be filled in; to the best of the authors' knowledge, this is the first fully unsupervised humor generation system.
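
The template behind these jokes is easy to picture with a toy filler; the scoring function below is a hypothetical placeholder for the paper's unsupervised co-occurrence and ambiguity measures, not its actual model.

```python
# Toy illustration of the "I like my X like I like my Y, Z" template.
# The real system ranks (X, Y, Z) triples with unsupervised statistics;
# score() below is a hypothetical stand-in for that machinery.
import itertools

def score(x: str, y: str, z: str) -> float:
    # Placeholder heuristic: prefer shorter attributes Z.
    return 1.0 / (1 + len(z))

def best_joke(xs, ys, zs):
    x, y, z = max(itertools.product(xs, ys, zs), key=lambda t: score(*t))
    return f"I like my {x} like I like my {y}, {z}."

print(best_joke(["coffee"], ["war"], ["cold", "bitter"]))
```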

Show and tell: A neural image caption generator

TLDR
This paper presents a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image.

Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation

TLDR
Qualitatively, the proposed RNN Encoder–Decoder model learns a semantically and syntactically meaningful representation of linguistic phrases.

Sequence to Sequence Learning with Neural Networks

TLDR
This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure, and finds that reversing the order of the words in all source sentences improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence, which made the optimization problem easier.
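
The source-reversal trick mentioned here is pure preprocessing rather than a model change; a minimal sketch under that assumption:

```python
# Feed the encoder the source tokens in reverse order so that early target
# words sit close to the source words they depend on. Sketch only.
def reverse_source(pairs):
    """pairs: iterable of (source_tokens, target_tokens)."""
    return [(list(reversed(src)), tgt) for src, tgt in pairs]

data = [(["je", "suis", "etudiant"], ["i", "am", "a", "student"])]
print(reverse_source(data))
# [(['etudiant', 'suis', 'je'], ['i', 'am', 'a', 'student'])]
```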

Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus

TLDR
The 30M Factoid Question-Answer Corpus is presented, an enormous question–answer pair corpus produced by applying a novel neural network architecture to the knowledge base Freebase to transduce facts into natural language questions.

Generating Text with Recurrent Neural Networks

TLDR
The power of RNNs trained with the new Hessian-Free optimizer is demonstrated by applying them to character-level language modeling tasks, and a new RNN variant is introduced that uses multiplicative connections, which allow the current input character to determine the transition matrix from one hidden state vector to the next.
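
The multiplicative connections can be sketched with the commonly cited factored form, in which an input-gated intermediate vector replaces a single fixed hidden-to-hidden matrix; the exact parameterization in the paper may differ.

```python
# Hedged sketch of a factored multiplicative RNN cell: the current input
# gates the recurrent term elementwise, so it effectively selects the
# hidden-to-hidden transition. Details may differ from the paper.
import torch
import torch.nn as nn

class MultiplicativeRNNCell(nn.Module):
    def __init__(self, input_dim, hidden_dim, factor_dim):
        super().__init__()
        self.w_fx = nn.Linear(input_dim, factor_dim, bias=False)
        self.w_fh = nn.Linear(hidden_dim, factor_dim, bias=False)
        self.w_hf = nn.Linear(factor_dim, hidden_dim, bias=False)
        self.w_hx = nn.Linear(input_dim, hidden_dim)

    def forward(self, x, h):
        f = self.w_fx(x) * self.w_fh(h)          # input-gated factor
        return torch.tanh(self.w_hf(f) + self.w_hx(x))

cell = MultiplicativeRNNCell(input_dim=50, hidden_dim=100, factor_dim=80)
h = torch.zeros(1, 100)
for x in torch.randn(5, 1, 50):                  # a length-5 sequence
    h = cell(x, h)
print(h.shape)  # torch.Size([1, 100])
```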

Neural Machine Translation by Jointly Learning to Align and Translate

TLDR
It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and it is proposed to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
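
The (soft-)search described here is additive attention over the encoder states; a compact sketch with illustrative sizes:

```python
# Hedged sketch of additive attention: for each decoder state, score every
# encoder state, softmax the scores, and take a weighted sum as the context.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, enc_dim, dec_dim, attn_dim=64):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.w_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, dec_state, enc_states):
        # dec_state: (B, D), enc_states: (B, T, E)
        scores = self.v(torch.tanh(self.w_enc(enc_states)
                                   + self.w_dec(dec_state).unsqueeze(1)))  # (B, T, 1)
        weights = torch.softmax(scores, dim=1)                             # (B, T, 1)
        context = (weights * enc_states).sum(dim=1)                        # (B, E)
        return context, weights.squeeze(-1)

attn = AdditiveAttention(enc_dim=512, dec_dim=256)
context, weights = attn(torch.randn(2, 256), torch.randn(2, 7, 512))
print(context.shape, weights.shape)  # torch.Size([2, 512]) torch.Size([2, 7])
```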

GloVe: Global Vectors for Word Representation

TLDR
A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
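
The model's weighted least-squares objective can be written out directly; the toy sketch below evaluates it on a random co-occurrence matrix, using the commonly published weighting function with x_max = 100 and alpha = 0.75.

```python
# Hedged sketch of the GloVe weighted least-squares objective:
#   J = sum_{ij} f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2
# with the weighting f(x) = min((x / x_max)^alpha, 1).
import numpy as np

def glove_loss(X, W, W_tilde, b, b_tilde, x_max=100.0, alpha=0.75):
    i, j = np.nonzero(X)                         # only observed co-occurrences
    x = X[i, j]
    f = np.minimum((x / x_max) ** alpha, 1.0)    # weighting function
    pred = (W[i] * W_tilde[j]).sum(axis=1) + b[i] + b_tilde[j]
    return float((f * (pred - np.log(x)) ** 2).sum())

rng = np.random.default_rng(0)
V, d = 5, 3                                      # toy vocabulary and dimension
X = rng.integers(0, 10, size=(V, V)).astype(float)
print(glove_loss(X, rng.normal(size=(V, d)), rng.normal(size=(V, d)),
                 np.zeros(V), np.zeros(V)))
```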

Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network

TLDR
A new part-of-speech tagger is presented that demonstrates the following ideas: explicit use of both preceding and following tag contexts via a dependency network representation, broad use of lexical features, and effective use of priors in conditional loglinear models.

Incorporating Copying Mechanism in Sequence-to-Sequence Learning

TLDR
This paper incorporates copying into neural network-based Seq2Seq learning and proposes a new model, CopyNet, with an encoder-decoder structure that integrates the regular way of generating words in the decoder with a new copying mechanism, which can choose sub-sequences in the input sequence and put them at proper places in the output sequence.
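
The core of the copying idea can be sketched as a gated mixture of a generation distribution over the vocabulary and a copy distribution over source positions; this is a simplified pointer-style mixture, not CopyNet's exact scoring.

```python
# Hedged sketch of mixing generation with copying: a gate blends a softmax
# over the vocabulary with attention weights scattered onto the source
# tokens' vocabulary ids.
import torch

def mix_generate_and_copy(gen_logits, copy_attn, src_ids, copy_gate):
    """
    gen_logits: (B, V)  decoder scores over the vocabulary
    copy_attn:  (B, T)  attention weights over source positions (sum to 1)
    src_ids:    (B, T)  vocabulary ids of the source tokens
    copy_gate:  (B, 1)  probability of copying at this step
    """
    gen_probs = torch.softmax(gen_logits, dim=-1)
    copy_probs = torch.zeros_like(gen_probs).scatter_add_(1, src_ids, copy_attn)
    return copy_gate * copy_probs + (1 - copy_gate) * gen_probs

V, T = 20, 5
out = mix_generate_and_copy(torch.randn(2, V),
                            torch.softmax(torch.randn(2, T), dim=-1),
                            torch.randint(0, V, (2, T)),
                            torch.sigmoid(torch.randn(2, 1)))
print(out.shape, out.sum(dim=-1))  # each row still sums to 1
```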