Corpus ID: 233296318

Decrypting Cryptic Crosswords: Semantically Complex Wordplay Puzzles as a Target for NLP

@article{Rozner2021DecryptingCC,
  title={Decrypting Cryptic Crosswords: Semantically Complex Wordplay Puzzles as a Target for NLP},
  author={Josh Rozner and Christopher Potts and Kyle Mahowald},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.08620}
}
Cryptic crosswords, the dominant crossword variety in the UK, are a promising target for advancing NLP systems that seek to process semantically complex, highly compositional language. Cryptic clues read like fluent natural language but are adversarially composed of two parts: a definition and a wordplay cipher requiring character-level manipulations. Expert humans use creative intelligence to solve cryptics, flexibly combining linguistic, world, and domain knowledge. In this paper, we make two… 
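As a concrete illustration of the character-level manipulations such clues demand (a toy sketch of one common clue type, the anagram, not the authors' method), consider checking whether a candidate answer satisfies an anagram-style wordplay:

```python
# Toy check for anagram-style cryptic wordplay: the "fodder" letters in the
# clue must rearrange exactly into the answer, and the answer length must
# match the clue's enumeration (the "(6)" at the end of a clue).

def is_anagram(fodder: str, candidate: str) -> bool:
    """True if candidate uses exactly the letters of fodder."""
    normalize = lambda s: sorted(s.replace(" ", "").lower())
    return normalize(fodder) == normalize(candidate)

def satisfies_enumeration(candidate: str, enumeration: int) -> bool:
    """True if the answer has the letter count the clue advertises."""
    return len(candidate.replace(" ", "")) == enumeration

# classic example: "silent" is an anagram of "listen", enumeration (6)
print(is_anagram("listen", "silent"), satisfies_enumeration("silent", 6))
# -> True True
```

Real clues combine such operations (anagrams, reversals, deletions, containers) with a definition, which is what makes them adversarial for subword-based models.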

Tables from this paper

Citations

Inducing Character-level Structure in Subword-based Language Models with Type-level Interchange Intervention Training

While simple character-level tokenization approaches still perform best on purely form-based tasks like string reversal, this method is superior for more complex tasks that blend form, meaning, and context, such as spelling correction in context and word search games.
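A quick sketch of why purely form-based tasks like string reversal favor character-level tokenization (the subword segmentation below is assumed for illustration):

```python
# Reversing a string is trivial over characters but not over subword units:
# reversing the token sequence does not reverse the underlying string.
word = "playing"

# character-level: reverse the character sequence
char_reversed = "".join(reversed(word))          # "gniyalp"

# hypothetical subword segmentation (illustrative): ["play", "ing"]
subwords = ["play", "ing"]
subword_reversed = "".join(reversed(subwords))   # "ingplay" -- not the reversal
print(char_reversed, subword_reversed)
```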

Evaluating Human-Language Model Interaction

A framework, Human-AI Language-based Interaction Evaluation (HALIE), is developed that expands non-interactive evaluation along three dimensions, capturing the interactive process rather than only the system's final output, as well as notions of preference beyond quality.

Crossword Puzzle Resolution via Monte Carlo Tree Search

This paper is the first to model crossword puzzle resolution as a Markov Decision Process and to apply Monte Carlo Tree Search (MCTS) to solve it, achieving 97% accuracy on its dataset.
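To make the MDP framing concrete, here is a minimal, self-contained UCT sketch on a toy fill-in problem (states are partial strings, actions append a letter, terminal reward scores the completed string). This illustrates the general MCTS technique, not the paper's solver:

```python
import math, random

# Toy MDP: build a 3-letter string one action (letter) at a time; the
# terminal reward is the fraction of positions matching a target word.
TARGET = "cat"
ALPHABET = "cat"  # tiny action space keeps the demo fast

def actions(state):
    return list(ALPHABET) if len(state) < len(TARGET) else []

def reward(state):
    return sum(a == b for a, b in zip(state, TARGET)) / len(TARGET)

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def uct_select(node, c=1.4):
    # pick the child maximizing mean value plus an exploration bonus
    return max(node.children.values(),
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(state):
    while actions(state):
        state += random.choice(ALPHABET)
    return reward(state)

def mcts(root_state, iters=2000):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # selection: descend while fully expanded
        while actions(node.state) and len(node.children) == len(actions(node.state)):
            node = uct_select(node)
        # expansion
        untried = [a for a in actions(node.state) if a not in node.children]
        if untried:
            a = random.choice(untried)
            node.children[a] = Node(node.state + a, node)
            node = node.children[a]
        # simulation + backpropagation
        value = rollout(node.state)
        while node:
            node.visits += 1
            node.value += value
            node = node.parent
    # greedy decode: follow the most-visited child at each level
    state, node = root_state, root
    while node.children:
        a, node = max(node.children.items(), key=lambda kv: kv[1].visits)
        state += a
    return state

random.seed(0)
print(mcts(""))  # recovers "cat" with high probability
```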

What do tokens know about their characters and how do they know it?

The mechanisms through which PLMs acquire English-language character information during training are investigated and it is argued that this knowledge is acquired through multiple phenomena, including a systematic relationship between particular characters and particular parts of speech, as well as natural variability in the tokenization of related strings.
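A toy greedy longest-match tokenizer (vocabulary invented for illustration) shows the "natural variability in the tokenization of related strings" at issue: morphological relatives can be segmented quite differently, so a model must reconcile character knowledge across segmentations:

```python
# Greedy longest-match subword tokenizer over an assumed toy vocabulary.
VOCAB = {"sing", "ing", "s", "in", "g", "er", "singer", "walk"}

def tokenize(word):
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):      # try longest match first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i]); i += 1     # unknown-character fallback
    return tokens

print(tokenize("singer"))   # ['singer']  -- one opaque unit
print(tokenize("walking"))  # ['walk', 'ing']
print(tokenize("sings"))    # ['sing', 's']
```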

Automated Crossword Solving

The Berkeley Crossword Solver is presented, a state-of-the-art approach for automatically solving crossword puzzles that improves exact puzzle accuracy from 57% to 82% on crosswords from The New York Times and obtains 99.9% letter accuracy on themeless puzzles.

References

SHOWING 1-10 OF 42 REFERENCES

Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language

This work presents Cryptonite, a large-scale dataset based on cryptic crosswords that is both linguistically complex and naturally sourced, and finds that a fine-tuned language model achieves accuracy on par with that of a rule-based clue solver.

“The Penny Drops”: Investigating Insight Through the Medium of Cryptic Crosswords

It is argued that the crossword paradigm overcomes many of the issues which beset other insight problems: for example, solution rates of cryptic crossword clues are high; new material can easily be commissioned, leading to a limitless pool of test items; and each puzzle contains clues resembling a wide variety of insight problem types, permitting a comparison of heterogeneous solving mechanisms within the same medium.

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.
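The text-to-text framing reduces every task to a string-in/string-out mapping distinguished only by a plain-text task prefix. A minimal sketch (prefix strings follow the T5 paper's convention, recalled from its appendix; treat the exact spellings as illustrative):

```python
# In the unified text-to-text framework, task identity is carried by a
# text prefix; the model interface is identical for every task.
PREFIXES = {
    "translation": "translate English to German: ",
    "summarization": "summarize: ",
    "acceptability": "cola sentence: ",
}

def to_text2text(task: str, text: str) -> str:
    return PREFIXES[task] + text

print(to_text2text("summarization", "The tower is 324 metres tall."))
# -> summarize: The tower is 324 metres tall.
```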

SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing

SentencePiece, a language-independent subword tokenizer and detokenizer designed for neural text processing, makes it possible to train subword models directly from raw sentences while achieving accuracy comparable to conventional pipelines.
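A minimal byte-pair-encoding sketch of the subword idea underlying such tokenizers (illustrative only; SentencePiece itself trains BPE or unigram models directly on raw text, treating spaces as ordinary symbols):

```python
from collections import Counter

def merge_pair(symbols, a, b):
    """Replace every adjacent (a, b) with the merged symbol a+b."""
    out, i = [], 0
    while i < len(symbols):
        if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
            out.append(a + b); i += 2
        else:
            out.append(symbols[i]); i += 1
    return out

def bpe_train(words, num_merges):
    """Learn merges by repeatedly joining the most frequent symbol pair."""
    corpus, merges = [list(w) for w in words], []
    for _ in range(num_merges):
        pairs = Counter()
        for w in corpus:
            pairs.update(zip(w, w[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append((a, b))
        corpus = [merge_pair(w, a, b) for w in corpus]
    return merges

def bpe_encode(word, merges):
    symbols = list(word)
    for a, b in merges:
        symbols = merge_pair(symbols, a, b)
    return symbols

merges = bpe_train(["low", "lower", "lowest", "low"], num_merges=2)
print(bpe_encode("lowest", merges))  # -> ['low', 'e', 's', 't']
```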

Pun Generation with Surprise

An unsupervised approach to pun generation is proposed, based on large amounts of raw (unhumorous) text and a surprisal principle, which posits that in a pun sentence there is a strong association between the pun word and the distant context, but between the alternative word and the immediate context.

A probabilistic approach to solving crossword puzzles

Using the BNC to produce dialectic cryptic crossword clues

This paper describes an attempt to generate seemingly meaningful cryptic crossword clues without trying to analyse meaning, relying solely on word occurrence statistics. It is a continuation of a…

Cryptic crossword clue interpreter

Language Models are Unsupervised Multitask Learners

It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
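"Just one additional output layer" here means a single linear map from the pooled representation to label logits. A dependency-free sketch (dimensions shrunk for illustration; BERT-base's hidden size is actually 768):

```python
import math, random

def linear_head(pooled, weights, bias):
    """Classification head: logits[k] = sum_d weights[k][d] * pooled[d] + bias[k]."""
    return [sum(w * x for w, x in zip(row, pooled)) + b
            for row, b in zip(weights, bias)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

random.seed(0)
hidden, num_labels = 4, 2
W = [[random.gauss(0, 0.1) for _ in range(hidden)] for _ in range(num_labels)]
b = [0.0] * num_labels
pooled = [0.2, -1.0, 0.5, 0.3]   # stand-in for BERT's pooled [CLS] output
probs = softmax(linear_head(pooled, W, b))
print(probs)
```

In fine-tuning, only this head is new; the pretrained encoder weights are updated jointly with it.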