DeepGen: Diverse Search Ad Generation and Real-Time Customization

@article{Golobokov2022DeepGenDS,
  title={DeepGen: Diverse Search Ad Generation and Real-Time Customization},
  author={Konstantin Golobokov and Junyi Chai and Victor Ye Dong and Mandy Gu and Bingyu Chi and Jie Cao and Yulan Yan and Yi Liu},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.03438}
}
We present DeepGen, a system deployed at web scale for automatically creating sponsored search advertisements (ads) for Bing Ads customers. We leverage state-of-the-art natural language generation (NLG) models to generate fluent ads from advertisers' web pages in an abstractive fashion and solve practical issues such as factuality and inference speed. In addition, our system creates a customized ad in real-time in response to the user's search query, therefore highlighting different aspects of…
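
A minimal sketch of the serving-time step the abstract describes: several ad candidates are generated offline per landing page, and the one most relevant to the live query is served. The token-overlap scorer below is a hypothetical stand-in; the paper does not reduce its relevance model to anything this simple.

```python
# Query-time selection among pre-generated ad candidates (illustrative).
def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def select_ad(query: str, candidate_ads: list[str]) -> str:
    """Pick the pre-generated ad that best matches the search query."""
    q = tokenize(query)
    def overlap(ad: str) -> float:
        a = tokenize(ad)
        return len(q & a) / (len(q | a) or 1)   # Jaccard similarity
    return max(candidate_ads, key=overlap)

candidates = [
    "Free shipping on all running shoes. Shop the new collection today.",
    "Lightweight trail running shoes built for rough terrain.",
    "Running shoes on sale: up to 40% off selected styles.",
]
print(select_ad("cheap trail running shoes", candidates))
```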

Citations

FAST: Improving Controllability for Text Generation with Feedback Aware Self-Training

FAST can improve the controllability and language quality of generated outputs when compared to state-of-the-art controllable text generation approaches.
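
As a hedged illustration of feedback-aware self-training (not FAST's actual components), the sketch below samples several outputs per control code, keeps only those a feedback model judges to satisfy the control, and returns them as new training pairs. Both `generate` and `feedback_score` are toy stand-ins.

```python
import random

def generate(prompt: str, n: int) -> list[str]:
    """Toy sampler stand-in: returns n noisy variants of the prompt."""
    return [f"{prompt} variant {random.randint(0, 99)}" for _ in range(n)]

def feedback_score(control_code: str, text: str) -> float:
    """Toy attribute classifier: checks whether the control is reflected."""
    return 1.0 if control_code.lower() in text.lower() else 0.0

def collect_self_training_pairs(data, threshold=0.9, n_samples=8):
    """One self-training round: sample, filter by feedback, return pairs."""
    pairs = []
    for control_code, prompt in data:
        for sample in generate(f"{control_code} {prompt}", n_samples):
            # Keep a sample only if the feedback model agrees the
            # requested attribute actually shows up in the output.
            if feedback_score(control_code, sample) >= threshold:
                pairs.append((f"{control_code} {prompt}", sample))
    return pairs  # the generator would then be fine-tuned on these pairs

print(len(collect_self_training_pairs([("FORMAL", "write an ad")])))
```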

References

Showing 1-10 of 40 references

Generating Better Search Engine Text Advertisements with Deep Reinforcement Learning

This work jointly trains a model to minimize cross-entropy on an existing corpus of landing page/text ad pairs using standard sequence-to-sequence training techniques, while also optimizing the expected click-through rate, as predicted by an existing oracle model, via self-critical sequence training (SCST).
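
The mixed objective this summary describes, cross-entropy on reference ads plus SCST against a CTR oracle, can be sketched as follows; the weighting `alpha`, the tensor shapes, and the oracle inputs are illustrative assumptions rather than the paper's exact setup.

```python
import torch

def mixed_loss(logits, targets, sample_logprob, r_sample, r_greedy, alpha=0.5):
    """Cross-entropy on references plus an SCST term rewarded by oracle CTR.

    logits: (batch, seq, vocab); targets: (batch, seq)
    sample_logprob: summed log-prob of each sampled ad, shape (batch,)
    r_sample / r_greedy: oracle CTR of sampled vs. greedy ads, shape (batch,)
    """
    ce = torch.nn.functional.cross_entropy(logits.flatten(0, 1), targets.flatten())
    # SCST: reinforce samples that beat the greedy baseline, suppress the rest.
    advantage = (r_sample - r_greedy).detach()
    rl = -(advantage * sample_logprob).mean()
    return alpha * ce + (1.0 - alpha) * rl

B, T, V = 4, 12, 1000
loss = mixed_loss(torch.randn(B, T, V), torch.randint(0, V, (B, T)),
                  sample_logprob=torch.randn(B, requires_grad=True),
                  r_sample=torch.rand(B), r_greedy=torch.rand(B))
loss.backward()
```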

Ad Headline Generation using Self-Critical Masked Language Model

This work proposes a programmatic solution for generating product advertising headlines from retail content by jointly conditioning on multiple products that a seller wishes to advertise, and demonstrates that the method outperforms existing Transformer and LSTM+RL methods on overlap metrics and in quality audits.

Natural language generation for sponsored-search advertisements

A natural language generation system automates the ad-creation steps by preparing a list of terms for insertion into an ad template, and significantly outperforms baseline systems that use simple heuristics.
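
A minimal sketch of the template-filling step described here, with an illustrative template and slot names (the actual system's terms and templates are not reproduced):

```python
# Illustrative template and slots; a real system would have many templates.
TEMPLATE = "{product} - {offer}. {call_to_action}!"

def fill_template(terms: dict[str, str]) -> str:
    """Slot the prepared terms into the fixed ad template."""
    return TEMPLATE.format(**terms)

print(fill_template({
    "product": "Noise-Cancelling Headphones",
    "offer": "Free 2-Day Shipping",
    "call_to_action": "Order Now",
}))
```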

Automated snippet generation for online advertising

A method that automatically produces compact text ads (promotional text snippets) from a product-description webpage (landing page) given as input, yielding a small but comprehensive ad while maintaining relevance, clarity, and attractiveness.

Selection and Generation: Learning towards Multi-Product Advertisement Post Generation

A novel end-to-end model named S-MG Net is proposed to generate advertisement (AD) posts; it achieves impressive performance on both automatic metrics and human evaluations.

Ad click prediction: a view from the trenches

The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.
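
The algorithm this paper presents is FTRL-Proximal, a per-coordinate online learner with L1 regularization used to train logistic-regression CTR models. The sketch below follows the published update rule; the hyperparameters and the feature-hashing scheme are illustrative choices.

```python
import math

class FTRLProximal:
    """Simplified FTRL-Proximal for logistic-regression CTR prediction."""

    def __init__(self, alpha=0.1, beta=1.0, l1=1.0, l2=1.0, dim=2**20):
        self.alpha, self.beta, self.l1, self.l2 = alpha, beta, l1, l2
        self.z = [0.0] * dim   # per-coordinate accumulated adjusted gradients
        self.n = [0.0] * dim   # per-coordinate sum of squared gradients
        self.dim = dim

    def _weight(self, i):
        # Lazy closed-form weight; the L1 threshold keeps the model sparse.
        if abs(self.z[i]) <= self.l1:
            return 0.0
        sign = 1.0 if self.z[i] > 0 else -1.0
        return -(self.z[i] - sign * self.l1) / (
            (self.beta + math.sqrt(self.n[i])) / self.alpha + self.l2)

    def predict(self, feats):
        s = sum(self._weight(hash(f) % self.dim) for f in feats)
        return 1.0 / (1.0 + math.exp(-max(min(s, 35.0), -35.0)))

    def update(self, feats, y):
        p = self.predict(feats)
        g = p - y   # gradient of log-loss w.r.t. the logit, per active feature
        for f in feats:
            i = hash(f) % self.dim
            sigma = (math.sqrt(self.n[i] + g * g) - math.sqrt(self.n[i])) / self.alpha
            self.z[i] += g - sigma * self._weight(i)
            self.n[i] += g * g

model = FTRLProximal()
model.update({"query=shoes", "ad=sneaker-sale"}, y=1)
print(round(model.predict({"query=shoes", "ad=sneaker-sale"}), 3))
```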

CHASE: Commonsense-Enriched Advertising on Search Engine with Explicit Knowledge

While online advertising is one of the major sources of income for search engines, increasing revenue from business advertisements while preserving the user experience is a challenging but important problem.

Controllable and Diverse Text Generation in E-commerce

A fine-grained controllable generative model that uses an algorithm borrowed from automatic control to precisely manipulate the diversity/accuracy trade-off of generated text; it outperforms existing generative models in terms of diversity and relevance.
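
The "algorithm borrowed from automatic control" can be illustrated, in spirit only, with a simple proportional controller that steers a sampling temperature toward a target diversity reading. This is a generic control-loop sketch, not the paper's specific method.

```python
def control_temperature(temp, measured_diversity, target_diversity, kp=0.5):
    """One proportional-control step: the diversity error drives temperature."""
    error = target_diversity - measured_diversity
    return max(0.1, temp + kp * error)   # clamp so sampling stays sane

temp = 1.0
for measured in [0.30, 0.42, 0.55, 0.61]:   # e.g. distinct-n over samples
    temp = control_temperature(temp, measured, target_diversity=0.6)
    print(f"measured={measured:.2f} -> temperature={temp:.2f}")
```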

CTRL: A Conditional Transformer Language Model for Controllable Generation

CTRL, a 1.63-billion-parameter conditional transformer language model, is released; it is trained to condition on control codes that govern style, content, and task-specific behavior, providing more explicit control over text generation.
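
Control codes are plain tokens prepended to the prompt. A minimal sketch using the released Hugging Face checkpoint; the model id and generation settings are illustrative (the CTRL paper recommends a repetition penalty of about 1.2).

```python
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# The leading token "Reviews" is the control code steering style/content.
inputs = tokenizer("Reviews These running shoes", return_tensors="pt")
output = model.generate(**inputs, max_length=40, repetition_penalty=1.2)
print(tokenizer.decode(output[0]))
```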

FastSeq: Make Sequence Generation Faster

The proposed optimization techniques include an attention cache optimization, an efficient algorithm for detecting repeated n-grams, and an asynchronous generation pipeline with parallel I/O; they are general enough to be applicable to Transformer-based models.
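
Of the listed techniques, repeated n-gram detection is the easiest to show in isolation. The sketch below implements the underlying check (banning tokens that would complete an n-gram already emitted); FastSeq's contribution is performing this efficiently at scale, not the check itself.

```python
def banned_next_tokens(generated: list[int], n: int = 3) -> set[int]:
    """Tokens that would repeat an n-gram already present in `generated`."""
    if len(generated) < n - 1:
        return set()
    prefix = tuple(generated[-(n - 1):])   # last n-1 tokens of the sequence
    banned = set()
    for i in range(len(generated) - n + 1):
        if tuple(generated[i:i + n - 1]) == prefix:
            banned.add(generated[i + n - 1])
    return banned

seq = [5, 7, 9, 5, 7]           # trigram "5 7 9" already occurred
print(banned_next_tokens(seq))  # {9}: emitting 9 would repeat that trigram
```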