Yahoo! Research at KDD 2021

NEWS
Aug 13, 2021

Yahoo Research is excited to be a silver sponsor of KDD 2021, where we will present three publications and one tutorial.   

 

Publications

MEOW: A Space-Efficient Non-Parametric Bid Shading Algorithm
Wei Zhang, Brendan Kitts, Yanjun Han, Zhengyuan Zhou, Tingyu Mao, Hao He, Shengjun Pan, Aaron Flores, San Gultekin, Tsachy Weissman

Abstract: Bid Shading has become increasingly important in Online Advertising, with a large amount of commercial and research work recently published, often in top applied conferences such as KDD. Most approaches for solving the bid shading problem involve estimating the probability of win distribution and then maximizing surplus. These generally use parametric assumptions for the distribution, and there has been some discussion as to whether Log-Normal, Gamma, Beta, or other distributions are most effective. In this paper, we show evidence that online auctions generally diverge in interesting ways from classic distributions. In particular, real auctions generally exhibit significant structure, due to the way that humans set up campaigns and inventory floor prices. Using these insights, we present a Non-Parametric method for Bid Shading which enables the exploitation of this deep structure. The algorithm has low time and space complexity and is designed to operate within the challenging millisecond Service Level Agreements of Real-Time Bid Servers. We deploy it in one of the largest Demand Side Platforms in the United States and show that it reliably outperforms comparable Parametric benchmarks. We conclude by suggesting some ways that the best aspects of Parametric and Non-Parametric approaches could be combined.

Keywords: bid; bid shading; auction; first price; nonparametric.
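
For readers who want a concrete picture of the surplus-maximization step the abstract refers to, here is a minimal Python sketch (not the MEOW algorithm itself): it picks the bid that maximizes expected surplus, (value minus bid) times an empirical, non-parametric win-rate curve. The bid grid and win rates below are hypothetical illustrations, not values from the paper.

    import numpy as np

    def shade_bid(value, bid_grid, win_rate):
        # Expected surplus at each candidate bid: (value - bid) * P(win | bid),
        # where P(win | bid) is read off an empirical (non-parametric) win-rate curve.
        surplus = (value - bid_grid) * win_rate
        return bid_grid[int(np.argmax(surplus))]

    # Hypothetical win-rate curve estimated from past first-price auctions.
    bid_grid = np.linspace(0.5, 5.0, 10)
    win_rate = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.72, 0.83, 0.90, 0.95, 0.98])
    print(shade_bid(value=5.0, bid_grid=bid_grid, win_rate=win_rate))

A non-parametric curve like this can capture the structure the paper highlights (e.g. spikes at human-chosen floor prices) that a single Log-Normal, Gamma, or Beta fit would smooth away.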
 

Efficient Deep Distribution Network for Bid Shading in First-Price Auctions
Tian Zhou, Hao He, Shengjun Pan, Niklas Karlsson, Bharatbhushan Shetty, Brendan Kitts, Djordje Gligorijevic, Junwei Pan, San Gultekin, Tingyu Mao, Jianlong Zhang and Aaron Flores

Abstract: Since 2019, most ad exchanges and sell-side platforms (SSPs) in the online advertising industry have shifted from second-price to first-price auctions. Due to the fundamental difference between these auctions, demand-side platforms (DSPs) have had to update their bidding strategies to avoid bidding unnecessarily high and hence overpaying. Bid shading was proposed to adjust the bid price intended for second-price auctions, in order to balance cost and winning probability in a first-price auction setup. In this study, we introduce a novel deep distribution network for optimal bidding in both open (non-censored) and closed (censored) online first-price auctions. Offline and online A/B testing results show that our algorithm outperforms previous state-of-the-art algorithms in terms of both surplus and effective cost per action (eCPX) metrics. Furthermore, the algorithm is optimized for run-time performance and has been deployed into the Verizon Media DSP as a production algorithm, serving hundreds of billions of bid requests per day. Online A/B tests show that advertiser ROI improves by +2.4%, +2.4%, and +8.6% for impression-based (CPM), click-based (CPC), and conversion-based (CPA) campaigns, respectively.

Keywords: Online bidding, bid shading, factorization machine, distribution learning.
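
To illustrate the open (non-censored) vs. closed (censored) distinction the abstract draws, the sketch below shows one way a distribution over the minimum winning price could be fit from both kinds of auctions: won auctions reveal that price exactly, while lost auctions only reveal that it exceeded our bid. The LogNormal stand-in and the batch values are assumptions for illustration; in the paper's setting a deep network would predict the distribution per bid request.

    import torch
    from torch.distributions import LogNormal

    def censored_nll(dist, min_win_price, our_bid, won):
        # Non-censored (won) auctions: likelihood of the observed minimum winning price.
        # Censored (lost) auctions: probability that the winning price exceeded our bid.
        log_pdf = dist.log_prob(min_win_price.clamp(min=1e-6))
        log_survival = torch.log(1.0 - dist.cdf(our_bid) + 1e-9)
        return -torch.where(won, log_pdf, log_survival).mean()

    # Hypothetical mini-batch; distribution parameters would come from the network.
    dist = LogNormal(loc=torch.zeros(4), scale=torch.ones(4))
    min_win_price = torch.tensor([1.2, 0.8, 1.0, 1.0])   # placeholders for lost auctions
    our_bid = torch.tensor([1.5, 1.0, 0.6, 0.9])
    won = torch.tensor([True, True, False, False])
    print(censored_nll(dist, min_win_price, our_bid, won))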
 

VisualTextRank: Unsupervised Graph-based Content Extraction for Automating Ad Text to Image Search
Shaunak Mishra, Mikhail Kuznetsov, Gaurav Srivastava, Maxim Sviridenko

Abstract: Numerous online stock image libraries offer high-quality yet copyright-free images for use in marketing campaigns. To assist advertisers in navigating such third-party libraries, we study the problem of automatically fetching relevant ad images given the ad text (via a short textual query for images). Motivated by our observations in logged data on ad image search queries (given ad text), we formulate a keyword extraction problem, where a keyword extracted from the ad text (or its augmented version) serves as the ad image query. In this context, we propose VisualTextRank: an unsupervised method to (i) augment input ad text using semantically similar ads, and (ii) extract the image query from the augmented ad text. VisualTextRank builds on prior work on graph-based context extraction (biased TextRank in particular) by leveraging both the text and image of similar ads for better keyword extraction and using advertiser category-specific biasing with sentence-BERT embeddings. Using data collected from the Verizon Media Native (Yahoo Gemini) ad platform’s stock image search feature for onboarding advertisers, we demonstrate the superiority of VisualTextRank compared to competitive keyword extraction baselines (including an 11% accuracy lift over biased TextRank). For the case when the stock image library is restricted to English queries, we show the effectiveness of VisualTextRank on multilingual ads (translated to English) while leveraging semantically similar English ads. Online tests with a simplified version of VisualTextRank led to a 28.7% increase in the usage of stock image search, and a 41.6% increase in the advertiser onboarding rate in the Verizon Media Native ad platform.

Keywords: Online advertising; image search; multilingual; content extraction.
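
To make the biased-TextRank idea more concrete, here is a minimal, hypothetical sketch of category-biased keyword scoring with sentence embeddings and personalized PageRank. It is not VisualTextRank itself (it omits the augmentation with similar ads and their images); the encoder choice, tokenization, and example inputs are assumptions.

    import numpy as np
    import networkx as nx
    from sentence_transformers import SentenceTransformer   # assumed available

    encoder = SentenceTransformer("all-MiniLM-L6-v2")        # hypothetical encoder choice

    def biased_keyword(ad_text, advertiser_category):
        # Unique candidate tokens become graph nodes; edges are weighted by embedding
        # similarity, and the random walk is biased toward the advertiser category.
        words = list(dict.fromkeys(ad_text.lower().split()))
        emb = encoder.encode(words + [advertiser_category])
        emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-9)
        word_emb, cat_emb = emb[:-1], emb[-1]

        sim = np.clip(word_emb @ word_emb.T, 0.0, None)      # non-negative edge weights
        g = nx.Graph()
        for i in range(len(words)):
            for j in range(i + 1, len(words)):
                g.add_edge(words[i], words[j], weight=float(sim[i, j]))

        bias = {w: max(float(word_emb[i] @ cat_emb), 0.0) + 1e-6
                for i, w in enumerate(words)}
        scores = nx.pagerank(g, personalization=bias, weight="weight")
        return max(scores, key=scores.get)

    print(biased_keyword("comfortable running shoes for summer trails", "sports apparel"))

The highest-scoring token would then serve as the stock image query for the ad text.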

 

Tutorial

Online Advertising Incrementality Testing and Experimentation: Industry Practical Lessons

Authors: Joel Barajas (Yahoo Research, Verizon Media); Narayan Bhamidipati (Yahoo Research, Verizon Media); James G. Shanahan (Church and Duncan Group)