Dice.com Bay Area Search - Beyond Learning to Rank Talk

Beyond Learning to Rank: Additional Machine Learning Approaches to Improve Search Relevancy

Transcript of Dice.com Bay Area Search - Beyond Learning to Rank Talk



Who Am I?

• Chief Data Scientist at Dice.com, under Yuri Bykov

• Key projects using search:

• Recommender Systems – more jobs like this, more seekers like this (uses a custom Solr index)

• Custom Dice Solr MLT handler (real-time recommendations)

• ‘Did you mean?’ functionality

• Title, skills and company type-ahead

• Relevancy improvements in Dice jobs search


Other Projects

• Supply Demand Analysis (see my blog post on this, with an accompanying data visualization)

• “Dice Career” (new Dice mobile app releasing soon on the iOS App Store, coming soon to Android)

• Includes a salary predictor

• Dice Skills pages – http://www.dice.com/skills

PhD

• PhD candidate at DePaul University, studying natural language processing and machine learning

Relevancy Tuning

Strengths and Limitations of Different Approaches

Relevancy Tuning – Common Approaches

• Gather a golden dataset of relevancy judgements

• Use a team of analysts to evaluate the quality of search results for a set of common user queries

• Mine the search logs, capturing which documents were clicked for each query, and how long the user spent on the resulting webpage

• Define metrics that measure what you want to optimize your search engine for (MAP, NDCG, etc.)

• Tune the parameters of the search engine to improve the performance on this dataset using either:

1. Manual Tuning

2. Grid Search – brute force search over a large list of parameters

3. Machine Learned Ranking (MLR) – train a machine learning model to re-rank the top n search results

• This improves precision – by reducing the number of bad matches returned by the system

Manual Tuning

Slow, manual task

Not very objective or scientific, unless validated by computing metrics over a dataset of relevancy judgements

Grid Search

Brute force search over all the search parameters to find the optimal configuration – naïve, does not intelligently explore the search space of parameters

Very slow, needs to run hundreds or thousands of searches to test each set of parameters

Machine Learned Ranking

• Uses a machine learning model, trained on a golden dataset, to re-rank the search results

• Machine learning algorithms are too slow to rank all documents in moderate to large indexes

• Typical approach is to take the top N documents returned by the search engine and re-rank them

• What if those top N documents weren’t the most relevant?

• Two possibilities:

1. Top N documents don’t contain all relevant documents (recall problem)

2. Top N documents contain some irrelevant documents (precision problem)

• MLR systems can be prone to feedback loops where they influence their own training data

Talk Overview

This talk will cover the following 3 topics:

1. Conceptual Search

• Solves the recall problem

2. How to Automatically Optimize the Search Engine Configuration

• Improves the quality of the top search results, prior to re-ranking

3. The problem of feedback loops, and how to prevent them

Part 1: Conceptual Search


Q. What is the Most Common Relevancy Tuning Mistake?


A. Ignoring the importance of RECALL


Relevancy Tuning

• Key performance metrics to measure:

• Precision

• Recall

• F1 Measure – 2*(P*R)/(P+R) (see the worked example after this list)

• Precision is easier to improve – mistakes are visible in the top search results and can be corrected

• Recall is harder – you need to know which relevant documents don’t come back

• Hard to accurately measure

• Need to know all the relevant documents present in the index
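As a quick worked example of these metrics (illustrative numbers only, written as a small Python 3 snippet):

# Illustrative only: suppose a query has 20 relevant documents in the index,
# the engine returns 10 results, and 8 of those are relevant
relevant_in_index = 20
returned = 10
relevant_returned = 8

precision = relevant_returned / returned               # 8/10 = 0.8
recall = relevant_returned / relevant_in_index         # 8/20 = 0.4
f1 = 2 * (precision * recall) / (precision + recall)   # ~0.53

print(precision, recall, round(f1, 2))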


What is Conceptual Search?

• A.K.A. Semantic Search

• Two key challenges with keyword matching:

• Polysemy: Words have more than one meaning

• e.g. engineer – mechanical? programmer? automation engineer?

• Synonymy: Many different words have the same meaning

• e.g. QA, quality assurance, tester; VB, Visual Basic, VB.Net

• Other related challenges -

• Typos, Spelling Errors, Idioms

• Conceptual search attempts to solve these problems by learning concepts


Why Conceptual Search?

• We will attempt to improve recall without diminishing precision

• Can match relevant documents containing none of the query terms

• I will use the popular Lucene-based Solr search engine to illustrate how you can implement conceptual search, but the techniques can be applied to any search engine



Concepts

• Conceptual search allows us to retrieve documents by how similar the concepts in the query are to the concepts in a document

• Concepts represent important high-level ideas in a given domain (e.g. Java technologies, big data jobs, helpdesk support, etc.)

• Concepts are automatically learned from documents using machine learning

• Words can belong to multiple concepts, with varying strengths of association with each concept


Traditional Techniques

• Many algorithms have been used for concept learning, including LSA (Latent Semantic Analysis), LDA (Latent Dirichlet Allocation) and Word2Vec

• All involve mapping a document to a low-dimensional dense vector (an array of numbers)

• Each element of the vector is a number representing how well the document represents that concept

• E.g. LSA powers the similar skills found in Dice’s skills pages

• See From Frequency to Meaning: Vector Space Models of Semantics for more information on traditional vector space models of word meaning


Traditional Techniques Don’t Scale

• LSA/LSI, LDA and related techniques rely on factorization of very large term-document matrices – very slow and computationally intensive

• Require embedding a machine learning model within the search engine to map new queries to the concept space (latent or topic space)

• Query performance is very poor – unable to utilize the inverted index as all documents have the same number of concepts

• What we want is a way to map words, not documents, to concepts. Then we can embed this in Solr via synonym filters and custom query parsers


Word2Vec and ‘Word Math’

• Word2Vec was developed by Google around 2013 for learning vector representations for words, building on earlier work from Rumelhart, Hinton and Williams in 1986 (see the paper below for a citation of this work)

• Word2Vec Paper: Efficient Estimation of Word Representations in Vector Space

• It works by training a machine learning model to predict the words surrounding a word in a sentence

• Similar words get similar vector representations

• Scales well to very large datasets - no matrix factorization


“Word Math” Example

• Using basic vector arithmetic, you get some interesting patterns

• This illustrates how it represents relationships between words

• E.g. king – man + woman ≈ queen
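As a minimal sketch of this ‘word math’ with gensim (assuming gensim 4.x and a Word2Vec model already trained on your corpus; the model path is illustrative):

from gensim.models import Word2Vec

model = Word2Vec.load("word2vec_jobs.model")  # hypothetical path to a trained model

# king - man + woman: add 'king' and 'woman', subtract 'man'
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# Ordinary similarity queries use the same call
print(model.wv.most_similar("java_developer", topn=5))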


The algorithm learns to represent different types of relationships between words in vector form


Why Do I Care? This is a Search Meetup…

• This algorithm can be used to represent documents as vectors of concepts

• We can then use these representations to do conceptual search

• This will surface many relevant documents missed by keyword matching

• This boosts recall

• This technique can also be used to automatically learn synonyms


A Quick Demo

Using our Dice active jobs index, some example common user queries:

• Data Scientist

• Big Data

• Information Retrieval

• C#

• Web Developer

• CTO

• Project Manager

Note: none of the matching documents shown would be returned by plain keyword matching


How?

GitHub – DiceTechJobs/ConceptualSearch:

1. Pre-process documents – parse HTML, strip noise characters, tokenize words

2. Define important keywords for your domain, or use my code to auto-extract top terms and phrases

3. Train a Word2Vec model on your documents (see the sketch after this list)

4. Using this model, either:

1. Vectors: Use the raw vectors and embed them in Solr using synonyms + payloads

2. Top N Similar: Or extract the top N similar terms with similarities and embed these as weighted synonyms using my custom queryboost parser and tokenizer

3. Clusters: Cluster these vectors by similarity, and map terms to clusters in a synonym file
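A minimal sketch of steps 1–3 with gensim (the tokenizer, file path and parameters are illustrative assumptions, not the exact code in DiceTechJobs/ConceptualSearch):

import re
from gensim.models import Word2Vec

def tokenize(text):
    # Very rough pre-processing: lower-case and keep word characters plus # and + (c#, c++)
    return re.findall(r"[\w#+]+", text.lower())

# One job description per line in a plain-text file (illustrative path)
with open("job_descriptions.txt", encoding="utf-8") as f:
    sentences = [tokenize(line) for line in f]

# Step 3: train Word2Vec (gensim 4.x parameter names)
model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, workers=4)
model.save("word2vec_jobs.model")

# Step 4, option 2: top-N similar terms for a keyword
print(model.wv.most_similar("hadoop", topn=10))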


Define Top Domain Specific Keywords

• If you have a set of documents belonging to a specific domain, it is important to define the important keywords for your domain:

• Use the top few thousand search keywords

• Or use my fast keyword and phrase extraction tool (in GitHub)

• Or use a shingle filter to extract the top 1–4 word sequences (ngrams) by document frequency

• Important to map common phrases to single tokens, e.g. data scientist => data_scientist, java developer => java_developer (see the sketch below)
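One hedged way to learn that phrase-to-single-token mapping is gensim’s Phrases model (a sketch only, assuming gensim 4.x; this is not necessarily how the Dice extraction tool works):

from gensim.models.phrases import Phrases

# 'sentences' is a list of tokenized documents, as in the training sketch above
bigrams = Phrases(sentences, min_count=10, threshold=10.0, delimiter="_")

tokens = ["senior", "data", "scientist", "and", "java", "developer"]
print(bigrams[tokens])
# e.g. ['senior', 'data_scientist', 'and', 'java_developer'] if those phrases are
# frequent enough in the corpus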


Do It Yourself

• All code for this talk is now publicly available on GitHub:

• https://github.com/DiceTechJobs/SolrPlugins - Solr plugins to work with conceptual search, and other Dice plugins, such as a custom MLT handler

• https://github.com/DiceTechJobs/SolrConfigExamples - Examples of Solr configuration file entries to enable conceptual search and other Dice plugins

• https://github.com/DiceTechJobs/ConceptualSearch - Python code to compute the Word2Vec word vectors and generate Solr synonym files



Some Solr Tricks to Make this Happen

1. Keyword Extraction: Use the synonym filter to extract key words from your documents

• Alternatively (for Solr) use the SolrTextTagger (CareerBuilder uses this)

• Both the synonym filter and the SolrTextTagger use a finite state transducer

• This gives you a very naïve but also very fast way to extract entities from text

• Uses a greedy algorithm – if there are multiple possible matches, it takes the one with the most tokens

• Recommended approach for fast entity extraction. If you need a more accurate approach, train a Named Entity Recognition model


Some Solr Tricks to Make this Happen

1. Keyword Extraction: Use the synonym filter to extract key words from your documents

2. Synonym Expansion using Payloads:

• Use the synonym filter to expand a keyword to multiple tokens

• Each token has an associated payload – used to adjust relevancy scores at index or query time

• If we do this at query time, it can be considered query term expansion, using word vector similarity to determine the boosts of the related terms


Synonym File Examples – Vector Method (1)

• Each keyword maps to a set of tokens via a synonym file

• Example Vector Synonym file entry (5 element vector, usually 100+ elements):

• java developer=>001|0.4 002|0.1 003|0.5 005|.9

• Uses a custom token filter that averages these vectors over the entire document (see GitHub - DiceTechJobs/SolrPlugins)

• Relatively fast at index time but some additional indexing overhead

• Very slow to query


Synonym File Examples – Top N Method (2)

• Each keyword maps to a set of most similar keywords via a synonym file

• Top N Synonym file entry (top 5):

• java_developer=>java_j2ee_developer|0.907526 java_architect|0.889903 lead_java_developer|0.867594 j2ee_developer|0.864028 java_engineer|0.861407

• Can configure this in Solr at index time with payloads, a payload-aware query parser and a payload similarity function

• Or you can configure this at query time with a special token filter that converts payloads into term boosts, along with a special parser (see GitHub - DiceTechJobs/SolrPlugins)

• Fast at index and query time if N is reasonable (10-30)
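A minimal sketch of generating a Top-N synonym file like the one above from a trained model (the keyword list and file names are illustrative; the real scripts live in DiceTechJobs/ConceptualSearch):

from gensim.models import Word2Vec

model = Word2Vec.load("word2vec_jobs.model")               # hypothetical path
keywords = ["java_developer", "data_scientist", "hadoop"]  # your domain keywords

with open("synonyms_top_n.txt", "w", encoding="utf-8") as out:
    for kw in keywords:
        if kw not in model.wv:
            continue
        similar = model.wv.most_similar(kw, topn=5)
        # e.g. java_developer=>java_j2ee_developer|0.907526 java_architect|0.889903 ...
        expansion = " ".join(f"{term}|{sim:.6f}" for term, sim in similar)
        out.write(f"{kw}=>{expansion}\n")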


Searching over Clustered Terms

• After we have learned word vectors, we can use a clustering algorithm to cluster terms by their vectors, giving clusters of related words

• Each keyword is mapped to its cluster, and matching occurs between clusters

• Can learn several different sizes of cluster, such as 500, 1000, 5000 clusters

• Apply stronger boosts to fields with smaller clusters (e.g. the 5000-cluster field) using the edismax qf parameter – tighter clusters get more weight

• Code for clustering vectors in GitHub - DiceTechJobs/ConceptualSearch
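A hedged sketch of that clustering step with scikit-learn (the cluster count and file names are illustrative; see DiceTechJobs/ConceptualSearch for the actual code):

import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

model = Word2Vec.load("word2vec_jobs.model")   # hypothetical path
terms = model.wv.index_to_key                  # vocabulary (gensim 4.x)
vectors = np.array([model.wv[t] for t in terms])

kmeans = KMeans(n_clusters=500, random_state=0, n_init=10).fit(vectors)

# Write a cluster synonym file: every term in a cluster maps to the same artificial token
with open("synonyms_clusters_500.txt", "w", encoding="utf-8") as out:
    for term, label in zip(terms, kmeans.labels_):
        out.write(f"{term}=>cluster_{label}\n")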


Synonym File Examples – Clustering Method (3)

• Each keyword in a cluster maps to the same artificial token for that cluster

• Cluster Synonym file entries:

• java=>cluster_171

• java applications=>cluster_171

• java coding=>cluster_171

• java design=>cluster_171

• Doesn’t use payloads so does not require any special plugins

• No noticeable impact on query or indexing performance


Example Clusters Learned from Dice Job Postings

• Note: Labels in bold are manually assigned for interpretability:

• Natural Languages: bi lingual, bilingual, chinese, fluent, french, german, japanese, korean, lingual, localized, portuguese, russian, spanish, speak, speaker

• Apple Programming Languages: cocoa, swift

• Search Engine Technologies: apache solr, elasticsearch, lucene, lucene solr, search, search engines, search technologies, solr, solr lucene

• Microsoft .Net Technologies: c# wcf, microsoft c#, microsoft.net, mvc web, wcf web services, web forms, webforms, windows forms, winforms, wpf wcf


Example Clusters Learned from Dice Job Postings

Attention/Attitude:

attention, attentive, close attention, compromising, conscientious, conscious, customer oriented, customer service focus, customer service oriented, deliver results, delivering results, demonstrated commitment, dependability, dependable, detailed oriented, diligence, diligent, do attitude, ethic, excellent follow, extremely detail oriented, good attention, meticulous, meticulous attention, organized, orientated, outgoing, outstanding customer service, pay attention, personality, pleasant, positive attitude, professional appearance, professional attitude, professional demeanor, punctual, punctuality, self motivated, self motivation, superb, superior, thoroughness


Ideas for Future Work

• Use a machine learning algorithm to learn a sparse representation of the word vectors

• Use word vectors derived from a word-context matrix instead of Word2Vec – gives a sparse vector representation of each word (e.g. HAL – Hyperspace Analogue to Language)

• Incorporate several different vector space models that focus on different aspects of word meaning, such as Dependency-Based Word Embeddings (uses grammatical relationships) and the HAL model mentioned above (weights terms by their proximity in the local context).


Conceptual Search Summary

• It’s easy to overlook recall when performing relevancy tuning

• Conceptual search improves recall while maintaining high precision by matching documents on concepts or ideas.

• In reality this involves learning which terms are related to one another

• Word2Vec is a scalable algorithm for learning related words from a set of documents that gives state-of-the-art results in word analogy tasks

• We can train a Word2Vec model offline, and embed its output into Solr by using the built-in synonym filter and payload functionality, combined with some custom plugins

• Video of my Lucene Revolution 2015 Conceptual Search Talk

Part 2 – How to Automatically Optimize the Search Engine Configuration

Search Engine Configuration

• Modern search engines contain a lot of parameters that can impact the relevancy of the results

• These tend to fall into 2 main areas:

1. Query Configuration Parameters

• Control how the search engine generates queries and computes relevancy scores

• What boosts to specify per field? What similarity algorithm to use? TF-IDF, BM25, custom?

• Disable length normalization on some fields?

• Boosting by attributes – how to boost by the age or location of the document

2. Index and Query Analysis Configuration

• Controls how documents and queries get tokenized for the purpose of matching

• Use stemming? Synonyms? Stop words? Ngrams?

How Do We Ensure the Optimal Configuration?

• Testing on a golden test set:

• Run the top few hundred or thousand queries and measure IR metrics

• Takes a long time to evaluate each configuration – minutes to hours

• Common Approaches:

• Manual Tuning?

• Slow (human in the loop) and subjective

• Grid Search – try every possible combination of settings?

• Naïve – searches the entire grid, does not adjust its search based on its findings

• Slow test time limits how many configurations we can try

• Can we do better?

Can We Apply Machine Learning?

• We have a labelled dataset of queries with relevancy judgements

• Can we frame this as a supervised learning problem, and apply gradient descent?

• No…

• Popular IR metrics (MAP, NDCG, etc.) are non-differentiable

• The set of query configuration and analysis settings cannot be optimized in this way as a gradient is unavailable

Solution: Use a Black Box Optimization Algorithm

• Use an optimization algorithm to optimize a ‘black box’ function

• Black box function – provide the optimization algorithm with a function that takes a set of parameters as inputs and computes a score

• The black box algorithm will then try and choose parameter settings to optimize the score

• This can be thought of as a form of reinforcement learning

• These algorithms will intelligently search the space of possible search configurations to arrive at a solution

Implementation

• The black box function:

• Inputs: search engine configuration settings

• Output: a score denoting the performance of that search engine configuration on the golden test set

• Example Optimization Algorithms:

• Coordinate ascent

• Genetic algorithms

• Bayesian optimization

• Simulated annealing

Implementation

• The problem:

• Optimize the configuration parameters of the search engine powering our content-based recommender engine

• IR metric to optimize:

• Mean Average Precision at 5 documents (MAP@5), averaged over all recommender queries

• Train/test split:

• 80% training data, 20% test data

• Optimization library:

• GPyOpt – open source Bayesian optimization library from the University of Sheffield, UK

• Result:

• 13% improvement in MAP@5 on the test dataset
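A minimal sketch of wiring this up with GPyOpt (the parameter names, ranges and the run_golden_queries helper are illustrative assumptions, not the actual Dice setup):

import numpy as np
import GPyOpt

# Hypothetical helper: applies the given field boosts, runs the golden query set
# against the search engine, and returns MAP@5 on the training split
def run_golden_queries(title_boost, skills_boost, description_boost):
    ...

# Search space for three illustrative edismax-style field boosts
domain = [
    {"name": "title_boost",       "type": "continuous", "domain": (0.0, 10.0)},
    {"name": "skills_boost",      "type": "continuous", "domain": (0.0, 10.0)},
    {"name": "description_boost", "type": "continuous", "domain": (0.0, 10.0)},
]

def objective(X):
    # GPyOpt passes a 2D array of candidate configurations and minimizes,
    # so return the negative MAP@5 for each row
    return np.array([[-run_golden_queries(*row)] for row in X])

opt = GPyOpt.methods.BayesianOptimization(f=objective, domain=domain, acquisition_type="EI")
opt.run_optimization(max_iter=30)
print("best configuration:", opt.x_opt, "best MAP@5:", -opt.fx_opt)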

Ideas for Future Work

• Learn a better ranking function for the search engine:

• Use genetic programming to evolve a ranking function that works better on our dataset

• Use metric learning algorithms (e.g. maximum margin nearest neighbors)

Part 3: Beware of Feedback Loops

Building a Machine Learning System

1. Users interact with the system to produce data

2. Machine learning algorithms turn that data into a model

[Diagram: users interact with the system → produce data → machine learning → model]

What happens if the model’s predictions influence the users’ behavior?

Feedback Loops

1. Users produce labelled data

2. Machine learning algorithms turn that data into a model

3. The model changes user behavior, modifying its own future training data

[Diagram: users interact with the system → produce data → machine learning → model → model changes behavior → users]

Direct Feedback Loops

• If the predictions of the machine learning system alter user behavior in such a way as to affect its own training data, then you have a feedback loop

• Because the system is influencing its own training data, this can lead to gradual changes in the system behavior over time

• This can lead to behavior that is hard to predict before the system is released into production

• Examples – recommender systems, search engines that use machine learned ranking models, route finding systems, ad-targeting systems

Hidden Feedback Loops

• You can also get complex interactions when you have two separate machine learning models that influence each other through the real world

• For example, improvements to a search engine might result in more purchases of certain products, which in turn could influence a recommender system that uses item popularity as one of the features used to generate recommendations

• Feedback loops, and many other challenges involved in building and maintaining machine learning systems in the wild, are covered in these two excellent papers from Google:

•Machine Learning: The High-Interest Credit Card of Technical Debt

•Hidden Technical Debt in Machine Learning Systems

Preventing Feedback Loops

1. Isolate a subset of data from being influenced by the model, for use in training the system

• E.g. generate a subset of recommendations at random, or by using an unsupervised model

• E.g. leave a small proportion of user searches un-ranked by the MLR model

2. Use a reinforcement learning model instead (such as a multi-armed bandit) – the system will dynamically adapt to the users’ behavior, balancing exploring different hypotheses with exploiting what it’s learned to produce accurate predictions
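As a hedged illustration of option 2, a tiny epsilon-greedy multi-armed bandit that keeps exploring instead of always serving the current best choice (purely illustrative, not the Dice implementation):

import random

class EpsilonGreedyBandit:
    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # times each arm (e.g. ranking variant) was shown
        self.values = [0.0] * n_arms  # running mean reward (e.g. click-through) per arm

    def select_arm(self):
        # Explore with probability epsilon, otherwise exploit the best arm so far
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n  # incremental mean update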

Summary

• To improve relevancy in a search engine, it is important first to gather a golden dataset of relevancy judgements

• Machine learned ranking is an excellent approach for improving relevancy.

• However, it mainly focuses on improving precision, not recall, and relies heavily on the quality of the top search results

• Conceptual search is one approach for improving the recall of the search engine

• Black box optimization algorithms can be used to auto-tune the search engine configuration to improve relevancy

• Search and recommendation engines that use supervised machine learning models are prone to feedback loops, which can lead to poor quality predictions

END OF TALK – Questions?