Finding nearest neighbors using K-D Tree

Given N points {Pi} on a 1-D straight line along with their absolute positions {xi} along the line, which are the k nearest points to any randomly selected test point Pj on the line? We find the absolute distance of Pj from every other point, i.e. |xj – xi| for all i ≠ j, then sort the points in increasing order of distance and take the first k. We do not actually need to store and sort all the distances; a max-heap of size k suffices, which we will discuss later.

The same logic can be extended to points in two dimensions, with co-ordinates {(xi , yi )}. But now instead of distances in one dimension, we compute Euclidean distances in two dimensions, i.e. D(Pj , Pi ) = √((xj – xi )2 + (yj – yi )2). For points in d dimensions, we compute the d-dimensional distances of the test point Pj from all other points Pi :

D(Pj , Pi ) = √((x1, j – x1, i )2 + (x2, j – x2, i )2 + … + (xd, j – xd, i )2), where xm, j denotes the position of point Pj along the mth dimension,

and either sort the distances or use a max-heap to get the k nearest points to Pj . Each distance computation takes O(d) time, and since there are N points, all the distance computations together take O(N*d) time. Sorting the distances would add a further O(N*log N), compared to O(N*log k) when keeping a max-heap of size k. Thus for each test point Pj the total run-time complexity comes to O(N*d + N*log k) for finding the k nearest neighbours.
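
As a concrete illustration, here is a minimal Python sketch of this brute-force approach with a max-heap of size k. The function and variable names are illustrative, not from any particular library; Python's heapq is a min-heap, so distances are negated to simulate a max-heap.

import heapq
import math

def k_nearest_brute_force(points, test_point, k):
    # points: list of d-dimensional tuples, test_point: d-dimensional tuple
    heap = []  # entries are (-distance, point); negating turns the min-heap into a max-heap
    for p in points:
        d = math.dist(p, test_point)          # O(d) Euclidean distance
        if len(heap) < k:
            heapq.heappush(heap, (-d, p))
        elif d < -heap[0][0]:                 # closer than the current k-th nearest
            heapq.heapreplace(heap, (-d, p))
    # sort so that the nearest point comes first
    return [p for _, p in sorted(heap, reverse=True)]

points = [(1.0, 2.0), (3.0, 4.0), (0.5, 1.0), (2.0, 2.0)]
print(k_nearest_brute_force(points, (1.0, 1.0), 2))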

These kinds of problems also appear frequently in classification. For example, in text classification, given a document to classify, we find the “distances” of the document to the other documents in the training set, select the nearest k documents for some value of k, and, based on the class information of each of the k documents, assign our test document to the class with the majority among those k nearest documents. The notion of distance here comes from the fact that a document can be considered a vector in d-dimensional space, where d is the number of features present in the document set. In the simplest representation, the entry for document D and feature F is 1 if feature F is present in document D, else 0. Other approaches weight the entries using Term Frequencies and Inverse Document Frequencies.

To understand the K-D tree, let's first assume that we only need to find the single nearest point to a given test point. In our initial problem, where all the points lie on a 1-D straight line, we randomly select one of the points and make it the root of a BST. All points to the left of this point on the number line go to the left sub-tree of the BST and all points to the right go to the right sub-tree. Then, to find the nearest point to the test point Pj with position xj , we compare xj with the root. If the root is less than xj , we search for the nearest point in the right sub-tree, since any point in the left sub-tree would be further away. We recursively traverse the right sub-tree until we cannot traverse further, at which point we return either the node where we stopped or its parent, whichever is nearer. Similar logic applies if xj is less than the root, in which case we search the left sub-tree. Thus our search for the nearest neighbour reduces to searching in a BST when the points can be represented as nodes of a BST.


But what if the points are two-dimensional or multidimensional? There is no direct multidimensional analogue of the binary search tree, so how do we split the tree at a node with multidimensional values? For two-dimensional points, we randomly select a point (x0 , y0 ) as the root and choose the x-axis as the splitting axis: all other points with xi < x0 go to the left of the root and all points with xi >= x0 go to the right of the root.


At the next level, i.e. the left sub-tree and the right sub-tree, we repeat the above step by choosing a random point (x1 , y1 ) in the left sub-tree, but now instead of the x-axis we split on the y-axis: all other points in the left sub-tree with yi < y1 go to the left of (x1 , y1 ) and all points with yi >= y1 go to the right of (x1 , y1 ). A similar step is followed for the right sub-tree. We alternate splitting on the x-axis and the y-axis at each level of the two-dimensional BST. For d dimensions, at each level L of the tree we select the axis L%d (assuming the axes are represented as indices {0, 1, 2, …, d-1}) as the splitting criterion. This is known as a K-D Tree.
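
A small Python sketch of this construction is shown below, picking a random splitting point at each level and cycling through the axes as described. The names KDNode and build_kdtree are illustrative; in practice the median along the splitting axis is often used instead of a random point to keep the tree balanced.

import random

class KDNode:
    def __init__(self, point, axis, left=None, right=None):
        self.point = point    # the d-dimensional coordinates stored at this node
        self.axis = axis      # splitting axis used at this node
        self.left = left
        self.right = right

def build_kdtree(points, depth=0):
    # Pick a random splitting point at each level; axis cycles as depth % d
    if not points:
        return None
    d = len(points[0])
    axis = depth % d
    pivot = random.choice(points)
    rest = list(points)
    rest.remove(pivot)
    left  = [p for p in rest if p[axis] <  pivot[axis]]
    right = [p for p in rest if p[axis] >= pivot[axis]]
    return KDNode(pivot, axis,
                  build_kdtree(left,  depth + 1),
                  build_kdtree(right, depth + 1))

tree = build_kdtree([(2.0, 3.0), (5.0, 4.0), (9.0, 6.0), (4.0, 7.0), (8.0, 1.0), (7.0, 2.0)])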


For exact search of a point in a K-D Tree, we either find the point we are looking for by recursively traversing the tree, or the point does not exist, in which case we return false. For the nearest neighbour, we saw earlier that for points on a 1-dimensional straight line we compared the distance of the test point with both the best candidate node and its parent, since the test point lies between the node and its parent and both are potential nearest neighbours.


In two dimensions, we consider a circle around the test point with radius equal to the distance of the best candidate node (our best guess for the nearest neighbour). If any other node lies within this circle, then the nearest neighbour is the one with the minimum distance from the test point within this circle. For the general case of d-dimensional space, we have a hypersphere centred at the test point and passing through our estimated nearest neighbour found by recursively traversing down the tree.


But how do we prune the tree to include only potential nearest neighbours? Suppose we find the current best estimate for the nearest neighbour after the split on the ith axis. If the hypersphere centred at the test point and passing through the best estimated node cuts through the splitting hyperplane of the ith axis, then a potential nearest neighbour could also lie on the “other side” of that hyperplane, in which case we also have to search the “other side”; otherwise we continue to search recursively only on the “same side” as the current best estimate.

But how do we determine whether the hypersphere crosses the splitting hyperplane? In two dimensions, if the split is on the y-axis, the splitting line is given by the equation y = y0 . A circle centred at (x1 , y1 ) crosses this line only if |y1 – y0 | is less than the radius of the circle. The same reasoning applies in d dimensions, where instead of a circle and a line we have a hypersphere and a hyperplane.

A node in the K-D Tree would have the following structure :

 

struct Node { Node* left; Node* right; int splittingAxis; std::vector<double> values; };


 

where “values” holds the position of the Node along each axis and “splittingAxis” is the axis that was used for splitting at this Node. Following is the pseudocode for finding the nearest neighbour in a K-D Tree :


Node* NearestNeighbourSearch(Node* curr, Node* testNode) {
    if (curr == NULL) return NULL;                 // empty subtree, nothing to return

    int axis = curr->splittingAxis;
    double diff = curr->values[axis] - testNode->values[axis];
    double currentDistance = distance(testNode, curr);

    // choose the side of the splitting plane that contains the test point
    Node *nearSide, *otherSide;
    if (diff < 0) { nearSide = curr->right; otherSide = curr->left; }
    else          { nearSide = curr->left;  otherSide = curr->right; }

    // best candidate found by descending the near side
    Node* bestGuess = NearestNeighbourSearch(nearSide, testNode);
    double bestDistance = (bestGuess != NULL) ? distance(testNode, bestGuess) : Inf;

    // the current node itself may be closer than anything below it
    if (currentDistance < bestDistance) { bestGuess = curr; bestDistance = currentDistance; }

    // if the hypersphere of radius bestDistance crosses the splitting plane,
    // the true nearest neighbour may lie on the other side
    if (bestDistance > abs(diff)) {
        Node* otherGuess = NearestNeighbourSearch(otherSide, testNode);
        if (otherGuess != NULL && distance(testNode, otherGuess) < bestDistance)
            bestGuess = otherGuess;
    }

    return bestGuess;
}




To find the k nearest neighbours, we modify the above pseudocode to maintain a max-heap (priority queue) of Nodes of size k, ordered by the distances of the Nodes from the test Node. If the heap has fewer than k entries, we insert the candidate Node currently being considered into the heap and “heapify”. If the heap already has k entries, we compare the candidate’s distance with the root of the max-heap (whose distance is currently the maximum in the heap); if the candidate’s distance is smaller, we pop the root, insert the candidate and heapify again, otherwise we do nothing. When the recursion finishes, the max-heap contains the k nearest neighbours of the test Node.
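
A minimal Python sketch of such a bounded max-heap is shown below; the class and method names (KNearestHeap, offer, worst_distance) are illustrative. During the K-D tree traversal, the distance of the current k-th nearest candidate plays the role of the pruning radius.

import heapq
import itertools

class KNearestHeap:
    # Bounded max-heap holding the k best candidates seen so far, keyed by distance
    def __init__(self, k):
        self.k = k
        self._heap = []                  # entries are (-distance, tiebreak, node)
        self._tie = itertools.count()    # tiebreaker so nodes are never compared directly

    def offer(self, node, distance):
        if len(self._heap) < self.k:
            heapq.heappush(self._heap, (-distance, next(self._tie), node))
        elif distance < -self._heap[0][0]:                 # better than the current worst
            heapq.heapreplace(self._heap, (-distance, next(self._tie), node))

    def worst_distance(self):
        # Distance of the current k-th nearest candidate; this is the pruning radius:
        # the "other side" of a split is searched only while worst_distance() > |diff|
        return -self._heap[0][0] if len(self._heap) == self.k else float("inf")

    def result(self):
        return [node for _, _, node in sorted(self._heap, reverse=True)]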

Machine Learning Fundamentals : EM Algorithm

Earlier we had seen, that given a set of independent and identically distributed observations drawn from a probability distribution with unknown parameters, we can estimate the parameters of the distribution by maximizing the (log-)likelihood of the distribution generating the observation. We took an example of a series of coin tosses which is drawn from the binomial distribution. But it may not always be analytically tractable to maximize the expression for the distribution.

Consider again the example of the coin toss. But this time instead of one coin there are two coins, A and B. We observe the outcomes of N consecutive tosses. Let's say that the probability of coin A coming up heads is p, while the probability of coin B coming up heads is q. If we also know which coin was used for which toss, then using maximum likelihood as earlier, the parameter p would equal (# heads with coin A)/(# tosses with coin A), and similarly q = (# heads with coin B)/(# tosses with coin B). This is straightforward because we have all the information needed to estimate p and q directly.

But what if we have incomplete information and using incomplete information we need to estimate the unknown parameters. In the example above we already knew which coin was used for which toss, but what if we do not know which coin generated which outcome, then we cannot directly use MLE to estimate the unknown parameters p and q. However, if we had some way of completing the data (in our case, guessing correctly which coin was used in each of the tosses), then we could reduce parameter estimation for this problem with incomplete data to maximum likelihood estimation with complete data.

This is where the EM algorithm comes into play. It is a two-step iterative algorithm: in the ‘E’ step, using initial parameters (p(0), q(0)), we estimate which of the coins A and B could have generated each outcome. Then, using these estimates, in the ‘M’ step we estimate a new set of parameters (p(1), q(1)) using our old maximum likelihood estimation method. We repeat the above steps, i.e. using (p(t), q(t)) in the ‘E’ step and estimating (p(t+1), q(t+1)) in the ‘M’ step, until the likelihood attains a local (or global) maximum.

Before starting, we assume that we have the complete data. In the ‘E’ step of the (t+1)th iteration, we find the probability distribution over which coin generated each outcome, by first finding the expectation of the log likelihood of the complete data with the parameters found in the tth iteration, and then maximizing this expectation. For the more general case of M coins and N tosses, denote the observed outcomes by Xi , the probability of heads of the kth coin by Θk , and the incomplete data by Yi , i.e. Yi = k means that the ith outcome was generated by the kth coin. Also assume that we know the prior probabilities of choosing each of the coins, say P(Yi = k) = αk .

Then the log likelihood of the complete data (assuming that Y is available) is:

log(L(Θ | X, Y)) = log(P(X, Y | Θ)), by Bayes Theorem

P(Xi , Yi | Θ) = P(Xi | Yi , ΘYi ) P(Yi | ΘYi ) = αYi P(Xi | Yi , ΘYi )

because P(Yi | ΘYi ) = P(Yi ) = αYi are the prior probabilities. Hence:

log(L(Θ | X, Y)) = ∑i log (αYi P(Xi | Yi , ΘYi ))

The posterior probability of Yi given the observed outcome Xi is

P(Yi | Xi , Θ) = P(Xi , Yi | Θ)/P(Xi | Θ), according to Bayes Theorem. But

P(Xi | Θ) = ∑Yi P(Xi | Yi , ΘYi ) P(Yi | ΘYi ) = ∑Yi αYi P(Xi | Yi , ΘYi )

Thus, P(Yi | Xi , Θ) = (αYi P(Xi | Yi , ΘYi ))/(∑Yi αYi P(Xi | Yi , ΘYi ))

In the ‘E’ step of the algorithm, we find the quantity

Q(Θ, Θ(t)) = ∑Y log(L(Θ | X, Y)) P(Y | X , Θ(t) )

where Θ(t) is the value of the unknown parameters (probabilities of heads for each coin) in the tth iteration. If t=0, we assume some initial values for the set Θ(t) and at each iteration, we improve upon the values. Q(Θ, Θ(t)) is the expectation of the log(L(Θ | X, Y)). Thus for each Y=k, we multiply the log likelihood with the probability of Y=k, and sum over to find expectation :

‘E’ Step : Q(Θ, Θ(t)) = ∑k log(L(Θ | X, Y=k)) P(Y=k | X , Θ(t) )

Maximizing the above quantity using Lagrangian multipliers (omitting the derivation), we get the new values of

αk(t+1) = (1/N) ∑i P(Yi = k | Xi , Θ(t) )

In the ‘M’ step, differentiating Q(Θ, Θ(t)) w.r.t. Θ, setting the derivatives to zero, and using αk(t+1), yields Θ(t+1). For the case of two coins with probabilities p and q, we get :

p(t+1) = ∑i Xi P(A | Xi , p(t), q(t)) / ∑i P(A | Xi , p(t), q(t)), and

q(t+1) = ∑i Xi (1 – P(A | Xi , p(t), q(t))) / ∑i (1 – P(A | Xi , p(t), q(t)))

where P(A | Xi , p(t), q(t)) is the posterior probability, computed in the ‘E’ step, that the ith outcome was generated by coin A, and 1 – P(A | Xi , p(t), q(t)) is the posterior probability that it was generated by coin B. When Q(Θ, Θ(t)) stops improving, i.e. hits a local maximum, we stop the iteration and report the final values of p and q. The expectation maximization algorithm is a natural generalization of maximum likelihood estimation to the incomplete data case. It finds use in estimating hidden parameters in Hidden Markov Models, which we will explore next.
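
Below is a minimal Python sketch of these updates for the two-coin case, assuming each observation Xi is a single 0/1 toss outcome. The function name, initial guesses and toy data are illustrative; with such simple data the two coins are not uniquely identifiable, so the sketch only demonstrates the mechanics of the ‘E’ and ‘M’ updates.

def em_two_coins(X, p, q, alpha, iters=50):
    # X: list of 0/1 outcomes; p, q: initial guesses for P(heads | A) and P(heads | B)
    # alpha: initial prior P(coin A)
    for _ in range(iters):
        # 'E' step: posterior probability that coin A produced each outcome
        post_A = []
        for x in X:
            la = alpha * (p if x == 1 else 1 - p)
            lb = (1 - alpha) * (q if x == 1 else 1 - q)
            post_A.append(la / (la + lb))
        # 'M' step: re-estimate alpha, p and q from the expected counts
        alpha = sum(post_A) / len(X)
        p = sum(w * x for w, x in zip(post_A, X)) / sum(post_A)
        q = sum((1 - w) * x for w, x in zip(post_A, X)) / sum(1 - w for w in post_A)
    return p, q, alpha

# Example: 100 tosses with 70 heads and 30 tails, arbitrary initial guesses
X = [1] * 70 + [0] * 30
print(em_two_coins(X, p=0.6, q=0.4, alpha=0.5))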

Machine Learning Fundamentals : Maximum Likelihood Estimation

On tossing a coin 10 times, heads lands 7 times while tails lands 3 times. Assuming that the coin can be either fair or biased, we do not know the probability distribution of each outcome. Let's suppose that the probability of landing heads is p; then the probability of tails is (1-p). What is the likelihood that this probability distribution generated the outcome, i.e. the probability of 7 heads and 3 tails? Since tossing a coin follows the Binomial Distribution, we have

L(p) = 10! / (7! 3!) p7(1-p)3 = 120 p7(1-p)3

According to the principle of Maximum Likelihood Estimation, the value of p should be as to maximize the Log-likelihood of the outcome. Computing the Log-likelihood:

Log L(p) = log 120 + 7 log p + 3 log (1-p)

To maximize the Log-likelihood, we differentiate the above w.r.t. p and set it to zero, i.e. (7/p) – 3/(1-p) = 0 or p = 0.7.

MLE is used to estimate the parameters of a probability distribution given a set of independent and identically distributed observations. In the above case it was the distribution of a coin. The same logic could be extended to estimate parameters of complex probability distributions.

Given a sequence of numbers x1, x2, …, xN following a Normal distribution N(μ, σ) with parameters μ and σ, before we estimate μ and σ we need the likelihood that N(μ, σ) generated the sequence x1, x2, …, xN :

L(μ, σ | xi) = ∏i (1/√2π) (1/σ) exp(-(xi – μ)2/2σ2)

L(μ, σ | xi) = (1/√2π)N  (1/σ)N  exp(-(1/2σ2) ∑i (xi – μ)2)

Log L(μ, σ | xi) = N log (1/√2π) – N log σ – (1/2σ2)  ∑i (xi – μ)2

Taking derivative of the above Log likelihood w.r.t. μ, and setting it to zero, we get

μ = (x1 + x2 + …+ xN)/N,

which is the mean of the sequence of numbers. Next taking derivative w.r.t. σ and setting it to zero we get

σ2 = (1/N) ∑i (xi – μ)2

Thus σ2 is the variance of the sequence of numbers, or σ is the standard deviation. Given a random observation and the probability distribution generating the observation, we can use MLE to derive the parameters for the distribution that generated the observation.
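
A quick Python sketch confirming that the MLE of the Normal parameters is simply the sample mean and the standard deviation with divisor N, as derived above (the function name and the synthetic data are illustrative):

import math
import random

def gaussian_mle(xs):
    # MLE of (mu, sigma): sample mean and the standard deviation with divisor N
    n = len(xs)
    mu = sum(xs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)
    return mu, sigma

random.seed(0)
sample = [random.gauss(5.0, 2.0) for _ in range(10000)]
print(gaussian_mle(sample))   # should be close to (5.0, 2.0)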

Document Sequencing using Dynamic Programming

Sectors dealing with large volumes of documents usually deploy batch scanning software to store the documents in electronic format as image files. This software scans thousands of paper documents as a single batch and stores them in a designated folder structure, and as a result multi-page documents lose their sequence information. In the financial industry, for example, scanning a bunch of 6 pages of a Mortgage application and 4 pages of an IRS Form using a batch scanner could end up numbering the pages such that the first 4 images are from the Mortgage application, the next 2 from the IRS Form, the next 2 from the remaining Mortgage application pages and the last 2 from the IRS Form.

There are techniques like putting 1-D or 2-D bar codes on the first pages of each document for separation. We will use an algorithmic technique to automatically sequence and separate the documents from the scanned image files. Firstly we will do OCR on the image files to convert them to raw text files and then use Supervised learning algorithms for textual classification.

The problem of classifying a page into a Loan Application, or a Mortgage or a Credit Report can be achieved with good enough accuracy using a supervised learner like a Support Vector Machine, or a Random Forest or an Ensemble of learners. We would usually train the model with classes such as Mortgage or Credit Report and so on. But when we need to identify sequences and sequence boundaries, we have to modify the model to consider the page type of a document along with the document type i.e. now instead of Mortgage, we would model with classes Mortgage First Page, Mortgage Middle Page and Mortgage Last Page.

Given our trained model and a bunch of pages to classify and sequence, the model predicts the document type as well as the page type for each page in the bunch, along with confidence scores for the prediction. Let us denote the pages by Ai, the predicted document types by di, the predicted page types by pi and the confidence scores by si, for i = 1 to N. Assuming that the confidence scores si follow a normal distribution, any sequence of length k formed from the pages Aj, Aj+1, …, Aj+k-1 would have as its most likely score the mean of the scores of the pages in the sequence, i.e.

Sj, j+k-1 = (sj + sj+1 + … + sj+k-1)/k,

where Sj, j+k-1 denotes the score of sequence starting at j of length k. Then the probability that the page Aj, with score sj, belongs to a sequence with score Sj, j+k-1 is proportional to

exp(-(Sj, j+k-1 – sj)2)

Thus the probability that each of the pages Aj, Aj+1, …, Aj+k-1, belongs to a sequence with score Sj, j+k-1 is the product of the above probabilities for each score si i.e.

Pj, j+k-1 = ∏i exp(-(Sj, j+k-1 – si)2) = exp(-∑i (Sj, j+k-1 – si)2) for i = j to j+k-1

For pages Ak and Ak-1, if the predicted document types dk ≠ dk-1, then we assume that page Ak cannot be in the same sequence as Ak-1; the probability of page Ak belonging to the previous sequence is 0, but Ak can start another sequence or can itself be a standalone sequence. In cases where the page Ak has a very low score, we can safely ignore the condition dk ≠ dk-1, consider it to be of the same document type as Ak-1, i.e. dk = dk-1, and continue forming the sequence without breaking.

Since we have modelled and predicted the page type for each page, i.e. “First”, “Middle” and “Last”, we can use it for sequence formation. For example, we can set rules that a sequence only starts at a “First” page, or ends at a “Last” page, or that a “First” page is followed by “Middle” pages and then “Last” pages within a sequence. In multi-page documents the “First” page is discriminative from the “Middle” and “Last” pages, but since a “Middle” page of one sample could resemble the “Last” page of another sample (and vice versa) in the training set, the “Middle” and “Last” pages are not very discriminative from each other in the predicted bunch. Thus we will only use the rule that a sequence starts at a “First” page.

First we form initial sequences from the bunch of pages based on the “First” pages and on whether the predicted document type changes, i.e. dk ≠ dk-1. After this we have multiple sequences from the original bunch of pages. Is it possible to further sub-sequence each sequence? Let's say that for the sequence {Aj, Aj+1, …, Aj+k-1}, starting at the jth index and of length k, the probability of the sequence is Pj, j+k-1 . Is there an index ‘v’, with j ≤ v < j+k-1, such that Pj, v * Pv+1, j+k-1 > Pj, j+k-1 ? If there is such an index ‘v’, then we further sub-sequence the sequence starting at j of length k, because doing so increases the probability of the sequence. If there are multiple such ‘v’s, we select the one for which Pj, v * Pv+1, j+k-1 is maximum, i.e.

v* = arg maxv {Pj, v * Pv+1, j+k-1 }, provided that maxv {Pj, v * Pv+1, j+k-1 } > Pj, j+k-1

Again, for each sub-sequence formed, we can recursively sub-sequence further if we can find an index at which splitting gives a higher probability for each part. This procedure produces the most probable partition of the sequence. One way to do this is to use recursion on each sub-sequence and find the appropriate partitions, but plain recursion would do redundant calculations, and we know the tool to solve that problem: yes, Dynamic Programming.

For the sequence {Aj, Aj+1, …, Aj+k-1}, first consider all 1-length sub-sequences. Each 1-length sub-sequence has probability equal to 1, since the mean of the score is the score itself, i.e. Pj, j = 1. Then consider all 2-length sub-sequences {Aj, Aj+1}. For a 2-length sub-sequence, if Pj, j * Pj+1, j+1 > Pj, j+1 then we have separate sub-sequences {Aj} {Aj+1} and update Pj, j+1 = Pj, j * Pj+1, j+1 , else we have an “unbroken” sequence {Aj, Aj+1}. Extending this calculation to some q-length (q < k) sub-sequence, find

Q = maxv {Pj, v * Pv+1, j+q-1} over j ≤ v < j+q-1

If Q > Pj, j+q-1 then split the sub-sequence {Aj, Aj+1, …, Aj+q-1} at ‘v’ into {Aj, Aj+1, …, Av} {Av+1, Av+2, …, Aj+q-1} and assign the new, updated probability Pj, j+q-1 = Q. This works because we have already computed the probabilities Pj, v and Pv+1, j+q-1 in earlier steps, and those probabilities are already optimal for the sub-sequences starting at j and ending at v, and starting at v+1 and ending at j+q-1, respectively. Finally we do the calculation for the k-length sequence, find the ‘v’ for which Pj, v * Pv+1, j+k-1 is maximum and partition the sequence into {Aj, Aj+1, …, Av} {Av+1, Av+2, …, Aj+k-1}.

After finding the partition for the k-length sequence, how do we recover all the partitions of the sub-sequences within it? We do it recursively. We use an array to track the chosen partition for each q-length sub-sequence computed in the earlier steps. After finding the partition ‘v’ for the k-length sequence, we recursively find the partitions of the (v-j+1)-length sub-sequence on the left and the (j+k-1-v)-length sub-sequence on the right using the array of tracked partitions.
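
Below is a minimal Python sketch of this dynamic program for a single sequence of page scores; the function and variable names are illustrative. It fills the interval table bottom-up and then recovers the cut points recursively from a table of tracked partitions.

import math

def segment_scores(scores):
    # scores: per-page confidence scores of one initially formed sequence
    n = len(scores)

    def unbroken_prob(i, j):
        # Probability that pages i..j form one unbroken sequence: exp(-sum (mean - s)^2)
        mean = sum(scores[i:j + 1]) / (j - i + 1)
        return math.exp(-sum((mean - s) ** 2 for s in scores[i:j + 1]))

    # best[i][j] = best achievable probability for pages i..j
    # cut[i][j]  = index v at which i..j is split (None if kept unbroken)
    best = [[0.0] * n for _ in range(n)]
    cut = [[None] * n for _ in range(n)]
    for length in range(1, n + 1):
        for i in range(0, n - length + 1):
            j = i + length - 1
            best[i][j] = unbroken_prob(i, j)        # 1-length ranges give exactly 1
            for v in range(i, j):
                candidate = best[i][v] * best[v + 1][j]
                if candidate > best[i][j]:
                    best[i][j], cut[i][j] = candidate, v

    def recover(i, j, out):
        # Recursively recover the partition boundaries from the cut table
        if cut[i][j] is None:
            out.append((i, j))
        else:
            recover(i, cut[i][j], out)
            recover(cut[i][j] + 1, j, out)
        return out

    return best[0][n - 1], recover(0, n - 1, [])

print(segment_scores([0.9, 0.85, 0.2, 0.88, 0.91]))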

Feature Selection using Kullback-Leibler Divergence

The Kullback-Leibler (KL) divergence is a fundamental quantity of information theory that measures the proximity of two probability distributions. If the true probability distribution of a sequence of events is denoted by P, then for any distribution Q that is used to model or approximate the true distribution P, the information lost in the process is the KL Divergence.

DKL (P||Q) = ∑i Pi log2 (Pi / Qi)

As you can see, this quantity is related to the mutual information that we used earlier for feature selection. Mutual Information is the KL Divergence of the joint distribution of feature and class, P(Class, Feature), from the product of their marginal distributions P(Class)*P(Feature).

Given that we know the joint distribution P(Class, Feature), i.e. the probability of a document containing feature F and belonging to class C, approximating it with P(Class)*P(Feature) is exact if and only if the feature F is independent of the class C. If, instead, the probability of feature F occurring in documents of class C is higher than in other documents, then our approximation has errors, as it does not account for this additional information. The higher the KL divergence, the more the feature diverges from being equally probable in all classes.

Suppose the feature F is contained in N documents, and out of these N documents, N1 belong to class c1, N2 to class c2 and N3 to class c3. What is the likelihood of observing this distribution given that the feature is equally likely to be present in all 3 classes ?

Likelihood = N!/(N1! N2! N3!) (1/3)N

Then finding the average log likelihood of the above quantity, we get :

L = (1/N) log (N!/(N1! N2! N3!) (1/3)N)

Using Stirling’s approximation log N! ≈ N log N – N, for large values of N, we get

L = (1/N) (N log N – N – N1 log N1 – N2 log N2 – N3 log N3 + N1 + N2 + N3 – N log 3)

= (1/N) (N log N – N1 log N1 – N2 log N2 – N3 log N3 – N log 3), since N1 + N2 + N3 = N

= (1/N) (- N1 log (3N1/N)- N2 log (3N2/N) – N3 log (3N3/N))

If we denote Pi = (Ni /N) and Qi = 1/3, then the above average log likelihood equals – ∑i Pi log (Pi / Qi), i.e. the negative of the KL Divergence (up to the base of the logarithm).

Hence the KL Divergence is the negative of the average log likelihood of observing the joint distribution P(Class, Feature) under the assumption that the feature is equally probable in all the classes, i.e. as if features were placed in documents at random, without any information about which class each document belongs to.
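
The following Python sketch checks this relation numerically; the counts N1, N2, N3 below are made up for illustration, and the helper names are not from any particular library.

import math

def kl_divergence(P, Q):
    # KL divergence D(P || Q) in bits, as defined above
    return sum(p * math.log2(p / q) for p, q in zip(P, Q) if p > 0)

N1, N2, N3 = 50, 30, 20
N = N1 + N2 + N3
P = [N1 / N, N2 / N, N3 / N]   # observed class distribution of the feature
Q = [1 / 3] * 3                # feature assumed equally likely in all classes

# Exact average log2-likelihood of the multinomial outcome under equal class probabilities,
# computed via log-gamma to avoid huge factorials
log2_multinomial = (math.lgamma(N + 1) - math.lgamma(N1 + 1)
                    - math.lgamma(N2 + 1) - math.lgamma(N3 + 1)) / math.log(2)
avg_log_likelihood = (log2_multinomial + N * math.log2(1 / 3)) / N

print(avg_log_likelihood, -kl_divergence(P, Q))   # close; equal in the large-N limit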

Combining prediction of two classifiers

In our problem of classifying a product review as “positive” or “negative”, we had earlier used a Bayesian classifier to predict the latest review, using the already tagged older reviews as training data. Instead of using the Bayesian classifier alone, let's suppose we also used the K-Nearest Neighbours algorithm along with Naive Bayes. How would you combine the results obtained from Naive Bayes and KNN to get the best overall accuracy? The task is simple when both algorithms predict the same output for a product review, but it becomes much more difficult when their outputs differ.

From the prediction data, for both Naive Bayes and KNN, scores above 0.75 are correct at least 90% of the time. Given that a review is reported as “positive” with score 0.77 by Naive Bayes and as “negative” with score 0.83 by KNN, which prediction do we choose? We do not have enough information to come to a conclusion right away. But if I now add the information that for scores between 0.75 and 0.85, KNN gives 40% incorrect predictions whereas Naive Bayes gives 20% incorrect predictions, which prediction will you choose? Although you are still uncertain, given the additional information you are now more inclined to choose the Naive Bayes prediction over KNN, and with high probability you are right.

We can further improve our chances of choosing the correct prediction by computing the percentage of accurate predictions conditioned on the class information. In the above example, where Naive Bayes predicted “positive” and KNN predicted “negative”, instead of finding the percentage of correct (incorrect) predictions between scores 0.75 and 0.85 for both Naive Bayes and KNN, find the percentage of correct (incorrect) predictions with “positive” tags between scores 0.75 and 0.85 for Naive Bayes, and the percentage of correct (incorrect) predictions with “negative” tags between scores 0.75 and 0.85 for KNN, and then compare the percentages directly. Using class information is helpful because the distribution of scores may differ between classes, so relying on an overall estimate is sometimes risky.

For example given a few thousand reviews to validate from the training set, each algorithm gives its own sequence of predictions and scores. Now using the actual sentiment tags associated with the reviews, create tables for each algorithm with rows being score ranges and columns “Positive Predicted”, “Positive Correct”, “Negative Predicted”, “Negative Correct”.

Naive Bayes Scores Positive Predicted Positive Correct Negative Predicted Negative Correct
0.10 – 0.15 10 1 13 1
0.75 – 0.80 25 22 18 14
0.95 – 1.00 8 8 5 5

Similarly for KNN :

KNN Scores Positive Predicted Positive Correct Negative Predicted Negative Correct
0.10 – 0.15 5 0 7 0
0.80 – 0.85 30 20 24 12
0.95 – 1.00 10 10 7 7

The entry NB 0.75-0.80, Positive Predicted would denote the number of reviews predicted as “positive” within the score range 0.75 and 0.80 for Naive Bayes, similarly NB 0.75-0.80, Positive Correct would denote the number of reviews “correctly” predicted as “positive” within the score range 0.75 and 0.80 for Naive Bayes and similarly for the KNN algorithm.

Then, on applying both algorithms to the test review, the review is reported as “positive” with score 0.77 by Naive Bayes and as “negative” with score 0.83 by KNN. From the above tables we see that for Naive Bayes predictions within the score range 0.75 to 0.80, “Positive” predictions are correct 88% of the time, while for KNN predictions within the score range 0.80 to 0.85, “Negative” predictions are correct only 50% of the time. Thus we classify the review as “Positive”, since we have greater confidence in that prediction. Although KNN might perform better overall in terms of accuracy, locally their performance can differ.
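
A minimal Python sketch of this lookup is shown below; the table contents mirror the example tables above, and the function name bucket_accuracy is illustrative.

def bucket_accuracy(table, score, predicted_class):
    # table maps a (low, high) score range to {"Positive": (predicted, correct), ...}
    for (low, high), counts in table.items():
        if low <= score < high:
            predicted, correct = counts[predicted_class]
            return correct / predicted if predicted else 0.0
    return 0.0

nb_table  = {(0.75, 0.80): {"Positive": (25, 22), "Negative": (18, 14)}}
knn_table = {(0.80, 0.85): {"Positive": (30, 20), "Negative": (24, 12)}}

nb_conf  = bucket_accuracy(nb_table, 0.77, "Positive")    # 22/25 = 0.88
knn_conf = bucket_accuracy(knn_table, 0.83, "Negative")   # 12/24 = 0.50
final = "Positive" if nb_conf >= knn_conf else "Negative"
print(nb_conf, knn_conf, final)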

We need not do the above calculations if both NB and KNN predict the same class, even if the scores are different. In cases where the count in NB 0.75-0.80, Positive Predicted is very small, increase the window size to 0.70-0.80 or 0.75-0.85, or just take the average of both. In another variation, we can train a Decision Tree learner separately on the prediction outputs, where the input features to the Decision Tree are the predicted classes and the predicted scores from each of the ensemble learners, and the target labels are the actual class labels.

In the Decision Tree variant, we do not have to manually select a constant small range of scores (like 0.80 to 0.85 and so on); the decision tree learner automatically selects ranges such that the entropy is minimized, i.e. within each selected score range, the predictions of one of the classifiers dominate, so that the final prediction becomes easier. Use a 50-50 split on the validation samples: train the Decision Tree learner on 50% of them and validate on the remaining 50%. We will discuss in detail how to combine an ensemble of learners using C5.0 Decision Trees with AdaBoost later.

Feature Selection using Chi Square Test

Given two random variables X and Y, we propose a null hypothesis H0 which states that the two variables are independent, i.e. the observed distribution of data over the two random variables is due to chance alone, and an alternative hypothesis Ha which says that the two variables are dependent and their observed distribution is not due to chance alone. Using the Chi square test we reject one of the hypotheses and accept the other.

Let us test the hypothesis that gender is independent of engineering as a stream of study. This is the null hypothesis. The random variable X takes on the values {Male, Female} and the random variable Y takes on the values {Engineering, Non-Engineering}. The following data is available :

Gender/Stream Engineering Non-Engineering Total
Male 60 40 100
Female 30 70 100
Total 90 110 200

Suppose gender and the choice of studying Engineering were independent of each other, i.e. both were determined by chance alone. Then, assuming independence, the expected number of males studying Engineering would be 100*90/200 = 45. Similarly, the expected number of males in Non-Engineering would be 100*110/200 = 55, females in Engineering 45 and females in Non-Engineering 55. The table of expected values would then be :

Gender/Stream Engineering Non-Engineering Total
Male 45 55 100
Female 45 55 100
Total 90 110 200

Given the observed and the expected data, chi square value is

χ2 = ∑ (Oi – Ei)2/Ei

where Oi denotes the observed values and Ei denotes the expected values. Hence

χ2 = (60-45)2/45 + (40-55)2/55 + (30-45)2/45 + (70-55)2/55 = 18.2

The chi square value alone does not give much information. We need to find the p-value (the probability of observing a chi square value at least this large if X and Y were truly independent) using the χ2 value and the degrees of freedom. The degrees of freedom for given X and Y is the product (number of categories in X – 1) * (number of categories in Y – 1) = (2-1)*(2-1) = 1. Computing the p-value for χ2 = 18.2 with 1 degree of freedom, we get a p-value of almost 0, i.e. the choice of stream of study is dependent on the gender, hence the null hypothesis is rejected. By convention, the “cutoff” point for a p-value is 0.05; anything below that is considered a very low probability, while anything above it is considered a reasonable probability.
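
Below is a small Python sketch of this test, assuming SciPy is available for the chi square distribution; the expected counts and the statistic mirror the worked example above.

from scipy.stats import chi2

observed = [[60, 40], [30, 70]]
row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
total = sum(row_totals)

# Expected counts under independence: row_total * col_total / grand_total
expected = [[rt * ct / total for ct in col_totals] for rt in row_totals]

chi_sq = sum((o - e) ** 2 / e
             for o_row, e_row in zip(observed, expected)
             for o, e in zip(o_row, e_row))
dof = (len(observed) - 1) * (len(observed[0]) - 1)
p_value = chi2.sf(chi_sq, dof)        # survival function = 1 - CDF
print(chi_sq, dof, p_value)           # roughly 18.2, 1, and a p-value close to 0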

Coming back to our text classification problem, we propose the null hypothesis that presence/absence of a feature F in a document is independent of the class the document belongs to. Let X be the event that the term F is present in a document and Y be the event the document belongs to class c1 :

Feature/Class Belongs to class c1 Do not belong to class c1 Total
F present 30 170 200
F absent 70 730 800
Total 100 900 1000

The expected values, assuming the presence/absence of feature F is independent of the class c1, are :

Feature/Class Belongs to class c1 Do not belong to class c1 Total
F present 20 180 200
F absent 80 720 800
Total 100 900 1000

The chi square value is : χ2 = 6.944. Similarly for classes c2 and c3 , the chi square values are 62.5 and 62.5 respectively. Hence the expected chi square value for feature F is

χ2 (F) = P(c1) * 6.944 + P(c2) * 62.5 + P(c3) * 62.5,

P(ci) is the probability of a document belonging to class ci, thus

(1/10)*6.944 + (4/5)*62.5 + (1/10)*62.5 = 56.94,

and using degrees of freedom = 1, we find that the p-value is almost 0. Since the p-value is much less than 0.05, we can safely reject the null hypothesis and say that feature F is probably dependent on the class a document belongs to. Thus feature F is a good discriminative feature.

Feature Selection using Mutual Information

Given two random variables X and Y, mutual information measures how much knowing one of these variables reduces uncertainty about the other. For example, if X and Y are independent, then knowing X does not give any information about Y and vice versa, so their mutual information is zero. At the other extreme, if X is a deterministic function of Y and Y is a deterministic function of X, then all information conveyed by X is shared with Y, i.e. knowing X determines the value of Y and vice versa. If the variable X takes on the values {x1, x2, …} and Y takes on the values {y1, y2, …}, then the mutual information of X and Y is the expected value of the point-wise mutual information log (P(X=xi, Y=yj)/P(X=xi)P(Y=yj)):

MI (X, Y) = ∑ P(X=xi, Y=yj) * log (P(X=xi, Y=yj)/P(X=xi)P(Y=yj))        ∀ (X=xi, Y=yj)

All log values in further calculations are taken base 2.

If X and Y are mutually independent, then for every (X=xi, Y=yj), P(X=xi, Y=yj) = P(X=xi)P(Y=yj), hence each term contributes log 1 = 0 and the mutual information is zero. Mutual Information is also related to entropy (the uncertainty of the random variables) :

MI (X, Y) = H(Y) – H(Y|X) = H(X) – H(X|Y)

where H(Y) is the entropy (uncertainty) of the variable Y and H(Y|X) is the remaining uncertainty in Y after knowing X. Hence H(Y) – H(Y|X) is the uncertainty in Y removed by knowing that X also occurred. Let X be the event that the feature “Apple” occurs in a document, and Y the event that the feature “Steve Jobs” occurs in a document. If the document is about the Apple computers founded by Steve Jobs, then P(X, Y), the joint probability of X and Y, is greater than the product of the probabilities P(X)P(Y), because the presence of the feature “Apple” increases the chance of the feature “Steve Jobs” being present and vice versa. Thus the mutual information is high in this case, compared to a document about “Health benefits of fruits every day”. In terms of entropy, knowing that the feature “Apple” is present makes the occurrence of the feature “Steve Jobs” much more certain than if “Apple” had not occurred, and vice versa. In the context of text classification, let X be the occurrence of the feature F in a document and Y be the event that the document belongs to class C. The point-wise mutual information for a document containing feature F and also belonging to class C is:

PMI(F, C) = log (P(X=F, Y=C)/P(X=F)P(Y=C)),

Similarly, the PMI for a document containing feature F but not belonging to class C is PMI(F, CT); for a document not containing feature F but belonging to class C it is PMI(FT, C); and for a document neither containing feature F nor belonging to class C it is PMI(FT, CT), where X=FT denotes the event that the document does not contain feature F and Y=CT denotes the event that the document does not belong to class C :

PMI(F, CT) = log (P(X=F, Y=CT)/P(X=F)P(Y=CT)),

PMI(FT, C) = log (P(X=FT, Y=C)/P(X=FT)P(Y=C)),

PMI(FT, CT) = log (P(X=FT, Y=CT)/P(X=FT)P(Y=CT)),

Thus the expected mutual information for feature F and class C is given as :

MI(F, C) = P(X=F, Y=C) * PMI(F, C) + P(X=F, Y=CT) * PMI(F, CT) + P(X=FT, Y=C) * PMI(FT, C) + P(X=FT, Y=CT) * PMI(FT, CT)

For multiple classes Ci, to check whether a feature F is discriminant or not, find the expected MI for feature F, using MI values of feature F with all the classes, i.e.

D(F) = ∑ P(Ci) * MI(F, Ci), where D(F) is the discrimination factor for feature F

From our earlier example with 1000 documents, 100 belong to c1, 800 to c2 and 100 to c3. Out of the 200 documents containing feature F, 30 belong to class c1, 120 to class c2 and the remaining 50 to class c3. Hence, for class c1,

PMI(F, c1) = 0.58, PMI(F, c1T) = -0.083, PMI(FT, c1) = -0.19, PMI(FT, c1T) = 0.02, Thus

MI(F, c1) = 0.00459

PMI(F, c2) = -0.41, PMI(F, c2T) = 1.0, PMI(FT, c2) = 0.087, PMI(FT, c2T) = -0.41, Thus

MI(F, c2) = 0.04076

PMI(F, c3) = 1.32, PMI(F, c3T) = -0.26, PMI(FT, c3) = -0.67, PMI(FT, c3T) = 0.06, Thus

MI(F, c3) = 0.0385

Thus the discrimination factor D(F) for the feature F is D(F) = 0.037. We see that the feature F is more informative for classes c2 and c3 than for class c1.
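
The following Python sketch reproduces these mutual information values directly from the document counts; the function name is illustrative, and small differences from the rounded figures above are expected since the worked PMI values were rounded before being combined.

import math

def mutual_information(n_fc, n_f, n_c, n):
    # Expected MI (base 2) between feature F and class C from document counts:
    # n_fc documents contain F and are in C, n_f contain F, n_c are in C, n total
    mi = 0.0
    for f_present in (True, False):
        for in_class in (True, False):
            joint = (n_fc if f_present and in_class else
                     n_f - n_fc if f_present else
                     n_c - n_fc if in_class else
                     n - n_f - n_c + n_fc) / n
            p_f = (n_f if f_present else n - n_f) / n
            p_c = (n_c if in_class else n - n_c) / n
            if joint > 0:
                mi += joint * math.log2(joint / (p_f * p_c))
    return mi

# 1000 documents, 200 contain F; class sizes 100, 800, 100
print(mutual_information(30, 200, 100, 1000))    # MI(F, c1)
print(mutual_information(120, 200, 800, 1000))   # MI(F, c2)
print(mutual_information(50, 200, 100, 1000))    # MI(F, c3)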

Mutual information is minimum (zero) when the feature F is equally probable in all the 3 classes, i.e. when the presence of the feature is independent of the class information; in that case the feature conveys no information about the class, irrespective of how the classes are distributed across the set of documents.

Mutual information is maximum (minimum uncertainty) when the feature is present in only one of the classes, and such features become discriminative. In this example the feature F is slightly discriminative, as its value is greater than 0, but we cannot say that it is highly discriminative of any particular class.

Machine Learning Fundamentals Part 2 : Bayesian Classification

Given the task of classifying a product review as “positive” or “negative”, first we need to train a model using already labelled samples. For example, let's say that 1000 reviews are available with the tag “positive” or “negative” associated with each review. To train a model using the distribution of words in the sample reviews, compute the probabilities P(w1), P(w2), …, where P(wi) denotes the fraction of reviews containing the word wi, i.e. if the word w1 occurs in 30 out of 1000 reviews, then P(w1) = 0.03. Now let's say that 800 out of the 1000 reviews are labelled “positive” while the remaining 200 are labelled “negative”. Then P(+) = 0.8 and P(-) = 0.2.

What we need to estimate is, given an unlabelled review and the set of words w’1, w’2, …. in it, which is a subset of the words w1, w2, …. from our training samples, what is the probability of the review being “positive” (probability of it being “negative” would be 1 minus that), i.e. P(+|w’1, w’2, ….). From our earlier tutorial we saw that using Bayes’ Theorem:

P(+|w’1, w’2, ….) = (P(w’1, w’2, …. | +) * P(+))/P(w’1, w’2, ….)

Since the value P(w’1, w’2, ….) is just a scaling factor and is constant for both the categories “positive” and “negative”, we can omit that. P(+) is already known to be 0.8. We need to estimate the probability P(w’1, w’2, …. | +) i.e. given that a review is “positive” what is the likelihood of the review “generating” the words w’1, w’2, ….. Using the chain rule of probability :

P(w’1, w’2, …. | +) = P(w’1 | w’2, w’3, …., + ) * P(w’2 | w’3, w’4, …, +) * …* P(w’M|+)

From our training samples we can estimate each of the above product terms, but given a large number of terms it would take a lot of time to compute this distribution for each review, which is not feasible for real-time prediction. Instead we assume that each word is independent of the other words and depends only on the category that generated it, i.e. P(w’1 | w’2, w’3, w’4, …., + ) = P(w’1 | +), P(w’2 | w’3, w’4, w’5, …., +) = P(w’2 | +), and so on for the other words. This is the Naive Bayes assumption. With this assumption, we have :

P(w’1, w’2, …. | +) = P(w’1 | +) * P(w’2 | +) * …. P(w’M | +)

From the training samples we can easily estimate the above terms beforehand, i.e. P(w’1 | +) denotes what fraction of the positive reviews contain the word w’1. In fact, to account for words which are not present in any “positive” review, and thus avoid multiplying by zero, the values P(w’i | +) are smoothed and computed as

(1 + Number of positive reviews w’i occurs in)/(Number of distinct words + Number of positive reviews)

After computing the quantity P(w’1, w’2, …. | +) we can compute the quantity P(+ | w’1, w’2, ….); if it is greater than 0.6 (or any other value greater than 0.5), we say that the review is positive. For the negative case, we might say that if P(+ | w’1, w’2, ….) is less than 0.25 then the review is negative, and if it is between 0.25 and 0.6 then we say that we are not sure.
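
Below is a minimal Python sketch of this presence/absence Naive Bayes classifier using the smoothed estimate given above; the tiny training set and the function names are made up for illustration.

import math

def train_naive_bayes(reviews, labels):
    # reviews: list of word sets, labels: parallel list of "+" / "-"
    # Smoothed estimate, as above:
    # P(w | class) = (1 + #class reviews containing w) / (#distinct words + #class reviews)
    vocab = set().union(*reviews)
    classes = set(labels)
    prior = {c: labels.count(c) / len(labels) for c in classes}
    word_prob = {}
    for c in classes:
        class_reviews = [r for r, l in zip(reviews, labels) if l == c]
        denom = len(vocab) + len(class_reviews)
        word_prob[c] = {w: (1 + sum(w in r for r in class_reviews)) / denom for w in vocab}
    return prior, word_prob, vocab

def predict(review, prior, word_prob, vocab):
    # Return the class with the highest (unnormalized) log posterior
    scores = {}
    for c in prior:
        score = math.log(prior[c])
        for w in review & vocab:          # ignore words never seen in training
            score += math.log(word_prob[c][w])
        scores[c] = score
    return max(scores, key=scores.get)

reviews = [{"great", "product"}, {"terrible", "quality"}, {"great", "value"}, {"poor", "product"}]
labels = ["+", "-", "+", "-"]
model = train_naive_bayes(reviews, labels)
print(predict({"great", "quality"}, *model))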

The above model can be easily extended to multiple classes instead of just two (positive and negative); for example, in medical diagnosis the classes can be the list of diseases, while the counterparts of words are the observed symptoms. Also, instead of considering only single words, we can consider N-grams, i.e. sequences of words. Experimentally it is found that 2-grams and 3-grams perform better than single words (1-grams) in the overall accuracy of the classifier.

How to measure performance of a classifier : Accuracy vs. Efficiency

Many organizations face the task of classification, classifying reviews into positive and negative, classifying emails as spam and non-spam, classifying financial documents into loan types, classifying a set of symptoms to a disease etc. There are scientific measures to assess the performance of the classifier such as Precision, F-Score and Recall.

True positives (TP) for a category count the objects that were correctly predicted into that category. False positives (FP) count the objects predicted into that category although they actually belong to a different category. True negatives (TN) count the objects predicted into a different category that indeed belong to a different category. False negatives (FN) count the objects predicted into a different category although they actually belong to this category.

Precision measures the rate of correct predictions (accuracy) of the classifier for a category: given that reviews are predicted to be “positive”, what fraction of them are truly “positive”, i.e. TP/(TP+FP). Recall measures the efficiency of the classifier for a category: given that reviews are truly “positive”, what fraction of them are correctly predicted as “positive”, i.e. TP/(TP+FN).

For example out of 100 reviews for a product, 80 reviews are positive and 20 are negative. Using a trained classifier to validate, out of the 80 truly positive reviews 60 are predicted as positive and 20 as negative, and out of the 20 truly negative reviews, 10 are predicted as positive and remaining 10 as negative. Then the precision of the classifier for “positive” reviews is 60/70 = 0.86 and recall of the classifier for “positive” reviews is 60/80 = 0.75. Similarly precision of classifier for “negative” reviews is 10/30 = 0.33 and recall 10/20 = 0.5.

Reporting two different numbers for each category can be confusing when assessing the performance of the classifier. The precision value makes more sense in a production scenario where the true labels are not available, while recall makes more sense to report to a client who wants to know how many of the Loan Applications he has submitted have been correctly identified by the classifier. Hence we generally use a single measurement called the F-Score, which is the harmonic mean of precision and recall:

F-score = 2*precision*recall/(precision+recall).
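
A short Python sketch computing these quantities for the worked example above (the function name is illustrative):

def precision_recall_f(tp, fp, fn):
    # Precision, Recall and F-score from the confusion counts of one category
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

# "Positive" category from the example: TP=60, FP=10, FN=20
print(precision_recall_f(60, 10, 20))   # approximately (0.86, 0.75, 0.80)
# "Negative" category: TP=10, FP=20, FN=10
print(precision_recall_f(10, 20, 10))   # approximately (0.33, 0.50, 0.40)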