Hi Parth,<div><br><div>Yes, the features learnt using deep learning techniques will be in addition to the traditional IR features, and all of these features (traditional + deep-learning features learnt in an unsupervised manner) will be fed to the ListMLE algorithm (which I will be implementing). Deep learning algorithms are able to capture, to some extent, the underlying generative factors that explain the variations in the input data, so the learned representations help disentangle the underlying factors of variation. Features of this kind were used in this<a href="http://eprints.pascal-network.org/archive/00008597/01/342_icmlpaper.pdf"> ICML'11 paper</a> for domain adaptation in text articles.</div>
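<div>Since I'll be implementing ListMLE myself, here is a minimal sketch of its listwise loss: the negative log-likelihood of the ground-truth ordering under the Plackett-Luce model over the ranker's scores. This is just an illustration (the function name and signature are mine, not Xapian's):</div>

```python
import numpy as np

def listmle_loss(scores, true_order):
    """ListMLE negative log-likelihood under the Plackett-Luce model.

    scores:     model scores f(x) for each document of one query.
    true_order: document indices sorted by relevance (best first).
    """
    s = np.asarray(scores, dtype=float)[np.asarray(true_order)]
    loss = 0.0
    for i in range(len(s)):
        tail = s[i:]
        # log-sum-exp over the remaining documents, for numerical stability
        m = tail.max()
        loss -= s[i] - (m + np.log(np.exp(tail - m).sum()))
    return loss
```

<div>Scores that agree with the true ordering give a lower loss than reversed scores, which is what the training procedure exploits; because the model is log-linear in the features, any extra deep-learning features simply extend the input vector.</div>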
<div><br></div><div>I discussed the same setting (as I proposed in my last mail) with <a href="http://research.microsoft.com/en-us/people/hangli/">Hang Li</a> sir (the author of several learning-to-rank algorithms), and he says:<br>
"<i>To me, ListMLE is a very elegant model and since it is a log linear model, it appears to have a good match with deep learning.</i>" </div><div>[Parth: I have forwarded the mail to you for your kind reference.]</div>
<div><br></div><div>To sum up, in addition to a subset of the 136 MSR LETOR features, we'll use the unsupervised deep-learning features to train our model based on the ListMLE algorithm. As each of these would be implemented in a modular fashion, if we get bad results (which shouldn't happen), we'll eliminate the deep-learning module and keep the remaining ones; if we get better results (which we should), we'll have a research contribution to make to the community, because deep learning techniques haven't been applied to ranking systems yet.</div>
<div><br></div><div>Regards,</div><div>Rishabh.</div><div><br><br><div class="gmail_quote">On Wed, Apr 4, 2012 at 1:19 PM, Parth Gupta <span dir="ltr"><<a href="mailto:parthg.88@gmail.com">parthg.88@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Rishabh,<br><br>If I have understood correctly (though I am not sure), do you want to extract features using some of the deep learning techniques in the semi-supervised setting of LETOR? Can you give me some idea of what these features are?<br>
<br>Are you planning to implement ListMLE in addition to this?<span class="HOEnZb"><font color="#888888"><br><br>Parth.</font></span><div class="HOEnZb"><div class="h5"><br><br><div class="gmail_quote">On Mon, Apr 2, 2012 at 10:03 PM, Rishabh Mehrotra <span dir="ltr"><<a href="mailto:erishabh@gmail.com" target="_blank">erishabh@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Thanks Parth for your inputs. I had gone through the list of 136 or so features at MSR's LETOR site for the ranking algorithms. The traditional IR features I was talking about in my previous mail were those ones. <div>
<br></div><div>Thanks for the heads-up on the unsupervised features part. I will be more inclined to use one of the recently popular <b>deep learning architectures</b> for unsupervised feature extraction, simply because they have been shown to outperform state-of-the-art algorithms based on hand-crafted features. PFA a snapshot &lt;Deep Learning.png&gt; from one of Andrew Ng's tutorials highlighting the same fact.</div>
<div><br></div><div>I would like to discuss the proposed methodology.</div><div><br></div><div><b>Methodology:</b></div><div>In order to do unsupervised feature extraction from the documents, I'll use stacked denoising autoencoders. It's a 2-stage process:</div>
<div><ul><li>Unsupervised pre-training</li><li>Supervised fine-tuning</li></ul></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div><b><u>Unsupervised pre-training:</u></b></div>
</blockquote><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div>The denoising autoencoders[<a href="http://ufldl.stanford.edu/wiki/index.php/Autoencoders_and_Sparsity" target="_blank">link</a>] can be stacked to form a deep network by feeding the latent representation (output code) of the denoising autoencoder on the layer below as input to the current layer. The <b>unsupervised pre-training</b> of such an architecture is done one layer at a time. Each layer is trained as a denoising autoencoder by minimizing the reconstruction error of its input (which is the output code of the previous layer). Once the first <b>n</b> layers are trained, we can train the <b>(n+1)-th</b> layer, because we can now compute the code or latent representation from the layer below. </div>
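<div>The greedy layer-wise procedure above can be sketched in a few lines of numpy. This is a toy illustration with tied weights and masking noise, not the implementation I'd ship; all names here are mine:</div>

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_denoising_autoencoder(X, n_hidden, corruption=0.3, lr=0.1, epochs=50):
    """Train one denoising autoencoder layer; return (W, b) and the codes."""
    n, d = X.shape
    W = rng.normal(0, 0.01, (d, n_hidden))
    b = np.zeros(n_hidden)
    c = np.zeros(d)
    for _ in range(epochs):
        # masking noise: randomly zero out a fraction of the inputs
        mask = rng.random(X.shape) > corruption
        Xc = X * mask
        H = sigmoid(Xc @ W + b)          # encode the corrupted input
        R = sigmoid(H @ W.T + c)         # decode with tied weights
        # gradients of the squared reconstruction error w.r.t. the CLEAN input
        dR = (R - X) * R * (1 - R)
        dH = (dR @ W) * H * (1 - H)
        W -= lr * (Xc.T @ dH + dR.T @ H) / n   # both encode and decode paths
        b -= lr * dH.mean(axis=0)
        c -= lr * dR.mean(axis=0)
    return W, b, sigmoid(X @ W + b)      # codes computed on the clean input

def pretrain_stack(X, layer_sizes):
    """Greedy layer-wise pre-training: each layer reconstructs the codes below."""
    params, H = [], X
    for nh in layer_sizes:
        W, b, H = train_denoising_autoencoder(H, nh)
        params.append((W, b))
    return params, H                     # H = top-layer unsupervised features
```

<div>Each call to train_denoising_autoencoder implements "train layer n+1 on the codes of the first n layers": the loop in pretrain_stack never looks at labels, which is exactly why unlabeled documents can contribute.</div>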
<div>The representation learnt in this step is a rich one which has been shown to be better than hand-crafted features/representations.</div></blockquote><blockquote style="margin:0 0 0 40px;border:none;padding:0px">
<div><br></div></blockquote><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div><b><u>Supervised fine-tuning:</u></b></div></blockquote><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div>Once all layers are pre-trained, the network goes through a second stage of training called fine-tuning. Generally we consider supervised fine-tuning, where we want to minimize prediction error on a supervised task. </div>
<div><b>For our problem statement (ranking):</b> we would take the features generated in the unsupervised pre-training stage, add the previously used IR features (a subset of the 136 LETOR features), and feed these to ListMLE or another learning-to-rank algorithm. </div>
<div><br></div></blockquote></blockquote><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div>To sum up, we would perform unsupervised pre-training to get features for all the documents (unlabeled as well as labeled) and then do supervised fine-tuning only on the labeled documents. This way we would use both labeled and unlabeled data to learn features to represent documents. I have worked a bit on a related project, so integrating such a combination (unsupervised feature extraction + ListMLE) into Xapian shouldn't be too tough (hopefully) as a GSoC project.</div>
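<div>Concretely, the hand-off to the ranker is just feature concatenation. A small sketch (my own helper, purely illustrative) of how the two feature blocks could be combined, z-scoring each block so neither dominates the ranker's input scale:</div>

```python
import numpy as np

def combine_features(letor_feats, dl_codes):
    """Concatenate hand-crafted LETOR features with the codes produced by
    unsupervised pre-training, standardising each block column-wise."""
    def zscore(A):
        return (A - A.mean(axis=0)) / (A.std(axis=0) + 1e-8)
    return np.hstack([zscore(letor_feats), zscore(dl_codes)])
```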
</blockquote><div><br></div><div>Your inputs are welcome. Thanks for your time.</div><div><br></div><div>-</div><div>Rishabh.</div><div><br></div><div>PS: A lot of active research is going on in the deep learning field. Here's a <a href="http://deeplearningworkshopnips2011.wordpress.com/" target="_blank">link</a> to a NIPS'11 (Tier-1 in ML) workshop on the topic for everyone's reference.</div>
<div><br></div><div><div><div></div><div><br><div class="gmail_quote">On Mon, Apr 2, 2012 at 3:52 PM, Parth Gupta <span dir="ltr"><<a href="mailto:parthg.88@gmail.com" target="_blank">parthg.88@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hello Rishabh,<br><br>Good to hear from you. It's never too late to jump in for GSoC.<br><br><div class="gmail_quote"><div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="gmail_quote">
<div><b>Doubt1:</b></div><div><br></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div><b>Feature Extraction/Selection:</b></div><div>The various datasets listed on MSR's LETOR site have a limited set of features. The current implementation in Xapian's LETOR has 5 features [tf, idf, doc_len, coll_tf, coll_len]. While algorithms for learning ranking models have been intensively studied, this is not the case for feature selection, despite its importance. In a paper presented at SIGIR'07 [Tier-1 in the IR domain], the authors highlighted the effectiveness of feature selection methods for ranking tasks.[<a href="http://research.microsoft.com/en-us/people/tyliu/fsr.pdf" target="_blank">link</a>] I believe that apart from the traditional/cliched IR features, we should <b>incorporate new features</b> to improve the performance of the LETOR module.</div>
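<div>For reference, the 5 features mentioned above can be sketched as follows for one query-document pair. This is a simplified stand-in for the statistics the Xapian index actually provides (the function name and argument layout are mine):</div>

```python
import math
from collections import Counter

def five_features(query_terms, doc_tokens, df, n_docs, coll_tf, coll_len):
    """[tf, idf, doc_len, coll_tf, coll_len] for one query-document pair.

    df:      term -> number of documents containing the term
    coll_tf: term -> total frequency of the term in the collection
    """
    counts = Counter(doc_tokens)
    tf = sum(counts[t] for t in query_terms)                       # term freq in doc
    idf = sum(math.log(n_docs / (1 + df.get(t, 0))) for t in query_terms)
    return [tf, idf, len(doc_tokens),
            sum(coll_tf.get(t, 0) for t in query_terms), coll_len]
```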
</blockquote></div></blockquote></div><div><br>There is no point denying that there is a need for more features. If you have noticed, the GSoC idea page for LETOR says "The project can also include some work on the features, like adding support for more features,
selecting a subset of features, etc." Now the question is which features you want to incorporate. The LETOR datasets are growing enormously in terms of number of features [LETOR MSR 46 -> 136, Yahoo dataset 700]. It would make sense to incorporate those features which can be tracked and suit the environment. Moreover, the majority of the features dwell around IR measures like BM25, TF, IDF, and LM, and different combinations of them for different parts of the document. Some of the other features of the LETOR datasets are the number of outgoing links, number of incoming links, PageRank, and number of children [1,2]. These features are valid and available only in linked data and, moreover, straightforward to compute. The Yahoo dataset does not even declare its features because of proprietary issues, but I think it also includes features like the age of the page, the number of clicks on it, the total time spent, and so on. <br>
<br>[1] <a href="http://research.microsoft.com/en-us/um/beijing/projects/letor/LETOR4.0/Data/Features_in_LETOR4.pdf" target="_blank">http://research.microsoft.com/en-us/um/beijing/projects/letor/LETOR4.0/Data/Features_in_LETOR4.pdf</a><br>
[2] <a href="http://research.microsoft.com/en-us/projects/mslr/feature.aspx" target="_blank">http://research.microsoft.com/en-us/projects/mslr/feature.aspx</a><br> </div><div><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="gmail_quote"><blockquote style="margin:0pt 0pt 0pt 40px;border:medium none;padding:0px">
<div><br></div><div><b>Using unlabeled data:</b></div><div>Over the last 3-4 years a lot of papers have identified the importance of using unlabeled data to assist the task at hand during the feature extraction stage. Andrew Ng proposed a self-taught learning framework [ICML'07 <a href="http://ai.stanford.edu/%7Ehllee/icml07-selftaughtlearning.pdf" target="_blank">paper</a>] wherein unlabeled data is used to improve performance. A very recent <a href="http://eprints.pascal-network.org/archive/00008597/01/342_icmlpaper.pdf" target="_blank">paper at ICML'11</a> exploited feature learning on unlabeled data and beat the state of the art in sentiment classification.</div>
<div><br></div><div>Combining the above two points, I suggest an approach which uses features learnt from data in an unsupervised fashion "<b>in addition to</b>" the commonly used features.</div><div><b>Please note:</b> all this is in addition to the traditional features, and finally we would use <b>listwise/pairwise approaches</b> [ListMLE, et cetera] to train our models on the new set of features. Please let me know if this sounds good.</div>
</blockquote></div></blockquote></div><div><br>This approach, semi-supervised ranking, is indeed interesting. If you want to incorporate it, feel free to discuss the plan.<br><br></div><div><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="gmail_quote"><blockquote style="margin:0pt 0pt 0pt 40px;border:medium none;padding:0px">
<div><br></div></blockquote><b>Doubt2:</b><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div><b>Rank Aggregation:</b></div><div>Now that Xapian will have more than one learning-to-rank algorithm, we should look into some kind of rank aggregation as well: combining the outputs of various algorithms to get a final rank ordering for the results. I went through an ECML'07 paper on an unsupervised method for this [<a href="http://l2r.cs.uiuc.edu/%7Edanr/Papers/KlementievRoSm07.pdf" target="_blank">link</a>]. I haven't yet completely understood their approach but will do so by the end of the day.</div>
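<div>As a concrete baseline, here is a Borda-count aggregator. This is a much simpler scheme than the unsupervised method in the ECML'07 paper (which learns weights for the input rankers); it just shows the interface a rank-aggregation module would expose:</div>

```python
from collections import defaultdict

def borda_aggregate(ranked_lists):
    """Borda count: each input list awards (n - position) points to every
    document it ranks; documents are re-ranked by total points."""
    scores = defaultdict(float)
    for lst in ranked_lists:
        n = len(lst)
        for pos, doc in enumerate(lst):
            scores[doc] += n - pos
    return sorted(scores, key=scores.get, reverse=True)
```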
</blockquote></div></blockquote></div><div><br>Rank aggregation is another LTR approach, with a set of ranked lists at hand for the query. At the moment Xapian can offer 2 ranked lists, BM25 and the SVM-based LTR scheme. I think these techniques will produce better results with more ranked lists as input than Xapian can offer at the moment, but it would be interesting to explore once some more ranking schemes are incorporated.<br>
<br></div><div><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="gmail_quote"><blockquote style="margin:0pt 0pt 0pt 40px;border:medium none;padding:0px">
</blockquote><div><div><br></div><div><b>Modularity:</b></div><div>Developing these modules in a modular fashion, such that it's not necessary to use all of them all the time, would be good. Whenever the user feels that in addition to the basic features he/she could use additional features, the feature extraction module could be plugged in. The same goes for rank aggregation.</div>
</div></div></blockquote></div><div><br>Agreed, and in fact that will be the goal.<br><br>Best,<br>Parth. <br><br></div><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div class="gmail_quote"><div>
<div><br></div><div><b>Relevant Background:</b></div><div>I have worked on a few research-oriented projects in machine learning, but most of them involved coding in Matlab/Java. More details about me: [<a href="http://www.rishabhmehrotra.com/index.htm" target="_blank">link</a>]. </div>
<div>I have been working on a project on topic modeling (using Latent Dirichlet Allocation) for tweets. <a href="http://code.google.com/p/tweettrends/" target="_blank">Link</a> to the code on Google Code. Also, I am involved in a college project on building a <b>focused crawler</b> and extending it to something like <a href="http://rtw.ml.cmu.edu/rtw/" target="_blank">NELL</a> &lt;a far-fetched dream as of now :)&gt;. [Google Code <a href="http://code.google.com/p/bits-crawler/source/browse/" target="_blank">link</a>]</div>
<div><br></div><div>Please let me know how you feel about the above-mentioned points [and/or if I am way off track]. </div><div><br></div><div>Best,</div><div>Rishabh.</div></div></div>
<br></div>_______________________________________________<br>
Xapian-devel mailing list<br>
<a href="mailto:Xapian-devel@lists.xapian.org" target="_blank">Xapian-devel@lists.xapian.org</a><br>
<a href="http://lists.xapian.org/mailman/listinfo/xapian-devel" target="_blank">http://lists.xapian.org/mailman/listinfo/xapian-devel</a><br>
<br></blockquote></div><br>
</blockquote></div><br><br clear="all"><div><br></div></div></div><font color="#888888">-- <br>Rishabh.<br><br>
</font></div>
</blockquote></div></div></div></blockquote></div><br>
</div></div>