Hello Rishabh,<br><br>Good to hear from you. It's never too late to jump in for GSoC.<br><br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="gmail_quote">
<div><b>Doubt1:</b></div><div><br></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div><b>Feature Extraction/Selection:</b></div><div>The various datasets listed on MSR's LETOR have a limited set of features. The current implementation in Xapian's LETOR has 5 features [tf, idf, doc_len, coll_tf, coll_len]. While algorithms for learning ranking models have been intensively studied, this is not the case for feature selection, despite its importance. In a paper presented at SIGIR'07 [a Tier-1 venue in the IR domain], the authors highlighted the effectiveness of feature selection methods for ranking tasks [<a href="http://research.microsoft.com/en-us/people/tyliu/fsr.pdf" target="_blank">link</a>]. I believe that apart from the traditional/clichéd IR features, we should<b> incorporate new features</b> to improve the performance of the LETOR module.</div>
</blockquote></div></blockquote><div><br>There is no denying that more features are needed. As the GSoC idea page for Letor notes, "The project can also include some work on the features, like adding support for more features, selecting a subset of features, etc." The question then is which features to incorporate. The Letor datasets are growing enormously in terms of number of features [LETOR MSR: 46 -> 136; Yahoo dataset: 700]. It would make sense to incorporate those features which can be tracked and which suit the environment. Moreover, the majority of the features revolve around IR measures like BM25, TF, IDF, and LM, and different combinations of them for different parts of the document. Some of the other features in the Letor datasets are the number of outgoing links, number of incoming links, PageRank, and number of children [1,2]. These features are valid and available only for linked data and are, moreover, straightforward to compute. The Yahoo dataset does not even declare its features because of proprietary issues, but I think it also includes features like the age of a page, the number of clicks on it, total time spent on it, and so on. <br>
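<div>To make the feature discussion concrete, here is a minimal Python sketch of the five traditional features mentioned above (tf, idf, doc_len, coll_tf, coll_len), summed over query terms in the usual log-smoothed form. The function name and data layout are purely illustrative, not Xapian's actual API.</div>

```python
import math

def letor_features(query_terms, doc_tf, doc_len, coll_tf, coll_len, n_docs, df):
    """Sketch of five classic LETOR features for one (query, document) pair.

    doc_tf:  {term: frequency in this document}
    coll_tf: {term: frequency in the whole collection}
    df:      {term: number of documents containing the term}
    """
    # Log-smoothed sums over the query terms, as in the LETOR feature lists.
    tf = sum(math.log(1 + doc_tf.get(t, 0)) for t in query_terms)
    idf = sum(math.log(n_docs / (1 + df.get(t, 0))) for t in query_terms)
    ctf = sum(math.log(1 + coll_tf.get(t, 0) / coll_len) for t in query_terms)
    # doc_len and coll_len are passed through directly as length features.
    return {"tf": tf, "idf": idf, "doc_len": doc_len,
            "coll_tf": ctf, "coll_len": coll_len}
```

New link-based features (PageRank, in/out link counts) would simply extend the returned dictionary, which is why a pluggable feature-extraction stage matters.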
<br>[1] <a href="http://research.microsoft.com/en-us/um/beijing/projects/letor/LETOR4.0/Data/Features_in_LETOR4.pdf">http://research.microsoft.com/en-us/um/beijing/projects/letor/LETOR4.0/Data/Features_in_LETOR4.pdf</a><br>
[2] <a href="http://research.microsoft.com/en-us/projects/mslr/feature.aspx">http://research.microsoft.com/en-us/projects/mslr/feature.aspx</a><br> </div><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="gmail_quote"><blockquote style="margin:0pt 0pt 0pt 40px;border:medium none;padding:0px">
<div><br></div><div><b>Using unlabeled data:</b></div><div>Over the last 3-4 years, many papers have identified the importance of using unlabeled data to assist the task at hand during the feature extraction stage. Andrew Ng proposed a self-taught learning framework [ICML'07 <a href="http://ai.stanford.edu/%7Ehllee/icml07-selftaughtlearning.pdf" target="_blank">paper</a>] which makes use of unlabeled data to improve performance. A very recent <a href="http://eprints.pascal-network.org/archive/00008597/01/342_icmlpaper.pdf" target="_blank">paper at ICML'11</a> exploited feature learning from unlabeled data and beat the state of the art in sentiment classification.</div>
<div><br></div><div>Combining the above two points, I suggest an approach which uses features learnt from data in an unsupervised fashion "<b>in addition to</b>" the commonly used features.</div><div><b>Please note:</b> all this is in addition to the traditional features, and finally we would use <b>listwise/pairwise approaches</b> [ListMLE, et cetera] to train our models on the new set of features. Please let me know if this sounds good.</div>
</blockquote></div></blockquote><div><br>This approach, semi-supervised ranking, is indeed interesting. If you want to incorporate it, feel free to discuss the plan.<br><br></div><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="gmail_quote"><blockquote style="margin:0pt 0pt 0pt 40px;border:medium none;padding:0px">
<div><br></div></blockquote><b>Doubt2:</b><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div><b>Rank Aggregation:</b></div><div>Now that Xapian will have more than one learning-to-rank algorithm, we should look into some kind of rank aggregation as well: combining the outputs of the various algorithms to get a final rank ordering for the results. I went through an ECML'07 paper on an unsupervised method for this [<a href="http://l2r.cs.uiuc.edu/%7Edanr/Papers/KlementievRoSm07.pdf" target="_blank">link</a>]. I haven't yet completely understood their approach but will do so by the end of the day.</div>
</blockquote></div></blockquote><div><br>Rank aggregation is another LTR approach, which assumes a set of ranked lists is at hand for the query. At the moment Xapian can produce 2 ranked lists: BM25 and the SVM-based LTR scheme. I think these techniques will produce better results with more input ranked lists than Xapian can offer at the moment, but it would be interesting to explore once more ranking schemes are incorporated.<br>
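<div>For concreteness, here is a minimal Python sketch of one simple unsupervised aggregation scheme, Borda count (much simpler than the ECML'07 method linked above, but it shows the shape of the problem). The function and document ids are illustrative only, not part of Xapian.</div>

```python
from collections import defaultdict

def borda_aggregate(ranked_lists):
    """Combine several ranked lists of doc ids (best first) via Borda count.

    A document at position p in a list of length n scores n - p points;
    documents are re-ranked by their total score across all input lists.
    """
    scores = defaultdict(int)
    for ranking in ranked_lists:
        n = len(ranking)
        for pos, doc in enumerate(ranking):
            scores[doc] += n - pos
    return sorted(scores, key=lambda d: -scores[d])

# e.g. fuse a BM25 ranking with an SVM-based LTR ranking
fused = borda_aggregate([["d1", "d2", "d3"], ["d2", "d3", "d1"]])
```

With only two input lists the fusion is crude, which is exactly why this becomes more attractive once more ranking schemes are available.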
<br></div><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="gmail_quote"><blockquote style="margin:0pt 0pt 0pt 40px;border:medium none;padding:0px">
</blockquote><div><div><br></div><div><b>Modularity:</b></div><div>It would be good to develop these modules in a modular fashion, so that it's not necessary to use all of them all the time. Whenever the user feels that, in addition to the basic features, he/she could use additional features, the feature extraction module could be plugged in. The same goes for rank aggregation.</div>
</div></div></blockquote><div><br>Agreed; in fact, that will be the goal.<br><br>Best,<br>Parth. <br><br></div><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="gmail_quote"><div>
<div><br></div><div><b>Relevant Background:</b></div><div>I have worked on a few research-oriented projects in machine learning, but most of them involved coding in Matlab/Java. More details about me: [<a href="http://www.rishabhmehrotra.com/index.htm" target="_blank">link</a>]. </div>
<div>I have been working on a project on topic modeling (using Latent Dirichlet Allocation) for tweets; the code is on Google Code [<a href="http://code.google.com/p/tweettrends/" target="_blank">link</a>]. I am also involved in a college project on building a <b>focused crawler</b> and extending it to something like <a href="http://rtw.ml.cmu.edu/rtw/" target="_blank">NELL</a> (a far-fetched dream as of now :)) [Google Code <a href="http://code.google.com/p/bits-crawler/source/browse/" target="_blank">link</a>].</div>
<div><br></div><div>Please let me know how you feel about the above-mentioned points [and/or if I am way off track]. </div><div><br></div><div>Best,</div><div>Rishabh.</div></div></div>
<br>_______________________________________________<br>
Xapian-devel mailing list<br>
<a href="mailto:Xapian-devel@lists.xapian.org">Xapian-devel@lists.xapian.org</a><br>
<a href="http://lists.xapian.org/mailman/listinfo/xapian-devel" target="_blank">http://lists.xapian.org/mailman/listinfo/xapian-devel</a><br>
<br></blockquote></div><br>