> This is definitely interesting. However it's not terribly worth having this unless/until we have more than one clustering system to evaluate, is it? (Beyond the uniform/random one, although I guess if any clusterer performs _worse_ than that, it's a bad sign!)

This module can still help us with the KMeans clusterer already implemented, and since I would like to implement a hierarchical clusterer as well, it could help with relative comparison too.
Earlier I was looking at both internal and external evaluation techniques, but a typical use of this API will not come with ground-truth labels for the documents, so internal evaluation techniques are the better option. I would therefore like to change my approach and introduce a few internal clustering evaluation techniques:

1) Silhouette coefficient
2) Dunn index
3) Root Mean Square Standard Deviation (RMSSTD)
4) Calinski-Harabasz index
5) Davies-Bouldin index

I aim to have this performance analysis module implemented by the end of the community bonding period (a rough sketch of one of these metrics follows below). Once it is in place, it should be easier to code and evaluate new clusterers with minimal changes to the API.
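As a concrete example, here is a minimal Python sketch of the first metric on that list, the silhouette coefficient (mean value over all documents, in [-1, 1], higher meaning better-separated clusters). The function name and the plain vectors/labels arrays are illustrative assumptions, not the project's actual clustering API:

    import numpy as np

    def silhouette_score(vectors, labels):
        """Mean silhouette coefficient over all documents.

        vectors: (n_docs, n_features) array of document vectors.
        labels:  (n_docs,) array of cluster ids assigned to each document.
        """
        vectors = np.asarray(vectors, dtype=float)
        labels = np.asarray(labels)
        # Pairwise Euclidean distances between all document vectors.
        diffs = vectors[:, None, :] - vectors[None, :, :]
        dists = np.sqrt((diffs ** 2).sum(axis=-1))

        scores = []
        for i in range(len(vectors)):
            same = (labels == labels[i])
            same[i] = False  # exclude the document itself
            if not same.any():
                scores.append(0.0)  # singleton cluster: silhouette taken as 0
                continue
            a = dists[i, same].mean()  # mean intra-cluster distance
            # Smallest mean distance from document i to any other cluster.
            b = min(dists[i, labels == other].mean()
                    for other in set(labels.tolist()) if other != labels[i])
            scores.append((b - a) / max(a, b))
        return float(np.mean(scores))

    # Example: two well-separated clusters should score close to 1.
    X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
    print(silhouette_score(X, np.array([0, 0, 1, 1])))

The other metrics would slot into the same shape: a function (or small class) that takes the clustered document vectors and returns a single score, so different clusterers can be compared on identical input.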
> Do you think we'll need to implement several of these, for different uses? If not, is there a reason you think LSA will work best? You talk about eliminating words that occur rarely in documents — could we have a quick-and-dirty approach that looks at the within-corpus frequency of terms?

I did try the quick-and-dirty approach you mentioned, but filtering on within-corpus frequency removes a lot of words that would otherwise add meaning. It is also corpus-dependent, and hence a bad idea.
For now, removing stop words and stemmed duplicates within the Document has helped, but it would be better to add support for semantic dimensionality reduction such as LSA.
I think LSA would work best because, compared with various other methods, it is a text-mining tool rather than a purely statistical one.
I'm not sure whether implementing more than one technique is within the scope of GSoC, but I see no harm in assuming we could use more. We could create a base class like DimReduction and sub-class it to implement the various techniques (a rough sketch follows below).
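To make the sub-classing idea concrete, here is a rough Python sketch of what that hierarchy could look like, implementing LSA as a truncated SVD of the document-term matrix. The class and method names (DimReduction, LSA, fit_transform) and the dense-matrix input are assumptions for illustration only, not an existing API:

    import numpy as np

    class DimReduction:
        """Base class: each technique reduces document vectors to k dimensions."""
        def __init__(self, num_dims):
            self.num_dims = num_dims

        def fit_transform(self, doc_term_matrix):
            raise NotImplementedError

    class LSA(DimReduction):
        """Latent Semantic Analysis via a truncated SVD of the
        (documents x terms) matrix."""
        def fit_transform(self, doc_term_matrix):
            X = np.asarray(doc_term_matrix, dtype=float)
            # X = U * S * Vt; keeping only the top singular vectors projects
            # the documents into a lower-dimensional "semantic" space.
            U, S, Vt = np.linalg.svd(X, full_matrices=False)
            k = self.num_dims
            return U[:, :k] * S[:k]

    # Example: reduce a tiny 4-document, 5-term count matrix to 2 dimensions.
    counts = np.array([[2, 1, 0, 0, 0],
                       [1, 2, 0, 0, 0],
                       [0, 0, 1, 2, 1],
                       [0, 0, 2, 1, 1]])
    print(LSA(num_dims=2).fit_transform(counts))

Other techniques would then only need to override fit_transform, and the clusterers would not have to care which reduction was applied upstream.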
> Do you have a particular approach you think is a good one? Are you thinking agglomerative or divisive?

I was thinking of agglomerative clustering, where we start from individual clusters and work up to a single cluster containing all the documents.
This would be fairly simple to implement since the API is largely in place; we would only need a way to merge two clusters while moving up the hierarchy tree.
At the start, we initialize one cluster per document using the Cluster class, and then at each step we merge upwards by combining the contents of the two closest Cluster objects into one and recomputing the cluster centroid. We repeat this iteratively until only one cluster remains (a rough sketch of this loop follows below).
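Here is a small Python sketch of that merge-upwards loop on plain document vectors; the function name, the tuple representation of a cluster, and the size-weighted centroid merge are illustrative assumptions, not the existing Cluster API:

    import numpy as np

    def agglomerate(doc_vectors):
        """Merge clusters pairwise until one remains, printing each merge."""
        # Start with one cluster per document: (centroid, member doc ids, size).
        clusters = [(np.asarray(v, dtype=float), [i], 1)
                    for i, v in enumerate(doc_vectors)]
        while len(clusters) > 1:
            # Find the pair of clusters whose centroids are closest.
            best = None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    d = np.linalg.norm(clusters[i][0] - clusters[j][0])
                    if best is None or d < best[0]:
                        best = (d, i, j)
            _, i, j = best
            (ci, mi, ni), (cj, mj, nj) = clusters[i], clusters[j]
            # Merge the two clusters and recompute the centroid as a
            # size-weighted average of the old centroids.
            merged = ((ci * ni + cj * nj) / (ni + nj), mi + mj, ni + nj)
            print("merged", mi, "and", mj)
            clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
            clusters.append(merged)
        return clusters[0]

    # Example with four 2-D document vectors forming two obvious groups.
    agglomerate([[0.0, 0.0], [0.2, 0.0], [4.0, 4.0], [4.2, 4.0]])

In practice we would record the merge order to build the hierarchy tree, and stop (or cut the tree) at a chosen number of clusters rather than always merging down to one.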
I will document my findings from this conversation along with my previous ideas into a proposal and send it soon.

Thanks.
Richhiey