[Xapian-tickets] [Xapian] #804: Improve clustering API
Xapian
nobody at xapian.org
Wed Sep 2 00:37:56 BST 2020
#804: Improve clustering API
--------------------------+-------------------------------
Reporter: James Aylett | Owner: Olly Betts
Type: enhancement | Status: new
Priority: normal | Milestone: 1.5.0
Component: Library API | Version: git master
Severity: normal | Resolution:
Keywords: | Blocked By:
Blocking: | Operating System: All
--------------------------+-------------------------------
Old description:
> The clustering API we have is a reasonable first draft, but can be
> improved. Here are some initial thoughts, although not all of them
> necessarily will make things better.
>
> 1. Could do with iterators, as one of the common things is going to be to
> iterate over the clusters in the set, then over the documents inside it.
> My code looks very C-like at the moment :)
> 2. The public API doesn't do a huge amount, because it's unreachable. I
> can make all sorts of things out of !FreqSource, but I can't actually use
> them for clustering AFAICT. So I can't for instance create my own vector
> space and cluster within that — without subclassing !KMeans.
> 3. Access to original weight in !MSet — unless this is
> !PointType::get_weight(), in which case the docstring is misleading (says
> it's TF-IDF). Even then, it'd be nice to access the original order within
> the !MSet as well. Just looked at the code and it's calculating TF-IDF
> directly to compute term weight and magnitude. That's probably okay, but
> it feels a little odd to me that this happens in the Point constructor
> rather than in the !FreqSource.
> 4. There's only one similarity measure, but there doesn't seem to be a
> way to set another if I implemented my own.
> 5. I suspect that being able to specify a term prefix and only initialise
> the vector space on that would be helpful. You can do this via a stopper,
> but that's going to be less efficient if clustering lots of docs.
> 6. I wonder if we should convert to an integer-indexed (but sparse)
> vector space on initialisation. Using terms throughout is almost certainly
> slower? Changing that should mean that Point means a bit more to end
> users who are controlling things themselves, because they can build a
> vector space unrelated to terms.
> 7. I don't know if this is feasible, but it'd be nice given a Cluster to
> be able to get some stats about it, or at least stats about the Points
> within it. Distance from Centroid, for instance — which I can compute
> directly via the public API, but would be helpful sometimes. (For
> instance if you want to name a cluster, you can either run an algorithm
> to ponder that out of all points, or you can have topics for each
> document as doc data and just use the one closest to the centroid. I
> guess you could just use the first doc in the cluster though.)
New description:
The clustering API we have is a reasonable first draft, but can be
improved. Here are some initial thoughts, although not all of them
necessarily will make things better.
1. Could do with iterators, as one of the common things is going to be to
iterate over the clusters in the set, then over the documents inside it.
My code looks very C-like at the moment :)
''We could use the same pattern as `MSetIterator` and `ESetIterator`: an
iterator class holding a (ref-counted PIMPL) container object and a
size/index - that makes the end iterator cheap to construct, as it's just
(NULL, 0), and a test against the end is just `member_var == 0`. We could
probably make these classes entirely inline. Relatedly, it would be great
to support C++11 range-for for all our iterators, which is fairly easy for
the trivial version that just gives you `operator*()` at each position,
but then you can't e.g. get the wdf as well as the term name from a
`TermIterator` with a range-for loop, and even have to rewrite your loop
if you later need to.''
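A minimal sketch of the iterator layout described in the comment above. The names here are illustrative, not the actual Xapian classes, and a plain `std::vector` stands in for the ref-counted PIMPL body:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical container following the MSetIterator/ESetIterator pattern:
// the iterator holds a pointer to the container plus an offset counting
// *down* towards the end, so the end iterator is cheap to construct as
// (nullptr, 0) and an end test only inspects off_from_end.
class ClusterSet {
  public:
    // In Xapian this body would be a ref-counted PIMPL; a plain vector
    // stands in for it in this sketch.
    std::vector<std::string> items;

    class iterator {
        const ClusterSet* set;
        std::size_t off_from_end; // 0 <=> this is the end iterator

      public:
        iterator() : set(nullptr), off_from_end(0) {}
        iterator(const ClusterSet* s, std::size_t off)
            : set(s), off_from_end(off) {}

        const std::string& operator*() const {
            return set->items[set->items.size() - off_from_end];
        }

        iterator& operator++() { --off_from_end; return *this; }

        // Comparing with end() only needs to look at off_from_end.
        bool operator==(const iterator& o) const {
            return off_from_end == o.off_from_end;
        }
        bool operator!=(const iterator& o) const {
            return off_from_end != o.off_from_end;
        }
    };

    iterator begin() const { return iterator(this, items.size()); }
    iterator end() const { return iterator(); } // cheap: (nullptr, 0)
};
```

Because `begin()`/`end()`, `operator*()`, `operator++()` and `operator!=()` are all provided, a C++11 range-for over a `ClusterSet` works with no further glue.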
2. The public API doesn't do a huge amount, because it's unreachable. I
can make all sorts of things out of `FreqSource`, but I can't actually use
them for clustering AFAICT. So I can't for instance create my own vector
space and cluster within that — without subclassing `KMeans`.
3. Access to original weight in `MSet` — unless this is
`PointType::get_weight()`, in which case the docstring is misleading (says
it's TF-IDF). Even then, it'd be nice to access the original order within
the `MSet` as well. Just looked at the code and it's calculating TF-IDF
directly to compute term weight and magnitude. That's probably okay, but
it feels a little odd to me that this happens in the Point constructor
rather than in the `FreqSource`.
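For reference, the textbook TF-IDF weighting looks roughly like this. It is only an illustration of the kind of calculation point 3 says the Point constructor performs, not necessarily the exact formula in the clustering code:

```cpp
#include <cmath>

// Illustrative TF-IDF weight (textbook form; Xapian's clustering code may
// use a different variant):
//   wdf      - within-document frequency of the term
//   termfreq - number of documents containing the term
//   doccount - total number of documents
double tfidf_weight(double wdf, double termfreq, double doccount) {
    if (wdf == 0 || termfreq == 0) return 0.0;
    double tf = 1.0 + std::log(wdf);             // log-scaled term frequency
    double idf = std::log(doccount / termfreq);  // inverse document frequency
    return tf * idf;
}
```

Moving this calculation into `FreqSource`, as the ticket suggests, would let a subclass swap in a different weighting without touching Point construction.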
4. There's only one similarity measure, but there doesn't seem to be a way
to set another if I implemented my own.
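The hook point 4 asks for could look something like the following. `Similarity`, `EuclideanSimilarity` and `set_similarity()` are hypothetical names for this sketch, not current Xapian API:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Abstract similarity measure the clusterer could accept instead of
// hard-coding one. Larger return value means "more similar".
struct Similarity {
    virtual ~Similarity() {}
    virtual double similarity(const std::vector<double>& a,
                              const std::vector<double>& b) const = 0;
};

// A user-supplied measure: negated Euclidean distance.
struct EuclideanSimilarity : Similarity {
    double similarity(const std::vector<double>& a,
                      const std::vector<double>& b) const override {
        double d2 = 0.0;
        for (std::size_t i = 0; i != a.size(); ++i) {
            double diff = a[i] - b[i];
            d2 += diff * diff;
        }
        return -std::sqrt(d2);
    }
};

// The clusterer holds a (non-owning, in this sketch) pointer to the
// measure and uses it wherever it currently hard-codes cosine similarity.
class Clusterer {
    const Similarity* sim = nullptr;
  public:
    void set_similarity(const Similarity* s) { sim = s; }
    double compare(const std::vector<double>& a,
                   const std::vector<double>& b) const {
        return sim->similarity(a, b);
    }
};
```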
5. I suspect that being able to specify a term prefix and only initialise
the vector space on that would be helpful. You can do this via a stopper,
but that's going to be less efficient if clustering lots of docs.
''It's also more convenient to just specify the prefix. When the input is
`Document` termlists (as it currently has to be) then `skip_to(prefix)` is
just a `next()` loop internally because of how these are encoded on disk,
so the efficiency gain would mainly be that we could stop decoding once
we'd got past the prefix of interest. That's partly an implementation
detail - we could store data to allow actually skipping ahead in a
document termlist.''
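The early-stop the comment describes can be sketched like this, with a plain sorted vector standing in for a document termlist (a `TermIterator` in the real API):

```cpp
#include <string>
#include <vector>

// Collect the terms carrying a given prefix from a *sorted* termlist.
// Because the input is sorted, once we hit a term which compares greater
// than the prefix (without matching it) we can stop decoding entirely.
std::vector<std::string>
terms_with_prefix(const std::vector<std::string>& sorted_terms,
                  const std::string& prefix) {
    std::vector<std::string> out;
    for (const std::string& term : sorted_terms) {
        // Compare just the first prefix.size() chars of term with prefix.
        int cmp = term.compare(0, prefix.size(), prefix);
        if (cmp == 0) {
            out.push_back(term);
        } else if (cmp > 0) {
            break; // sorted input: no later term can match the prefix
        }
    }
    return out;
}
```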
6. I wonder if we should convert to an integer-indexed (but sparse) vector
space on initialisation. Using terms throughout is almost certainly slower?
Changing that should mean that Point means a bit more to end users who are
controlling things themselves, because they can build a vector space
unrelated to terms.
''For matching and in the backends we stick with terms, but that's mostly
because otherwise we'd need a way to account for differing numeric term
ids across databases, and that isn't a concern here. Using strings may not
be as bad as you might guess, since most terms are short and differ early
on, and with integers you'd have to convert first, but I think it's worth
trying with integers.''
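Interning terms as point 6 suggests might look like this (a hypothetical helper, not existing API); ids only need to be consistent within one clustering run, which is why the cross-database id problem from the match doesn't arise:

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Map term strings to small integer ids at initialisation, so the
// clustering inner loops can work on integer-indexed sparse vectors.
class TermIdMap {
    std::map<std::string, unsigned> ids;
  public:
    // Return the id for term, allocating the next free id if it's new.
    // The pair passed to insert() is built *before* insertion, so a new
    // term gets the pre-insert size as its id.
    unsigned id_for(const std::string& term) {
        auto r = ids.insert({term, (unsigned)ids.size()});
        return r.first->second;
    }
    std::size_t size() const { return ids.size(); }
};

// A sparse point becomes (term id, weight) pairs instead of
// (term string, weight) pairs.
typedef std::vector<std::pair<unsigned, double>> SparsePoint;
```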
7. I don't know if this is feasible, but it'd be nice given a Cluster to
be able to get some stats about it, or at least stats about the Points
within it. Distance from Centroid, for instance — which I can compute
directly via the public API, but would be helpful sometimes. (For instance
if you want to name a cluster, you can either run an algorithm to ponder
that out of all points, or you can have topics for each document as doc
data and just use the one closest to the centroid. I guess you could just
use the first doc in the cluster though.)
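The centroid-distance statistic point 7 asks about can be computed along these lines. The sketch uses dense vectors and hypothetical free functions to keep it short:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

typedef std::vector<double> Vec;

// Mean of the points in a cluster (all points assumed the same dimension).
Vec centroid(const std::vector<Vec>& points) {
    Vec c(points[0].size(), 0.0);
    for (const Vec& p : points)
        for (std::size_t i = 0; i != c.size(); ++i)
            c[i] += p[i];
    for (double& x : c) x /= points.size();
    return c;
}

// Euclidean distance between two points.
double distance(const Vec& a, const Vec& b) {
    double d2 = 0.0;
    for (std::size_t i = 0; i != a.size(); ++i)
        d2 += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(d2);
}

// Index of the point nearest the centroid - the "representative document"
// use case from point 7 (ties resolve to the earliest point).
std::size_t nearest_to_centroid(const std::vector<Vec>& points) {
    Vec c = centroid(points);
    std::size_t best = 0;
    for (std::size_t i = 1; i != points.size(); ++i)
        if (distance(points[i], c) < distance(points[best], c)) best = i;
    return best;
}
```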
--
Comment (by Olly Betts):
I've added some thoughts (inline, in italics) to the description.
One thing I'm quite unsure about is how general to try to make this -
should it just be able to cluster documents by terms, or cluster documents
by anything (e.g. cluster on value slots, or external data referenced by
docid, or some mixture)? And should it be able to cluster other things?
More general is potentially more useful, but risks complicating the API
and implementation, and because of that might end up being slower and
harder to use for the actual main intended use case.
--
Ticket URL: <https://trac.xapian.org/ticket/804#comment:1>
Xapian <https://xapian.org/>