
Enhance String Comparison Utilities #66

Open
jstammers opened this issue Oct 5, 2024 · 5 comments

Comments

@jstammers
Contributor

After looking over splink's documentation in more detail, I've seen that it has some nice utilities for visualising string comparison metrics and creating comparison levels from templates.

In particular, I've found that defining comparison levels from thresholds of a similarity score is something that I regularly need to do, so having a template that constructs this from a set of thresholds would be handy.

@NickCrews I'd be happy to make a start on implementations of these, along with relevant documentation on their usage, if you think they would be useful generally.

As a side-note, I'm also interested in the concept of term-frequency adjustments, but am unsure how it would extend to strings that are sets of terms. Is this something you've come across?

@jstammers
Contributor Author

Here's something I've been developing for defining a LevelComparer using a comparison function and a set of thresholds. The basic idea is to use a factory function to create a MatchLevel subclass and dynamically create the cases required from the given UDF and thresholds. I haven't thought of a neat way to generate the cases yet. Perhaps the simplest way is to require the user to specify the columns from the pairs table.

from __future__ import annotations

from typing import Any, Callable, Literal

from mismo.compare._match_level import LevelComparer, MatchLevel

def match_levels_factory(name: str, **levels: int) -> type[MatchLevel]:
    """A factory function for creating a MatchLevel subclass dynamically.

    Examples
    --------
    >>> NameMatchLevel = match_levels_factory('NameMatchLevel', EXACT=0, NEAR=1, ELSE=2)
    >>> issubclass(NameMatchLevel, MatchLevel)
    True
    """
    return type(name, (MatchLevel,), levels)

class ThresholdComparer(LevelComparer):
    """Assigns a MatchLevel based on thresholds of a user-defined comparison function."""

    def __init__(
        self,
        name: str,
        comparison_func: Callable[[Any, Any], float],
        thresholds: list[float],
        *,
        representation: Literal["string", "integer"] = "integer",
        add_null_level: bool = True,
        add_exact_level: bool = True,
    ):
        self.thresholds = thresholds
        self.fname = comparison_func.__name__
        # Optional NULL/EXACT levels come first, then one level per
        # threshold, then a catch-all ELSE.
        levels: dict[str, int] = {}
        if add_null_level:
            levels["NULL"] = len(levels)
        if add_exact_level:
            levels["EXACT"] = len(levels)
        threshold_levels = {
            f"GE_{str(t).replace('.', '_')}": len(levels) + i
            for i, t in enumerate(sorted(thresholds, reverse=True))
        }
        levels["ELSE"] = len(levels) + len(threshold_levels)
        match_level = match_levels_factory(self.fname + "Level", **levels, **threshold_levels)
        # TODO: dynamically build the (condition, level) cases from the UDF
        # and thresholds; this is the part I haven't found a neat way to do.
        cases = [...]
        super().__init__(
            name=name,
            levels=match_level,
            cases=cases,
            representation=representation,
        )

    def __repr__(self):
        return f"ThresholdComparer(name={self.name}, func={self.fname}, thresholds={self.thresholds})"

@NickCrews
Owner

There are many different topics in here, I think. Let's try to keep this issue on the topic of "emulating/recreating/improving on splink's comparison functions, e.g. LevenshteinAtThresholds".

First, I want to be careful, because I am skeptical of this methodology. Arbitrary user-chosen thresholds are almost never going to be optimal (I think; this is based on just a few personal attempts from more than a year ago, so maybe I'm misremembering, and if you disagree then please chime in). So I don't want to steer people towards doing that. I think it would be better if the API steered users towards a more data-informed usage. E.g. we could show a histogram of scores, or show warnings if the chosen thresholds had a poor distribution of pairs (e.g. 99.99 percent of all pairs inside one threshold), or even run Fellegi-Sunter with several trial thresholds to determine which gave the best separation, or SOMETHING, so that users didn't just guess some thresholds and shoot themselves in the foot.
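
To make that concrete, the guardrail described above could look something like this (a rough sketch only; pairs is an assumed candidate-pairs table with a precomputed score column, not an existing mismo API):

import ibis

def check_threshold_distribution(pairs: ibis.Table, score_col: str, thresholds: list[float]) -> None:
    # Count how candidate pairs distribute across the threshold bands and
    # warn when one band swallows essentially all of them.
    score = pairs[score_col]
    total = pairs.count().execute()
    bounds = [float("-inf")] + sorted(thresholds) + [float("inf")]
    for lo, hi in zip(bounds, bounds[1:]):
        n = pairs.filter((score >= lo) & (score < hi)).count().execute()
        frac = n / total if total else 0.0
        if frac > 0.999:  # e.g. 99.9+ percent of all pairs inside one band
            print(f"WARNING: {frac:.2%} of pairs have a score in [{lo}, {hi})")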

@NickCrews
Owner

Second, I'm curious if there is a good reason for creating classes dynamically. Should you really only know the thresholds at runtime? IDK if you have ever used sqlalchemy, but they have two APIs: a declarative API that uses class definitions, which I have emulated with the ComparisonLevel API, and an imperative API, which looks like what you are suggesting.

If we are going to be running experiments over the data to find the optimal thresholds, then we will be encoding the thresholds in our source code anyway, known before runtime. In that case, I don't think we need the imperative API at all??
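
To illustrate the two styles with the MatchLevel example from above (the names here are just illustrative):

# Declarative: the levels are known when the code is written.
class NameMatchLevel(MatchLevel):
    EXACT = 0
    NEAR = 1
    ELSE = 2

# Imperative: the class is assembled at runtime, e.g. from experiment results.
NameMatchLevel = match_levels_factory("NameMatchLevel", EXACT=0, NEAR=1, ELSE=2)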

If you can think of a good reason for the imperative API, can you share it?

@NickCrews
Owner

I guess a few questions I have for you:

  1. What problems have you encountered using splink's API? E.g. where are places we could do better? What has been absolutely needed?
  2. Do you have other features available, so we could do Fellegi-Sunter learning to auto-choose levels in any way?
  3. Do you have any other ideas on how we can guide the user towards choosing the right thresholds? I at least want to feel like we have explored this space before we rush to reimplement splink's API exactly as it is.

@NickCrews
Owner

Finally, thank you for your time, and sorry I'm not giving you an easy, straightforward solution :)
