add Header support to filter? #1

Open
jiming opened this issue Jan 15, 2018 · 5 comments

jiming commented Jan 15, 2018

Hi,

Kafka already supports headers on records, which is a very useful feature for filtering messages.

What about adding header support to the RecordFilter?

Thanks!
Jiming

Crim commented Jan 16, 2018

Hey Jiming!

So you're suggesting adjusting the interface to look something like:

    /**
     * Define the filter behavior.
     * A return value of TRUE means the record WILL be shown.
     * A return value of FALSE means the record will NOT be shown.
     *
     * @param topic Name of topic the message came from.
     * @param partition Partition the message came from.
     * @param offset Offset the message came from.
     * @param key Deserialized Key object.
     * @param value Deserialized Value object.
     * @param headers Headers associated with the record.
     * @return True means the record WILL be shown.  False means the record will NOT be shown.
     */
    boolean includeRecord(final String topic, final int partition, final long offset, final Object key, final Object value, final Map<String, byte[]> headers);

?

Crim commented Jan 16, 2018

Actually it looks like there's no constraint on Kafka's current implementation of Headers to ensure that header keys are unique, so perhaps using a Map here is not the right answer.
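
For illustration, here's a minimal sketch of that behavior (assuming the standard RecordHeaders implementation; the class name and the trace-id header below are made up):

    import java.nio.charset.StandardCharsets;

    import org.apache.kafka.common.header.Header;
    import org.apache.kafka.common.header.Headers;
    import org.apache.kafka.common.header.internals.RecordHeaders;

    public class DuplicateHeaderKeysExample {
        public static void main(final String[] args) {
            // RecordHeaders does not enforce unique keys, so both entries are kept.
            final Headers headers = new RecordHeaders();
            headers.add("trace-id", "abc".getBytes(StandardCharsets.UTF_8));
            headers.add("trace-id", "def".getBytes(StandardCharsets.UTF_8));

            // headers(key) returns every value stored under that key, in insertion order.
            for (final Header header : headers.headers("trace-id")) {
                System.out.println(header.key() + " => " + new String(header.value(), StandardCharsets.UTF_8));
            }
            // Collapsing these into a Map<String, byte[]> would silently drop one of the values.
        }
    }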

jiming commented Jan 16, 2018

@Crim,

I think you could add a method without the key and value that is executed before includeRecord. That way we can save the CPU cost of deserializing the key and value when the headers alone are enough to decide.

Usually headers contain much smaller values. For example:

boolean includeRecordByHeader(final String topic, final int partition, final long offset, final Map<String, byte[]> headers) {
    // you could use Headers instead of Map here
    final byte[] isMan = headers.get("isMan");
    return isMan != null && "true".equals(new String(isMan, StandardCharsets.UTF_8));
}

Crim commented Jan 16, 2018

So the RecordFilter is invoked after the key & message have already been deserialized by the underlying Kafka consumer; it's injected using Kafka's interceptor interface:

https://github.com/SourceLabOrg/kafka-webview/blob/master/kafka-webview-ui/src/main/java/org/sourcelab/kafka/webview/ui/manager/kafka/filter/RecordFilterInterceptor.java#L53
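
To make that concrete, here's a rough sketch of how a filter can be wired in through Kafka's ConsumerInterceptor interface. This is only an illustration, not the actual RecordFilterInterceptor linked above; the FilteringInterceptor class and the nested RecordFilter interface are assumptions. By the time onConsume runs, the consumer's deserializers have already produced the key and value objects, which is why the filter sees them:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import org.apache.kafka.clients.consumer.ConsumerInterceptor;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.header.Headers;

    public class FilteringInterceptor implements ConsumerInterceptor<Object, Object> {

        /** Hypothetical filter contract, mirroring the signature discussed above. */
        public interface RecordFilter {
            boolean includeRecord(String topic, int partition, long offset, Object key, Object value, Headers headers);
        }

        private RecordFilter filter;

        @Override
        public ConsumerRecords<Object, Object> onConsume(final ConsumerRecords<Object, Object> records) {
            // Keys and values are already deserialized at this point by the consumer's deserializers.
            final Map<TopicPartition, List<ConsumerRecord<Object, Object>>> kept = new HashMap<>();
            for (final TopicPartition partition : records.partitions()) {
                final List<ConsumerRecord<Object, Object>> passed = new ArrayList<>();
                for (final ConsumerRecord<Object, Object> record : records.records(partition)) {
                    if (filter.includeRecord(record.topic(), record.partition(), record.offset(),
                            record.key(), record.value(), record.headers())) {
                        passed.add(record);
                    }
                }
                kept.put(partition, passed);
            }
            return new ConsumerRecords<>(kept);
        }

        @Override
        public void onCommit(final Map<TopicPartition, OffsetAndMetadata> offsets) {
            // No-op for this sketch.
        }

        @Override
        public void close() {
            // No resources to release in this sketch.
        }

        @Override
        public void configure(final Map<String, ?> configs) {
            // A real implementation would build the filter from the supplied config here.
        }
    }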

jiming commented Jan 16, 2018

I see.

Then perhaps your proposal is fine; just change the Map to Headers. IMO, filtering is one of the major use cases for headers.

boolean includeRecord(final String topic, final int partition, final long offset, final Object key, final Object value, final Headers headers);
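
For example, a filter written against that signature could look something like this (just a sketch; the HeaderValueFilter name and the "env" header are made up, and lastHeader returns the most recently added value for a key):

    import java.nio.charset.StandardCharsets;

    import org.apache.kafka.common.header.Header;
    import org.apache.kafka.common.header.Headers;

    public class HeaderValueFilter {
        /**
         * Keep only records whose most recent "env" header equals "production".
         */
        public boolean includeRecord(final String topic, final int partition, final long offset,
                                     final Object key, final Object value, final Headers headers) {
            final Header header = headers.lastHeader("env");
            if (header == null || header.value() == null) {
                return false;
            }
            return "production".equals(new String(header.value(), StandardCharsets.UTF_8));
        }
    }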
