Performance tips #92
My understanding is that there's a way to seed the parser with prior parser state that "warms" it up and speeds up the adaptation. Are you doing that before measuring? If not, that doesn't seem like a fair comparison against hand-written code. I've never done that before, however, so can't help you there, unfortunately. |
Just to be clear, the hand-written code is not optimized for a particular query; it is a parser for our language spec. We are benchmarking the code that we expect we will need to run for any user-specified query string. If this is not what you meant, I am interested in understanding more about seeding the parser a priori for any random query. I am currently benchmarking with the code below. Am I doing something that can be moved out of the for loop?
I have tried p.BuildParseTrees = false and attaching a dummy listener, but it seems to have the same performance results. |
Check out https://github.com/antlr/antlr4/blob/master/doc/faq/general.md,
it discusses some aspects of perf, although no mention of seeding. Sorry, I
can't recall where I saw that mentioned.
It seems from that document that your example is missing the Go equivalent
of this Java:
parser.getInterpreter().setPredictionMode(PredictionMode.SLL);
Does that help at all?
…On Tue, Dec 20, 2016 at 10:24 PM, Ashish Negi wrote:

for i := 0; i < b.N; i++ {
	// q is the Query. b.N is a million
	input := antlr.NewInputStream(q)
	lexer := parser.NewGraphQLPMLexer(input)
	stream := antlr.NewCommonTokenStream(lexer, 0)
	p := parser.NewGraphQLPMParser(stream)
	p.AddErrorListener(antlr.NewDiagnosticErrorListener(true))
	p.BuildParseTrees = true
	// up till here we have a cost of: 15000 for q1
	// the next call makes it 100 times more costly: 1800000
	_ = p.Document()
}

I have tried p.BuildParseTrees = false and attaching a dummy listener,
but it seems to have the same performance results.
|
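The SLL suggestion above is the first stage of the two-stage strategy the ANTLR FAQ recommends: parse in the fast SLL prediction mode first, and re-parse with full LL only if SLL reports a syntax error. In the Go runtime the mode switch itself should be roughly `p.GetInterpreter().SetPredictionMode(antlr.PredictionModeSLL)`. A minimal sketch of the fallback control flow, with hypothetical parseSLL/parseLL functions standing in for runs of the generated parser in each mode:

```go
package main

import (
	"errors"
	"fmt"
)

// parseSLL and parseLL are hypothetical stand-ins for running the generated
// parser in PredictionModeSLL and PredictionModeLL respectively.
func parseSLL(q string) (string, error) {
	if q == "ambiguous" { // pretend SLL cannot decide on this input
		return "", errors.New("SLL failed")
	}
	return "tree(" + q + ")", nil
}

func parseLL(q string) (string, error) {
	return "tree(" + q + ")", nil
}

// parseTwoStage tries the cheap SLL mode first and falls back to the full
// LL mode only when SLL rejects the input.
func parseTwoStage(q string) (string, error) {
	if t, err := parseSLL(q); err == nil {
		return t, nil
	}
	return parseLL(q)
}

func main() {
	t, _ := parseTwoStage("{ debug }")
	fmt.Println(t)
}
```

Since SLL succeeds on the vast majority of real inputs, the expensive LL path is rarely taken, which is where the speedup comes from.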
Also note that that document mentions you should be warming up the parser
to get the best perf. I assume they mean by hand, just have it parse a
corpus beforehand. That's one way to "seed" the parser. ;) Does that make a
difference?
|
Check out ATNDeserializer/ATNSerializer. Not sure if that's it exactly, but
they look interesting. I see them in the Java runtime doc, at least, and I
believe there's at least an ATNDeserializer in the Go runtime.
|
Thanks. I had read that document, but missed the SLL part. Q: For seeding and warm-up to work, will all query parsing have to go through the same parser object? Where does antlr4 store all that information? Working on your suggestions:
Also, I think, luckily for me, if everything that I tried is OK, I still do not see much improvement.
|
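On the question of where ANTLR keeps the adaptation state: to my understanding, the DFA that the ATN interpreter builds during prediction is shared across parser instances through package-level state in the generated code, so a fresh parser created per query still benefits from warm-up done by earlier parses. A toy model of that sharing (this is an illustration of the idea only, not ANTLR's actual cache structure):

```go
package main

import "fmt"

// dfaCache models the package-level prediction cache that all parser
// instances share; in real ANTLR it caches prediction decisions, not
// whole query results.
var dfaCache = map[string]string{}

type Parser struct{ query string }

// Parse returns the result and whether the shared cache was already warm.
func (p *Parser) Parse() (string, bool) {
	if t, ok := dfaCache[p.query]; ok {
		return t, true // warm path: decision already cached
	}
	t := "tree(" + p.query + ")" // cold path: do the expensive analysis
	dfaCache[p.query] = t
	return t, false
}

func main() {
	_, warm1 := (&Parser{"{ debug }"}).Parse() // cold: fills the shared cache
	_, warm2 := (&Parser{"{ debug }"}).Parse() // a NEW instance is already warm
	fmt.Println(warm1, warm2)
}
```

The practical consequence: warming up by parsing a representative corpus once at startup should help even if each request then constructs its own lexer and parser objects.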
I would look into ATNDeserializer. |
It doesn't hurt to ask in antlr/antlr4 too. Sometimes grammars themselves can be
inefficient. @parrt or @sharwell might be able to help, and would
definitely know whether ATNSerializer/ATNDeserializer are what you're looking
for.
…On Wed, Dec 21, 2016 at 12:22 AM, Ashish Negi wrote:

I would look into ATNDeserializer.
Thanks for your suggestions. We would definitely use ANTLR if we can bring down the numbers to 2-3x that of the current implementation.
|
I asked this on the antlr/antlr4 page. We found that lexing is taking most of the time; the lexer-only benchmark and the lexer-plus-parser benchmark below show it. I think this is a well-known issue, that lexing takes most of the time. |
No idea. I think you know more than I do at this point. Can you link to
your issue on antlr/antlr4?
…On Wed, Dec 21, 2016 at 3:03 AM, Ashish Negi wrote:

I asked this on the antlr/antlr4 page. We found that lexing is taking most of the time. However, I am not sure whether to continue the discussion there, as it seems that the antlr4 GitHub page is not the right place for performance problems.

With only lexing:

func runAntlrParser(q string, b *testing.B) {
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		input := antlr.NewInputStream(q)
		lexer := parser.NewGraphQLPMLexer(input)
		// lexer-only benchmark: drain the token stream
		t := lexer.NextToken()
		for t.GetTokenType() != antlr.TokenEOF {
			t = lexer.NextToken()
		}
	}
}

I found that lexing is taking most of the time. Benchmark with only the lexer for antlr:

/query$ gotb -test.run=XXX -v -benchtime=5s
BenchmarkQueryParse/spielberg:handwitten:-4      200000     45724 ns/op
BenchmarkQueryParse/spielberg:antlr:-4             5000   1468218 ns/op
BenchmarkQueryParse/tomhanks:handwitten:-4       500000     28649 ns/op
BenchmarkQueryParse/tomhanks:antlr:-4              5000   1538988 ns/op
BenchmarkQueryParse/nestedquery:handwritten:-4   100000     80210 ns/op
BenchmarkQueryParse/nestedquery:antlr:-4           5000   3029668 ns/op
PASS
ok github.com/dgraph-io/dgraph/query 63.546s

With both lexer and parser:

~/work/golang/src/github.com/dgraph-io/dgraph/query$ gotb -test.run=XXX -v -benchtime=5s
BenchmarkQueryParse/spielberg:handwitten:-4      300000     47772 ns/op
BenchmarkQueryParse/spielberg:antlr:-4             3000   1868297 ns/op
BenchmarkQueryParse/tomhanks:handwitten:-4       500000     27980 ns/op
BenchmarkQueryParse/tomhanks:antlr:-4              5000   1616518 ns/op
BenchmarkQueryParse/nestedquery:handwritten:-4   100000     74961 ns/op
BenchmarkQueryParse/nestedquery:antlr:-4           2000   3312977 ns/op
PASS
ok github.com/dgraph-io/dgraph/query 58.056s

I think this is a well-known issue, that lexing takes most of the time. Where can I read more about bringing down the lexing time?
|
This is the issue: antlr/antlr4 issue |
It very well could be a perf issue with the Go antlr runtime. Have you tried benchmarking with another language target like Java or Python? If it is indeed just a Go runtime issue, it would be immensely helpful if you could profile your program to pinpoint the hotspot(s). |
I have not yet benchmarked other targets, but here are perf top-10 results for 100 iterations of
|
After benchmarking with the cpp target: only lexing averages ~42 microseconds,
and lexing plus parsing ~107 microseconds. [Results are for 1 million iterations.]
So we can fix this issue in the Golang target itself. This is not bad news. Here is the output of the token stream and tree on the cpp version, just to be sure that it works properly:
Interesting. So there probably are perf gains to be had in the Go lexer.
I don't have the time right now to investigate, but hopefully someone does!
…On Thu, Dec 22, 2016 at 3:58 AM Ashish Negi ***@***.***> wrote:
After benchmarking
<https://github.com/ashishnegi/dgraph/blob/bench-antlr4/antlr4go/graphqlpmcpp/GraphQLPM.cpp>
with the cpp target:

Only lexing on the nestedquery query, 1 million times => average time is ~42 microseconds. The same query in the golang target takes on average ~3.3 milliseconds for lexing; around 80 times slower.

/graphqlpmcpp$ time ./graphqlexe
real 0m42.380s
user 0m42.368s
sys 0m0.000s

And with lexing and parsing: ~107 microseconds. [Results are for 1 million iterations.]

/graphqlpmcpp$ time ./graphqlexe
real 1m46.194s
user 1m46.172s
sys 0m0.004s

So we can fix this issue in the Golang target itself. This is not bad news. Here is the output of the token stream and tree on the cpp version, just to be sure that it works properly:

/graphqlpmcpp$ ./graphqlexe
[@0,0:0='{',<10>,1:0]
[@1,1:5='debug',<14>,1:1]
[@2,6:6='(',<7>,1:6]
[@3,7:11='_xid_',<14>,1:7]
[@4,12:12=':',<12>,1:12]
[@5,14:22='"m.06pj8"',<13>,1:14]
[@6,23:23=')',<2>,1:23]
[@7,25:25='{',<10>,1:25]
[@8,26:62='type.object.name.enfilm.director.film',<14>,1:26]
[@9,64:64='(',<7>,1:64]
[@10,65:69='first',<14>,1:65]
[@11,70:70=':',<12>,1:70]
[@12,72:74='"2"',<13>,1:72]
[@13,75:75=',',<8>,1:75]
[@14,77:82='offset',<14>,1:77]
[@15,83:83=':',<12>,1:83]
[@16,84:87='"10"',<13>,1:84]
[@17,88:88=')',<2>,1:88]
[@18,90:97='@filter(',<1>,1:90]
[@19,98:102='anyof',<5>,1:98]
[@20,103:103='(',<7>,1:103]
[@21,104:124='"type.object.name.en"',<13>,1:104]
[@22,126:126=',',<8>,1:126]
[@23,128:138='"war spies"',<13>,1:128]
[@24,139:139=')',<2>,1:139]
[@25,141:142='&&',<4>,1:141]
[@26,144:148='allof',<6>,1:144]
[@27,149:149='(',<7>,1:149]
[@28,150:170='"type.object.name.en"',<13>,1:150]
[@29,171:171=',',<8>,1:171]
[@30,173:185='"hello world"',<13>,1:173]
[@31,186:186=')',<2>,1:186]
[@32,187:188='||',<3>,1:187]
[@33,190:194='allof',<6>,1:190]
[@34,195:195='(',<7>,1:195]
[@35,196:216='"type.object.name.en"',<13>,1:196]
[@36,217:217=',',<8>,1:217]
[@37,219:231='"wonder land"',<13>,1:219]
[@38,232:232=')',<2>,1:232]
[@39,233:233=')',<2>,1:233]
[@40,236:236='{',<10>,1:236]
[@41,237:325='_uid_type.object.name.enfilm.film.initial_release_datefilm.film.countryfilm.film.starring',<14>,1:237]
[@42,327:327='{',<10>,1:327]
[@43,328:349='film.performance.actor',<14>,1:328]
[@44,351:351='{',<10>,1:351]
[@45,352:370='type.object.name.en',<14>,1:352]
[@46,371:371='}',<11>,1:371]
[@47,372:397='film.performance.character',<14>,1:372]
[@48,399:399='{',<10>,1:399]
[@49,400:418='type.object.name.en',<14>,1:400]
[@50,419:419='}',<11>,1:419]
[@51,420:420='}',<11>,1:420]
[@52,421:435='film.film.genre',<14>,1:421]
[@53,437:437='{',<10>,1:437]
[@54,438:456='type.object.name.en',<14>,1:438]
[@55,457:457='}',<11>,1:457]
[@56,458:458='}',<11>,1:458]
[@57,459:459='}',<11>,1:459]
[@58,460:460='}',<11>,1:460]
[@59,461:460='<EOF>',<-1>,1:461]
(document (definition (selectionSet { (field debug (arguments ( (argument _xid_ : (value "m.06pj8")) )) (selectionSet { (field type.object.name.enfilm.director.film (arguments ( (argument first : (value "2")) , (argument offset : (value "10")) )) (filters @filter( (pair (funcName anyof) ( (fieldName "type.object.name.en") , (value "war spies") )) (filterOperation &&) (pair (funcName allof) ( (fieldName "type.object.name.en") , (value "hello world") )) (filterOperation ||) (pair (funcName allof) ( (fieldName "type.object.name.en") , (value "wonder land") )) )) (selectionSet { (field _uid_type.object.name.enfilm.film.initial_release_datefilm.film.countryfilm.film.starring (selectionSet { (field film.performance.actor (selectionSet { (field type.object.name.en) })) (field film.performance.character (selectionSet { (field type.object.name.en) })) })) (field film.film.genre (selectionSet { (field type.object.name.en) })) })) })) })))
|
First of all, thanks for your contribution to antlr4 go.
I have read on the antlr4 golang target issues that you are using it in non-trivial production work. We are also considering it for parsing our graph database language spec. :)
Our language spec is a variant of GraphQL.
We started benchmarking from the simplest subset grammar.
Benchmarks:
We expected these numbers to be under 0.05 ms. They are currently around 1.5 ms.
Here are comparisons of the handwritten parser and the antlr golang parser over practical queries:
Benchmarks:
Antlr4 is around 40x slower.
Is this expected? Or are we doing something wrong?
Can you give some performance tips? How can we benchmark the lexer and parser steps separately?
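On the last question, the pattern used later in this thread (runAntlrParser) is the usual one: one benchmark that only drains the token stream, and one that also invokes the start rule; the difference is the parser's share. A self-contained sketch of that structure using testing.Benchmark, with trivial lexOnly/lexAndParse stand-ins in place of the generated ANTLR lexer and parser:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// lexOnly stands in for draining the ANTLR token stream
// (lexer.NextToken until antlr.TokenEOF).
func lexOnly(q string) int {
	return len(strings.Fields(q)) // pretend each field is one token
}

// lexAndParse stands in for lexing plus a full p.Document() call.
func lexAndParse(q string) int {
	return lexOnly(q) * 2 // pretend to build a tree on top of the tokens
}

func main() {
	q := `{ debug ( _xid_ : "m.06pj8" ) }`
	// testing.Benchmark lets us time both stages without a *_test.go file.
	lex := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			lexOnly(q)
		}
	})
	full := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			lexAndParse(q)
		}
	})
	// The gap between the two numbers is the cost attributable to parsing.
	fmt.Println("lex-only:", lex, "lex+parse:", full)
}
```

With the real generated code, substituting the ANTLR lexer loop and the Document() call into the two closures gives the per-stage numbers directly.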