P.S. - I originally asked this question in Discussions but didn't get any response, so I'm assuming katana doesn't have this feature implemented yet.
I have a file named sites.txt like this:
https://google.com
https://example.com
I want to scan them using katana; however, the default -l flag won't work for me, as I need to set a crawl duration for each URL with the -ct flag. I tried the following:
cat sites.txt | while read URL; do katana -u $URL -ct 10s; done
Output:
$ cat sites.txt | while read URL; do katana -u $URL -ct 10s; done
__ __
/ /_____ _/ /____ ____ ___ _
/ '_/ _ / __/ _ / _ \/ _ /
/_/\_\\_,_/\__/\_,_/_//_/\_,_/
projectdiscovery.io
[INF] Current katana version v1.1.0 (latest)
[INF] Started standard crawling for => https://example.com
[INF] Started standard crawling for => https://google.com
As you can see from the above output, katana starts crawling both sites immediately instead of waiting for the first to finish. This can cause serious problems: if the file contains many sites, say 1000, my system will crash immediately, since katana starts crawling all of them at once without any throttling.
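I suspect the cause is that katana also accepts target URLs on STDIN, so inside a piped while loop the first katana invocation inherits the pipe and swallows the remaining lines of sites.txt; that would explain the single banner followed by "Started standard crawling" for both URLs. The same pitfall can be reproduced with any command that reads stdin. Below, cat stands in for katana (this is an assumption about katana's behaviour; I haven't confirmed it in its source):

$ cat sites.txt | while read -r URL; do echo "iteration for: $URL"; cat > /dev/null; done
iteration for: https://google.com

The inner cat drains the pipe, so read gets EOF and the loop body runs only once.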
I also checked whether my approach gives the desired result, i.e. "crawl each site for a maximum of 10s", but it didn't. See the output below:
$ time cat sites.txt | while read URL; do katana -u $URL -ct 10s; done
__ __
/ /_____ _/ /____ ____ ___ _
/ '_/ _ / __/ _ / _ \/ _ /
/_/\_\\_,_/\__/\_,_/_//_/\_,_/
projectdiscovery.io
[INF] Started standard crawling for => https://google.com
[INF] Started standard crawling for => https://example.com
https://google.com
https://example.com
real 0m15.171s
user 0m0.089s
sys 0m0.027s
As you can see, the command took only about 15s to run, but it should have taken more than 20s, since I set 10s + 10s for the two sites. So this approach didn't work.
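If stdin consumption really is the cause, redirecting katana's standard input from /dev/null should keep it from swallowing the loop's input, and feeding sites.txt through the loop's own redirection avoids the pipe entirely. A sketch of the workaround (same -ct 10s budget per URL as above):

while read -r URL; do
  # /dev/null keeps katana off the loop's input stream
  katana -u "$URL" -ct 10s < /dev/null
done < sites.txt

With this, each invocation gets its own 10s budget, so the two sites should take a little over 20s in total.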
Still, I would be grateful if anyone could confirm the cause or bless me with a proper fix for this.
@ayushkr12 if katana doesn't find anything on example.com, it will simply skip it and move to the next URL, even if you specify a custom time with -ct 10s or any other value.
Yes, but the behaviour is the same with every domain, not just example.com.