Google's Web Security Scanner: skipfish

Pythia · Updated 2024-11-14

skipfish is a web security scanner for Linux. It supports scanning with custom dictionaries, can learn new dictionary keywords as it crawls, classifies findings by risk level (SQL injection, shell injection, XSS), and presents the results as a visual statistical report. I usually pair it with sqlmap to scan and vet web applications before they go live; a sketch of that hand-off is included at the end of this post.

Official site: http://code.google.com/p/skipfish/ (blocked in mainland China, so a proxy is required).

Installation

-------------------- install dependencies --------------------
yum install -y openssl-devel openssl zlib-devel zlib libidn-devel libidn
-------------------- build --------------------
wget https://skipfish.googlecode.com/files/skipfish-2.10b.tgz
tar xzvf skipfish-2.10b.tgz
cd skipfish-2.10b
make

Basic parameters

Authentication and access options:

  -A user:pass      - use specified HTTP authentication credentials
  -F host=IP        - pretend that 'host' resolves to 'IP'
  -C name=val       - append a custom cookie to all requests
  -H name=val       - append a custom HTTP header to all requests
  -b (i|f|p)        - use headers consistent with MSIE / Firefox / iPhone
  -N                - do not accept any new cookies
  --auth-form url   - form authentication URL
  --auth-user user  - form authentication user
  --auth-pass pass  - form authentication password
  --auth-verify-url - URL for in-session detection

Crawl scope options:

  -d max_depth     - maximum crawl tree depth (16)
  -c max_child     - maximum children to index per node (512)
  -x max_desc      - maximum descendants to index per branch (8192)
  -r r_limit       - max total number of requests to send (100000000)
  -p crawl%        - node and link crawl probability (100%)
  -q hex           - repeat probabilistic scan with given seed
  -I string        - only follow URLs matching 'string'
  -X string        - exclude URLs matching 'string'
  -K string        - do not fuzz parameters named 'string'
  -D domain        - crawl cross-site links to another domain
  -B domain        - trust, but do not crawl, another domain
  -Z               - do not descend into 5xx locations
  -O               - do not submit any forms
  -P               - do not parse HTML, etc, to find new links

Reporting options:

  -o dir          - write output to specified directory (required)
  -M              - log warnings about mixed content / non-SSL passwords
  -E              - log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
  -U              - log all external URLs and e-mails seen
  -Q              - completely suppress duplicate nodes in reports
  -u              - be quiet, disable realtime progress stats
  -v              - enable runtime logging (to stderr)

Dictionary management options:

  -W wordlist     - use a specified read-write wordlist (required)
  -S wordlist     - load a supplemental read-only wordlist
  -L              - do not auto-learn new keywords for the site
  -Y              - do not fuzz extensions in directory brute-force
  -R age          - purge words hit more than 'age' scans ago
  -T name=val     - add new form auto-fill rule
  -G max_guess    - maximum number of keyword guesses to keep (256)
  -z sigfile      - load signatures from this file

Performance settings:

  -g max_conn     - max simultaneous TCP connections, global (40)
  -m host_conn    - max simultaneous connections, per target IP (10)
  -f max_fail     - max number of consecutive HTTP errors (100)
  -t req_tmout    - total request response timeout (20 s)
  -w rw_tmout     - individual network I/O timeout (10 s)
  -i idle_tmout   - timeout on idle HTTP connections (10 s)
  -s s_limit      - response size limit (400000 B)
  -e              - do not keep binary responses for reporting

Other settings:

  -l max_req      - max requests per second (0.000000)
  -k duration     - stop scanning after the given duration h:m:s
  --config file   - load the specified configuration file

Usage example

Take scanning baidu.com as an example:

/path/skipfish -d 2 -S dictionaries/minimal.wl -o baidu http://www.baidu.com

The two sketches below show how some of the other options combine in practice.
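First, a hedged sketch that folds several of the flags listed above into one invocation; the target host, credentials, cookie value, and file names are hypothetical placeholders, not taken from the original post.

# A sketch combining several of the options above (hypothetical target/values):
#   -A  HTTP auth credentials          -C  attach a custom cookie
#   -b f  send Firefox-like headers    -d / -c  crawl depth / children per node
#   -I  only follow matching URLs      -g / -m  global / per-IP connection caps
#   -l  throttle to N requests per second
cp dictionaries/minimal.wl work.wl    # -W expects a writable wordlist
./skipfish -A admin:secret -C "session=abcd1234" -b f -d 4 -c 256 \
    -I /app/ -g 20 -m 5 -l 50 -W work.wl -o report_dir \
    http://www.example.com/app/

Copying the stock dictionary first keeps the shipped minimal.wl pristine, since skipfish writes learned keywords back into the -W wordlist.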

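skipfish 2.10b also supports form-based logins via the --auth-* flags shown above. The following sketch assumes a hypothetical login form and a profile page that is only reachable while logged in:

# Form-login scan (hypothetical URLs and credentials):
./skipfish --auth-form http://www.example.com/login \
    --auth-user alice --auth-pass s3cret \
    --auth-verify-url http://www.example.com/profile \
    -W work.wl -o report_auth http://www.example.com/

The --auth-verify-url page should be one that renders differently when logged out, so skipfish can detect when its session drops.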
Result

Below is a screenshot from the scan just executed; any security issues found are displayed directly in the report.

[Screenshot: scan results page]
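As mentioned at the top, I pair skipfish with sqlmap: skipfish only flags a URL as likely injectable, and sqlmap confirms and exploits it. A sketch of that manual hand-off, with a hypothetical flagged URL:

# Suppose the skipfish report flags a query parameter as a possible SQL
# injection point; feed that URL to sqlmap to verify it:
sqlmap -u "http://www.example.com/item.php?id=1" --batch --level=2 --risk=1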



Web · google · tools · web security
