Hacking
August 29, 2022

The best tools for finding website vulnerabilities (Detailed Guide)

A free, detailed website vulnerability encyclopedia for our favorite subscribers.

Hey Freaks! The most expensive bugs found in bug bounty programs are not always classics like XSS, SQL injection, or flaws in application logic. Leaked logs, unauthorized access to critical files and services, session and authorization tokens, source code and repositories: all of this can be used by attackers to mount successful attacks, which is why companies pay well for such findings.

Today, by popular request, I will tell you about useful tools for finding website vulnerabilities.

For example, Snapchat's API access token was recently leaked. The vulnerability was valued at $15,000, while the bug hunter himself expected only an informative status and a much more modest reward.

From my experience and that of my colleagues, I can say that even a server-status page, phpinfo, or log files can pay out up to $1,000. You might say that looking for all of this manually is tedious and unproductive, and that surely someone else has already found all the bugs. Far from it! Bugs are everywhere: companies update or roll out new services, developers forget to turn off debug logs or delete repositories.


Introduction

Hacking always starts with OSINT, collecting data about the target, and success in finding a bug depends on the quality of the information gathered. Unlike a classic pentest engagement, thousands of people can participate in a bug bounty program, which, as you understand, means that finding a bug is not so easy. In today's article, I will introduce you to the best tools to help gather information about the target before you start hacking.


The basis for finding vulnerabilities

To search for vulnerabilities more effectively, we need to monitor changes on the company's perimeter: this lets us quickly detect new services or new versions of web applications. Monitoring is also useful if you have a team, since you can exchange information when hunting together.

Serpico

The first thing we need is a system for storing and describing bugs. At one time I used Dradis, but now I prefer Serpico. This program lets you file bugs by classification and immediately pulls up their descriptions. All of this is easily customizable and scales well for teamwork.

The framework runs on Linux and Windows, and there is a Docker container for deployment.

Serpico page on GitHub


nmap-bootstrap-xsl

No, reader, I don't take you for a script kiddie who has never heard of Nmap. But we will also use a great add-on for working with scan logs: nmap-bootstrap-xsl. It converts scan results into HTML with a user-friendly, sortable interface, which is very handy when comparing scan results.

  • Download nmap-bootstrap.xsl and start scanning:

nmap -sS -T4 -A -sC -oA scanme --stylesheet https://raw.githubusercontent.com/honze-net/nmap-bootstrap-xsl/master/nmap-bootstrap.xsl scanme.nmap.org scanme2.nmap.org

  • Next, we convert the scan output to HTML:

xsltproc -o scanme.html nmap-bootstrap.xsl scanme.xml

Nmap-bootstrap-xsl page on GitHub


Sparta

As an alternative to the previous add-on, you can use Sparta, an all-in-one scanner that is included in most popular security distributions.

This utility is a wrapper around Nmap, Hydra, and other popular tools. Sparta makes it convenient to carry out surface reconnaissance of the perimeter:

  • collect "live" hosts;
  • brute-force common services found on the perimeter, such as FTP and SSH;
  • collect web applications and screenshots.

Sparta page on GitHub


badKarma

Another all-in-one harvester for probing the network perimeter. This tool is more advanced than Sparta and lets you combine multiple actions in one interface. Although badKarma is still a rough product, it is actively developed, and there is hope we will get another solid framework soon.

Keep in mind that there is no automatic installation, and a stock Kali Linux lacks some of the utilities badKarma needs to work.

badKarma GitHub Page


Dictionaries

Before you start reconnaissance, you should stock up on a pack of good dictionaries. A lot depends on this choice: the more hidden parameters, subdomains, directories, and files you collect, the higher the chance of discovering some kind of security hole.

You can find a huge number of dictionaries on the Internet, but not all of them are effective. I have settled on several very interesting options that have helped me out more than once and let me discover places others had not yet reached.


fuzz.txt

I always start with fuzz.txt, which contains a list of potentially dangerous files and directories. The dictionary is updated with new words almost every month. It runs quickly, so you can start digging into the findings right away while running other, larger lists in parallel.

The dictionary contains only 4842 words, but in my experience it is great for initial reconnaissance.
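
For example, you can feed fuzz.txt straight into any directory brute-forcer. A minimal sketch using Gobuster (covered later in this article); the target URL is a placeholder and the extension list is just a suggestion:

# brute-force files and directories from fuzz.txt with 50 threads, checking a few extra extensions
gobuster dir -u https://target.example -w fuzz.txt -x php,txt,log -t 50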

Download from GitHub


SecLists

SecLists is a whole collection of dictionaries that are very useful not only in bugbounty, but also in hacking. Dictionaries include usernames, passwords, URL parameters, subdomains, web shells, and more.

I highly recommend taking a little time to study the contents of the collection in detail.

Download from GitHub


Assetnote Wordlists

Another great collection of dictionaries for discovering all sorts of content and subdomains. Dictionaries are generated on the 28th of each month using commonspeak2 and GitHub Actions.

In addition to automatically generated selections, the site also has manual dictionaries created using Google BigQuery.

Project site Assetnote Wordlists


Generating your own dictionaries

Often you have to generate your own dictionaries. Writing a script for this is not difficult, of course, but why reinvent the wheel?

There are many tools for generating dictionaries, but of all of them I use Pydictor. The tool offers a wide range of features that let you create the perfect dictionary for almost any situation.

In addition, Pydictor can compare files, count the frequency of words, and combine multiple dictionaries into one.

Let's take a look at an example. Suppose we know that the password is a modified version of the word Password and may contain:

  • @ instead of a;
  • 0 instead of o;
  • one to three digits at the end.

Such a dictionary is generated using the following command:

./pydictor.py --conf '[P]{1,1}<none>[a,@]{1,1}<none>ssw[o,0]{1,1}<none>rd[0-9]{1,3}<none>' --output /home/kali/Desktop/pydict

Here, <none> means that the utility does not need to do anything else with that character combination.

At the end, Pydictor displays a short summary of the generation process. Not the most important info, but it shows the developer's attention to detail.


Collection of information

Aquatone

Aquatone is a suite of tools for domain name reconnaissance. Using open sources, it can detect subdomains of a given domain, but you can also run a full dictionary brute force.

Once subdomains are detected, Aquatone can scan the hosts for common web ports, collecting HTTP headers, HTML bodies, and screenshots into a consolidated report for convenient analysis of the attack surface.
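
A minimal sketch of the usual workflow with the current Go version of Aquatone, which reads hosts from stdin; the -out flag and file names are from my reading of the README, so double-check against aquatone -h:

# take the subdomains gathered earlier, probe web ports, collect headers and screenshots
cat subdomains.txt | aquatone -out ./aquatone-report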

https://github.com/michenriksen/aquatone


Assetnote

First come, first served. This rule is especially relevant in hacking in general and bug hunting in particular. The Assetnote utility notifies us when new subdomains appear for a tracked target. To do this, you need to add an API key for the Pushover service.

When a new subdomain is found, you will receive a notification on your mobile phone. After that, you need to run as fast as you can to the computer and look for bugs.

https://github.com/tdr130/assetnote


Meg, MegPlus and Smith

Meg is one of the best tools for finding valuable information. With Meg, you can probe many domains and subdomains in a short period of time while looking for something specific (such as server-status).
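
Here is a minimal sketch of that exact server-status hunt. It assumes hosts.txt contains base URLs with their scheme (one per line) and uses an arbitrary output directory name:

# request /server-status from every host; each response is stored as a separate file plus an index
meg /server-status hosts.txt meg-out
# then grep the responses for the telltale Apache status page
grep -ril "apache server status" meg-out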

Meg also knows how to work with the list of bug bounty programs from HackerOne (h1). I advise you to watch the video presentation by the author of this wonderful tool.

MegPlus & Smith by EdOverflow are wrappers for the original Meg.

Meg+ (unfortunately deprecated) adds the following search and vulnerability detection features to the base set of Meg functions:

  • subdomains (using Sublist3r);
  • configs;
  • interesting lines;
  • open redirect;
  • CRLF injections;
  • CORS misconfigs;
  • path-based XSS;
  • domain and subdomain takeover.

The Smith utility lets you parse Meg results to find needles in the haystack of findings.

https://github.com/tomnomnom/meg
https://github.com/EdOverflow/megplus
https://github.com/EdOverflow/smith


Finding Information in Public Repositories

Often, in companies' public repositories you can find information useful for gaining access to an application, even without analyzing the published code. For example, the auth token mentioned above earned a bug hunter $15,000!

Another good practice that has already paid off is parsing GitHub accounts from LinkedIn profiles of company employees.

So, our target is github.com. If you don't have an account, create one: at the very least it will let you search by content, and at most you may even start writing code. Or at least forking it.


Github Hunter

This tool automates the search through GitHub repositories: we fill in the keywords and payloads to search for, the mail account, and the address to send the results to. After a long search, we get a .db file with the repositories, files, and lines of code in which the keywords were found.

https://github.com/Hell0W0rld0/Github-hunter


gitleaks

This utility identifies sensitive data in a specific repository or across a specific user's repositories. A must-have for when you know where to look but not what to look for.
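
A minimal sketch for a locally cloned repository, based on the detect subcommand of recent gitleaks releases (older versions used different flags, so check the help output for your version):

# clone the target repo, then scan it and write findings to a JSON report
git clone https://github.com/some-org/some-repo && cd some-repo
gitleaks detect --source . --report-path gitleaks-report.json -v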

https://github.com/zricethezav/gitleaks


Port scanners

Combing through every port in search of something interesting is always a treat. If you manage to find something no one has touched before, even better!

At the same time, do not forget that even seemingly harmless ports can hide something unexpected. For example, I once found an HTTP service on port 22: you can't even open it in a browser, only through curl or wget!

If the scope is not particularly large, then Nmap is suitable for scanning, which definitely needs no introduction.

But what if there are a lot of hosts? Nmap is a powerful tool, but it has a significant drawback: it is slow. An alternative, though not a competitor, is masscan: it is fast, but not as feature-rich as Nmap. To make port scanning really fast and efficient, you can use both scanners together. How? I'll show you now!


MassMap

MassMap allows you to scan a large number of IP addresses with the speed of masscan and the thoroughness of Nmap. MassMap is written in Bash, so you don't have to compile anything to use it.

Before starting the scan, the script will check the availability of everything necessary for work, and if something is missing, it will automatically install it.

The algorithm is simple: first, masscan scans all 65,535 TCP ports on the supplied list of IP addresses. Then Nmap goes over the open ports it found (including with scripts), producing extended information for each one.
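
To make the idea concrete, here is a rough manual equivalent of what MassMap automates. It is only a sketch: the file names are placeholders, and unlike MassMap it rescans every discovered port on every live host instead of matching ports to hosts:

# fast pass: all TCP ports on every address in ips.txt
sudo masscan -p1-65535 --rate 20000 -iL ips.txt -oL masscan.out
# -oL lines look roughly like: open tcp 80 203.0.113.5 1625000000
awk '/^open/ {print $4}' masscan.out | sort -u > live_hosts.txt
awk '/^open/ {print $3}' masscan.out | sort -un | paste -sd, - > open_ports.txt
# slow pass: detailed Nmap scan of the discovered ports
sudo nmap -sV -sC -p "$(cat open_ports.txt)" -iL live_hosts.txt -oA nmap_detail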

The result can be saved in a human-readable format.

Since the script is really just a wrapper over the two scanners, you can change any parameters or add your own tools; in short, create and improve!

Download MassMap from GitHub


Dnmasscan

Dnmasscan is another Bash script, this one for automatically resolving domain names and then scanning the resulting addresses with masscan. Since masscan does not accept domain names, the script creates a file containing the domains' IP addresses.
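
The core idea fits in a couple of lines of shell. A rough sketch with placeholder file names (dig ships with the dnsutils/bind-utils package):

# resolve every domain to IPv4 addresses and deduplicate
while read -r d; do dig +short "$d" A; done < domains.txt | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' | sort -u > ips.txt
# scan the resulting addresses with masscan
sudo masscan -p1-65535 --rate 10000 -iL ips.txt -oL masscan.out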

Download Dnmasscan from GitHub


Vulnerability Scanners

Sometimes, right on the surface, you can find banal web vulnerabilities that, for one reason or another, made it into production. It could be an XSS, some kind of leak, or a data disclosure. It may sound like a fantasy, but it happens.

All of this can be detected with the simplest web vulnerability scanners, so let's add these tools to our must-have list.


Input Scanner

This PHP-based tool detects input forms and JS libraries in the web application under test.

For example, you can export a list of URLs from Burp or ZAP (proxy history, Logger++, and so on) and run them through this tool after adding your payloads.

At the output, we get a list of URIs that we can then fuzz with Burp Intruder to find attack vectors.

https://github.com/zseano/InputScanner


Parameth

This utility helps you make GET and POST requests to a web application in order to find parameters hidden from ordinary spiders and crawlers, which only parse explicit links in the application under study.

https://github.com/maK-/parameth


XSStrike

A fierce tool for finding all kinds of XSS. It can detect DOM-based and reflected XSS, crawl a web application, fuzz parameters to bypass WAFs, brute-force payloads from a file, detect hidden parameters, and manipulate header values.

XSStrike automates a lot of routine work. With proper tuning, excellent results are guaranteed.
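
A sketch of two typical runs; the target URL is a placeholder, and -u and --crawl are the flags I remember from the README (see python3 xsstrike.py --help for the full list):

# test a specific parameterized URL
python3 xsstrike.py -u "https://target.example/search?q=query"
# or crawl the application and test what it finds
python3 xsstrike.py -u "https://target.example/" --crawl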

https://github.com/s0md3v/XSStrike


Files, directories, options

The subdomains and IP addresses are collected; it's time to start exploring them. Here we mainly use all kinds of brute-forcers, which analyze the responses to figure out whether a given path or parameter exists.


gobuster

Gobuster is one of the most powerful and well-known tools for finding files and directories on websites. If all it could do was brute-force paths and compare response codes, you could write the same thing in Python in about five minutes. But Gobuster can also enumerate subdomains, virtual host names on the target web server, and open Amazon S3 buckets.
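
A few typical invocations as a sketch. Targets and wordlists are placeholders, and the mode names come from the Gobuster 3.x syntax as I remember it, so confirm with the built-in help:

# files and directories
gobuster dir -u https://target.example -w raft-medium-directories.txt -t 50
# subdomains via DNS
gobuster dns -d target.example -w subdomains-top1million.txt
# virtual hosts on a single web server
gobuster vhost -u https://target.example -w vhosts.txt
# open S3 buckets
gobuster s3 -w bucket-names.txt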

Download Gobuster from GitHub


gospider

GoSpider is a feature-rich web spider, also written in Go. The utility can parse robots.txt and sitemap.xml, search for subdomains in responses, and pull links from the Internet Wayback Machine.

GoSpider also supports parallel crawling of several sites, which greatly speeds up the process of collecting information.
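
A sketch of a basic run; -s (site), -o (output directory), -c (concurrency), and -d (depth) are the flags I recall from the README, so verify with gospider -h:

gospider -s "https://target.example/" -o gospider-out -c 10 -d 2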

Download GoSpider from GitHub


Search engines

I found my first bug precisely thanks to search engines. It happens that neither Gobuster nor GoSpider returns any results, but if the tools find nothing, that does not mean there really is nothing on the site.

In difficult cases, search engines often come to the rescue: just type site:site.com into them and the search robot will return a ready-made list. Many files and directories would never have been found without search engines.

It is important to use several search engines at once (yes, besides Google there are also Bing and Yahoo), because each can show different results.
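
Beyond the bare site: operator, a few standard refinements help narrow things down; the domain below is just an example:

site:target.example filetype:log
site:target.example inurl:admin
site:*.target.example -site:www.target.example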

Let's look at the search results for iom.bus.att.com. Google returns only two results.

And now the same query in Bing.

As you can see, there are now nine results instead of two. Moral: don't forget that there are other search engines besides Google.


Arjun

Arjun can find hidden query parameters for given endpoints. Here are some features:

  • supports GET, POST, POST-JSON, and POST-XML requests;
  • exports results to Burp Suite, text, or JSON files;
  • handles rate limits and timeouts automatically.
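
A minimal sketch; the endpoint is a placeholder, and -u, -m, and -oJ are the flags I recall for the target URL, HTTP method, and JSON output (confirm with arjun -h):

arjun -u https://target.example/api/endpoint -m GET -oJ params.json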

Download Arjun from GitHub


Internet Wayback Machine

The Wayback Machine is a massive archive of web pages with over 330 billion saved copies, all indexed for easy searching. The project saves historical versions, so you can go back many years and see how a site you are interested in used to look.

How is this useful to us? For example, it can be interesting to look into old robots.txt files. They list the endpoints that search engines should not index, and this list changes over time.

The Wayback Machine quietly archives all of this, and the old endpoints may well still work, so it would be a crime not to take this opportunity to get a list of obviously interesting locations from the site owners themselves!
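
You can pull these archived copies by hand through the Wayback Machine's CDX API. A sketch with a placeholder domain; the query parameters shown are the documented ones, but double-check them against the CDX API docs:

# list archived snapshots of robots.txt (timestamp and original URL, HTTP 200 only, deduplicated)
curl -s "http://web.archive.org/cdx/search/cdx?url=target.example/robots.txt&fl=timestamp,original&filter=statuscode:200&collapse=digest"
# fetch a specific snapshot by its timestamp
curl -s "http://web.archive.org/web/20180101000000/https://target.example/robots.txt"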


Waybackrobots

Waybackrobots is a handy and very simple script that automates getting older versions of robots.txt. It has only one required parameter, -d, which specifies the domain to dig into.

Download Waybackrobots from GitHub


wbk.go

The Wayback Machine, among other things, keeps a list of all the URLs it has collected for a domain. For example, you can get a list of all the URLs the machine has archived for tesla.com.

The wbk.go script will automatically extract URLs archived from the Wayback Machine for the domain you need.

go run wbk.go tesla.com

Download wbk.go from GitHub


GitHub

GitHub is the industry standard for version control and project collaboration. Millions of developers push changes to GitHub many times a day, and they do not always check what exactly they are uploading. It happens that they accidentally forget to delete credentials: logins, passwords, and all kinds of tokens.

You have surely come across Google dorks more than once. GitHub has its own dorks that you can use to find tasty data like API keys.
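
A few illustrative queries for the GitHub search box. The company name and domain are placeholders, and qualifier support changes as GitHub reworks code search, so treat these only as a starting point:

"target.example" password
"api.target.example" filename:.env
"target.example" filename:id_rsa
org:target-org extension:sql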


gdorklinks.sh

A simple script that generates GitHub search links with dorks. As a parameter, you specify the name or website of the company. The result is a set of ready-made links that you just paste into the browser and then study the information returned.

Download gdorklinks.sh from GitHub Gist


GitDorker

GitDorker does not just generate links; it immediately searches for information using the GitHub Search API and an extensive list of dorks, of which there are currently 513. This tool can be considered a more advanced version of the previous script.

To work, you will need a GitHub personal access token, and preferably at least two. This is because the search API is limited to 30 requests per minute: with only one token, you will run into the limits very quickly.
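
A sketch of how a run might look. The flag names (-tf for a token file, -q for the query, -d for the dork list, -o for the output prefix) are from my recollection of the README and should be treated as assumptions to verify against GitDorker's help:

# search GitHub for target.example using a file of tokens and the bundled dork list (flag names assumed)
python3 GitDorker.py -tf tokens.txt -q target.example -d Dorks/alldorksv3 -o gitdorker-out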

Download GitDorker from GitHub


Frameworks

When you do bug bounty or pentesting for a long time, reconnaissance starts to turn into a routine, and you inevitably begin thinking about automation. Below we will talk about frameworks that automate reconnaissance almost completely.


Sudomy

Sudomy is, without exaggeration, a powerful Bash script. It includes many tools for analyzing and enumerating subdomains. Information can be gathered passively or actively.

For the active method, the script uses Gobuster because of its high brute-force speed. Subdomain brute-forcing uses a dictionary from SecLists (Discovery/DNS) that contains about three million entries.

In the passive method, information is collected from 22 sources, including Censys, SpySe, DNSdumpster, and VirusTotal.

It would take another article to fully break down Sudomy, so I'll just say what it can do:

  • checks if it is possible to easily capture a subdomain;
  • identifies the technologies used by the site;
  • detects ports, urls, headers, content length, HTTP status code;
  • checks if the IP belongs to Cloudflare;
  • can send notifications to Slack;
  • scans ports from collected IP addresses, subdomains, virtual hosts.
Complete Sudomy Guide Created by the Developer (PDF)

For example, let's just run the script with the parameters --all (run all enumerations) and --html (generate an HTML report).

Let's see what it finds, for example, for hackerone.com.
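
The invocation would look something like this. The --all and --html flags are the ones mentioned above; -d as the target-domain switch is my assumption from the Sudomy README:

./sudomy -d hackerone.com --all --html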

Almost all of the information found can be viewed in the report generated by the script, which has a good structure and a very friendly interface.

Download Sudomy from GitHub


Reconftw

Reconftw is a very large script that automates literally everything, from reconnaissance to vulnerability discovery. It incorporates the best tools used by bug hunters, including ones described in this article.

Here is just a small part of what it can do:

  • search for URLs on the site;
  • collect information about subdomains;
  • look for open S3 buckets and dump their contents;
  • check for XSS, SSRF, CRLF injection, LFI, SQLi, and other vulnerabilities;
  • check whether there is a WAF on the site;
  • send notifications to Slack, Discord, and Telegram;
  • look for URL parameters.
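
A sketch of a basic run; -d for the target domain and -r for recon-only mode are the flags I remember from the project README (confirm with ./reconftw.sh -h):

./reconftw.sh -d target.example -r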

https://github.com/six2dez/reconftw


Combines to automate attacks

Finally, here are two monster toolkits that bundle a variety of tools for automated discovery and attack of targets: from DNS recon and subdomain scanning to identifying web vulnerabilities, bypassing WAFs, and brute-forcing everything that can be brute-forced.

But I want to warn you right away: thoughtless use of such monsters is like giving a monkey a machine gun.


sn1per

An all-in-one toolkit by the well-known 1N3, author of BruteX, BlackWidow, Findsploit, and a huge collection of payloads for Burp Suite Intruder. It exists as a free Community Edition and a Pro version.

What it can do:

  • automatic basic collection of information (for example, whois, ping, DNS);
  • automatic launch of Google hacking requests against a given domain;
  • automatic enumeration of open ports;
  • automatic enumeration of subdomains and DNS information;
  • automatic launch of Nmap scripts on certain open ports;
  • automatic scanning of web applications for basic vulnerabilities;
  • automatic enumeration of all open services.

Sn1per downloads more than a gigabyte of additional utilities when installed on a regular distribution like Debian, and even on Kali, where many of the necessary things are already present, the installation takes quite a while.

Sn1per can work in several modes, from OSINT/RECON to carpet-bombing targets in Airstrike/Nuke mode (similar to HailMary in Armitage).
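
For orientation, a sketch of typical invocations. The -t, -f, and -m flags are from my memory of the README and are assumptions worth verifying before running anything this noisy:

# single target, default mode (flag names assumed)
sniper -t target.example
# a list of targets in an aggressive mode (flag names assumed)
sniper -f targets.txt -m airstrike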

https://github.com/1N3/Sn1per


TIDoS Framework

The TIDoS Framework is a comprehensive web application auditing framework. It is very flexible: you simply pick and run modules, much as in Metasploit. It has a lot of modules on board, from reconnaissance and information gathering to the exploitation of web vulnerabilities.

TIDoS has five main phases, divided into 14 sub-phases, which, in turn, consist of 108 modules.

The tool is quite sophisticated and requires tuning before use.

https://github.com/theInfectedDrake/TIDoS-Framework


Conclusion

Automation is good, but you still need to use your head. You will get the most out of thoughtful use of these utilities, a tailored selection of dictionaries and fuzz lists, and a systematic approach to bug hunting. The results won't be long in coming.

Naturally, this article does not cover every tool out there. These are, in my opinion, the most useful of the ones I work with.

That's all for today. Have a good hunting!
