Bug bounties have become a popular way for organizations to identify and fix vulnerabilities in their systems. These programs offer financial rewards to researchers and hackers who can find and report security issues. However, managing a bug bounty program can be time-consuming and resource-intensive. In this article, we will look at how to automate your bug bounties to make the process more efficient.
- Use a bug bounty platform
One way to automate your bug bounties is to use a bug bounty platform. These platforms provide a centralized location for researchers to report vulnerabilities, and they often have tools to help you manage and triage submissions. Some platforms also offer features such as automated payment processing and reporting.
- Set up automated testing
Automated testing can help you identify vulnerabilities before they are exploited. There are a number of tools available that can perform automated testing of your systems, including web application scanners, network scanners, and configuration checkers. These tools can be run on a regular basis to ensure that your systems are secure.
- Use automation to triage submissions
Once a researcher has submitted a vulnerability report, it needs to be triaged to determine its severity and whether it needs to be fixed. This can be a time-consuming process, especially if you receive a large number of submissions. By automating this process, you can save time and ensure that submissions are promptly handled.
- Set up automated notifications
Automated notifications can help you keep track of new submissions and ensure that they are promptly addressed. You can use tools such as Slack or email to set up notifications that are triggered when a new submission is received.
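For example, a minimal notification hook could be a few lines of Python that post to a Slack incoming webhook whenever a new submission arrives. A rough sketch (the webhook URL and submission fields below are placeholders):

import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder incoming webhook URL

def notify_new_submission(title, severity, reporter):
    # Post a short alert to a Slack channel when a new submission is received
    message = f"New submission: {title} (severity: {severity}) from {reporter}"
    requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)

notify_new_submission("Stored XSS in profile page", "high", "researcher42")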
- Use machine learning to prioritize submissions
Machine learning algorithms can be used to analyze submission data and identify patterns that may indicate high-priority vulnerabilities. By prioritizing submissions in this way, you can focus your efforts on the most pressing issues.
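As a rough illustration of the idea (hypothetical data and a deliberately simple model), a text classifier trained on past submissions could score new reports so that the likely high-priority ones are triaged first:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical submissions and whether each turned out to be high priority
reports = [
    "SQL injection in login form dumps the user table",
    "Missing security headers on the marketing site",
    "Remote code execution via unrestricted file upload",
    "Clickjacking on a static help page",
]
high_priority = [1, 0, 1, 0]

# TF-IDF features plus logistic regression is a simple, explainable baseline
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, high_priority)

# Higher probability means the report should be triaged sooner
new_report = ["Unauthenticated RCE in the admin panel"]
print(model.predict_proba(new_report)[0][1])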
In conclusion, automating your bug bounties can save time and resources, and help you more efficiently identify and fix vulnerabilities in your systems. By using a bug bounty platform, setting up automated testing and notifications, and using machine learning to prioritize submissions, you can streamline your bug bounty program and ensure that your systems are secure.
Build a bug bounty automation framework
In this article, I am going to walk you through every attempt I have made to build a bug bounty automation framework, including the wins and the failures. Then I'm going to tell you exactly how I plan to build my next one.
Attempt #1: Bash
This is how a lot of my tools start. I come up with an idea that sounds fairly simple in theory, drastically underestimate the amount of work involved, and then try to make it happen in a bash script. My bug bounty automation project was no different.
The result ended up being a 100-line bash file that followed a pattern like this:
#!/bin/bash

# Get subdomains and check for HTTP responses
cat rootdomains.txt | subfinder | tee subdomains.txt | httpx | tee httpx.txt

# Brute force all subdomains
cat httpx.txt | while read url; do
    dirsearch.py -e php,aspx,asp,txt,bak -u "$url" | tee "./bruteforce/$url-dirsearch.txt"
done

# Check all domains for open git repositories
cat httpx.txt | while read url; do
    if curl -s "$url/.git/config" | grep -q "\[core\]"; then
        echo "Open .git repository at $url" | ./notify.sh
    fi
done

# Do another thing
cat httpx.txt | while read url; do
    : # do another thing
done

Etc…
I ran the script overnight and struggled to sleep. I was excited to wake up and see the results! I had high hopes that I would be retiring in a week, maybe two, due to the piles of cash bounties that would be waiting for me in my vulns.txt file the next morning. Once the bounty payments came in I could finally spend all of my waking hours eating ramen and bingeing Antiques Roadshow with my cats.
In reality, I woke up the following morning to find that my bash script had errored out somewhere along the line and found zero bugs. I quickly realized that this approach had a few systemic issues:
- It was too slow
- Storing data in text files is not ideal
- Any failure in any tool killed the whole script
Back to the drawing board…
Attempt #2: Django
To solve the issues I faced with the bash script, I decided to start again using a proper framework. At this stage, the framework that I was most comfortable with was Laravel PHP, but I ended up going with Django because most of the hacking tools that I was using at that time were written in Python.
Data Storage
The first problem to solve was how to structure the data storage. I ended up storing everything in a relational database (PostgreSQL), utilizing the Django ORM. I set up relationships between the objects to mimic how they work in real life. It looked something like this:
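The exact schema isn't important, but a minimal sketch of this kind of model layout in Django (model and field names here are illustrative, not necessarily the ones I used) gives the general idea:

from django.db import models

class Program(models.Model):
    name = models.CharField(max_length=255, unique=True)
    created = models.DateTimeField(auto_now_add=True)
    modified = models.DateTimeField(auto_now=True)

class RootDomain(models.Model):
    program = models.ForeignKey(Program, on_delete=models.CASCADE, related_name="rootdomains")
    domain = models.CharField(max_length=255)
    created = models.DateTimeField(auto_now_add=True)
    modified = models.DateTimeField(auto_now=True)

class Subdomain(models.Model):
    rootdomain = models.ForeignKey(RootDomain, on_delete=models.CASCADE, related_name="subdomains")
    hostname = models.CharField(max_length=255)
    created = models.DateTimeField(auto_now_add=True)
    modified = models.DateTimeField(auto_now=True)

class Port(models.Model):
    subdomain = models.ForeignKey(Subdomain, on_delete=models.CASCADE, related_name="ports")
    number = models.IntegerField()

class Vulnerability(models.Model):
    subdomain = models.ForeignKey(Subdomain, on_delete=models.CASCADE, related_name="vulnerabilities")
    title = models.CharField(max_length=255)
    severity = models.CharField(max_length=20)
    created = models.DateTimeField(auto_now_add=True)

With foreign keys chaining Program → RootDomain → Subdomain, a question like "which subdomains have port 3389 open?" becomes a one-liner: Subdomain.objects.filter(ports__number=3389).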
With the relationships set up this way, it would be easy to query the database to answer questions such as:
- Which program does this URL belong to?
- Which subdomains have port 3389 open?
- What vulnerabilities have been discovered on sub.domain.com?
Adding created and modified timestamps to each persisted instance is also trivial with Django (via DateTimeField's auto_now_add and auto_now options, as in the sketch above), so you can ask time-based questions such as:
- What new subdomains have been discovered in the last hour?
- What new open ports have been discovered this week?
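For example, the first of those questions is a one-line ORM query, assuming the timestamp fields from the model sketch above:

from datetime import timedelta
from django.utils import timezone
from myapp.models import Subdomain  # hypothetical app path

# Subdomains first seen within the last hour
new_subdomains = Subdomain.objects.filter(created__gte=timezone.now() - timedelta(hours=1))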
Code Structure
Django is typically considered a web framework. My original intention was to build out a frontend web application to manage the automation tasks, but I quickly realized that Django also works very well as a modular framework for command-line applications thanks to its custom management commands. Instead of writing a whole web frontend, I ended up writing each piece of the automation as a custom management command so that it could be easily executed from the command line, scheduled with cron, wrapped in a bash script, and so on.
For example, I wrote a management command called "subfinder", which would pull all of the root domains out of the database, run Project Discovery's subfinder against them, and store the resulting subdomains back in the database. To run it, I could just type something like this into the terminal:
# run against all root domains in the database
django-admin subfinder

# run against all of Tesla's domains from the database
django-admin subfinder -program tesla

# run against example.com
django-admin subfinder -rootdomain example.com
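I haven't reproduced the original code here, but a stripped-down management command following the same pattern (assuming the models sketched earlier and that subfinder is on the PATH) might look something like this:

# myapp/management/commands/subfinder.py (illustrative paths and model names)
import subprocess
from django.core.management.base import BaseCommand
from myapp.models import RootDomain, Subdomain

class Command(BaseCommand):
    help = "Run subfinder against root domains and store the resulting subdomains"

    def add_arguments(self, parser):
        parser.add_argument("-rootdomain", required=False, help="Limit the scan to a single root domain")

    def handle(self, *args, **options):
        rootdomains = RootDomain.objects.all()
        if options["rootdomain"]:
            rootdomains = rootdomains.filter(domain=options["rootdomain"])

        for rootdomain in rootdomains:
            # Run subfinder and read one subdomain per line of output
            result = subprocess.run(
                ["subfinder", "-d", rootdomain.domain, "-silent"],
                capture_output=True, text=True,
            )
            for hostname in result.stdout.splitlines():
                Subdomain.objects.get_or_create(rootdomain=rootdomain, hostname=hostname.strip())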
I wrote a couple of basic modules in this way, but quickly realized that the real value would be developing a huge set of modules to discover many different types of vulnerabilities. Keep in mind that this was years before Project Discovery’s Nuclei tool was released, so at this time there were not many people bug hunting at scale, and there was no awesome open-source central repository of vulnerability detection signatures that I knew about.
This was when I started collaborating with codingo_ and sml555_ to build out a stack of vulnerability detection modules. Finally we started seeing the bounties rolling in!
But we had hit another issue… It was slow.
Speed and Scaling
The problem was that we were performing multiple scans against millions of subdomains. Let's say that for each subdomain, I want to run some scans that take a total of 1 minute. That would be very minimal scanning per host, but at 2 million subdomains run sequentially, it works out to roughly 2 million minutes – nearly 4 years. We really wanted to bring this down to under 1 hour.
The first thing we tried was using multiple threads. For example, instead of just running this command, which would run subfinder against every root domain, one after another:
django-admin subfinder
We could use something like GNU parallel, which would allow us to run subfinder against multiple domains concurrently. For example:
cat root-domains.txt | parallel -j 20 "django-admin subfinder -rootdomain {}"
This is also where the idea for Interlace was born. Multithreading in this manner drastically reduced the amount of time taken to complete tasks, but it created a new problem. Now that we were running 20 instances of subfinder concurrently, the VPS we were using was running out of RAM and CPU. We could solve this temporarily by using a more powerful VPS (vertical scaling), but even an extremely powerful VPS was not powerful enough to perform all of the tasks that we needed within the timeframe that we were expecting.
This is when we had the idea to spin up lots of low-powered instances to perform tasks, and they’d all report their results back to a central data source. This is called horizontal scaling, although I didn’t know that at the time.
In our first attempt to implement this, we (naively) created a "Job" object that sat in a PostgreSQL database. We then created a worker client that would continuously retrieve a job, execute it, then write the output back to the database. As soon as we kicked it off, we realized that there were race conditions everywhere: most of the workers ended up executing the same jobs at the same time, and every job was being executed 10+ times. To counteract this, we tried to introduce database locking, but that made the process so slow that we might as well have gone back to our vertical scaling method.
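To make the race condition concrete, here is a simplified sketch of the fetch-then-claim pattern we were using (not the actual code; Job is a hypothetical model). Two workers running this at the same time can both read the same pending row before either one marks it as claimed:

def get_next_job():
    # Worker A and worker B can both read the same pending job here...
    job = Job.objects.filter(status="pending").first()
    if job:
        # ...and both "claim" it a moment later, so the job runs twice (or more)
        job.status = "running"
        job.save()
    return job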
In my frustration, I ended up joining a Django community online and explaining the dilemma to a group of total strangers. One of them introduced me to the concept of queues and message brokers, and said that RabbitMQ might solve our problems.
I implemented the same queue concept using RabbitMQ instead of PostgreSQL, and it worked! We scaled up to 100 workers and suddenly we were able to perform recon and vulnerability scanning across all bug bounty assets in a fraction of the time. Together, we found a lot of bugs this way because we were among the first to implement bug bounty hunting at scale. Even scanning for low-hanging fruit was profitable because we were always one of the first parties to discover when hosts fell into a vulnerable state. All of the workers were $5 VPSs, and we had a few more powerful servers for the core functionality. The infrastructure ended up looking something like this.
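Our implementation lived inside the Django project, but the core worker loop is easy to sketch. Here is roughly what the consuming side looks like in Python with the pika library (host and queue names are placeholders; in our case the results went back into the database):

import subprocess
import pika

# Connect to RabbitMQ and declare a durable "jobs" queue
connection = pika.BlockingConnection(pika.ConnectionParameters(host="broker.example.com"))
channel = connection.channel()
channel.queue_declare(queue="jobs", durable=True)

def handle_job(ch, method, properties, body):
    # Each message body is a shell command to execute
    output = subprocess.run(body.decode(), shell=True, capture_output=True, text=True).stdout
    print(output)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # acknowledge only after the job finishes

# Hand each worker one unacknowledged job at a time
channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue="jobs", on_message_callback=handle_job)
channel.start_consuming()

The key difference from the database approach is that the broker delivers each job to exactly one consumer, so the race conditions disappear without any explicit locking.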
Our system wasn’t perfect though:
- The code base was monolithic and time-consuming to set up
- The tool did not integrate well with other tools without developing custom modules, which was time-consuming
- It was difficult to detect and monitor when workers were failing to complete jobs
Ultimately, the three of us all ended up working at Bugcrowd, so our bug hunting and automation took a back seat, and we ended up pulling it offline when we had no time to focus on it anymore.
Attempt #3: Golang and the Unix Philosophy
By this point, automating the discovery of low-hanging fruit had become a very common tactic among bug bounty hunters – suddenly every man and his dog had their own bug bounty automation. Because of this, my focus moved to either manual hacking or researching popular services to uncover misconfigurations that might result in widespread vulnerabilities. For the latter, a good scalable system was still necessary for discovering all hosts running a particular technology. For example, say you find a common misconfiguration in a particular WordPress plugin – having good automation was very useful because:
- I already had a dynamic, up-to-date list of in-scope targets on hand at all times in the database
- It was trivial to develop a new module to detect the presence of the WordPress plugin on all targets
Scanning for low-hanging fruit was no longer very profitable for me, but I still found automation extremely useful for testing custom payloads and maintaining an up-to-date list of targets, including open ports, utilized technologies, and so on. The problem was that the existing Django setup was quite clunky, especially when I wanted to do something custom quickly. Rather than having a bunch of separate tools for quickly performing custom tasks, I had one gigantic tool that did everything and didn't integrate nicely with other tools.
I often found myself reverting to manually running tools that follow the Unix philosophy more closely, like httpx and nuclei. This made it easy to build a custom workflow by simply piping tools into one another. Unfortunately, hacking in this way meant that I lost the ability to organize my data in a relational database and scale out over multiple hosts.
This is when I started forming a plan for a new bug bounty automation setup. I wanted to combine the power of the Unix philosophy with the power of horizontal scaling and relational databases. The new system would consist of four completely independent parts. Each part could be used on its own, or as part of the larger system by chaining tools together:
- A system to store and retrieve data from the database
- A system to manage scaling out jobs
- A system to scan for vulnerabilities
- A system to send notifications when something is discovered
Thankfully, good solutions for numbers 3 and 4 already existed (Project Discovery's Nuclei and Notify). I just needed to build custom solutions for 1 and 2 – so that's what I did!
Hakstore
First I built "Hakstore", a REST API server and corresponding client written in Golang to interact with the database. It allows you to quickly create, edit, and delete data in the relational database. For example, this command would list all subdomains associated with the Tesla program:
hakstore subdomains list -program tesla
This command would take all subdomains from a file, add them to the database, and associate them with the root domain "tesla.com":
hakstore subdomains import -f subs.txt -rootdomain tesla.com
This command would delete a program from the database:
hakstore programs delete -id tesla
Hakscale
Next I built a command-line tool designed to scale out shell commands to many systems, which I called "hakscale". If you're familiar with Interlace, it works in basically the same way, except that it distributes the commands over multiple systems instead of multiple threads on one system. I wrote it in Golang and used Redis as the message broker to distribute the commands.
Executing the following command would get a list of URLs from urls.txt, then create a command for each URL, replacing the _url_ placeholder with the actual URL. All of these commands are then sent to a queue on the Redis server.
hakscale push -p "url:./urls.txt" -c "echo _url_ | nuclei -nc --config /opt/config.yml" -t 600
Then, we have a bunch of “worker” VPSs each running the following command:
hakscale pop -t 20
This command creates 20 threads; each thread constantly pulls jobs from the "jobs" Redis queue, executes them, and sends the results back to a "results" queue. The results queue is monitored by the original VPS that pushed the jobs – when it sees a result in the queue, it prints it to the terminal. For the visual learners, here's a diagram:
The coolest thing about this tool is that it feels like you are executing commands on your local machine, but you’re actually executing those commands on hundreds of workers which makes it extremely fast.
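hakscale itself is written in Golang, but the underlying push/pop pattern is easy to sketch in Python using Redis lists (the host, queue names, and command template below are placeholders):

import subprocess
import redis

r = redis.Redis(host="redis.example.com", port=6379)

def push(urls, template="echo _url_ | nuclei -nc --config /opt/config.yml"):
    # The "push" side: turn each URL into a concrete command and queue it
    for url in urls:
        r.lpush("jobs", template.replace("_url_", url))

def pop_forever():
    # The "pop" side (runs on every worker): pull a job, execute it, return the output
    while True:
        _, command = r.brpop("jobs")
        output = subprocess.run(command.decode(), shell=True, capture_output=True, text=True).stdout
        r.lpush("results", output)

The real tool adds worker threads, timeouts, and result collection on the pushing side, but this is the essence of it.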
Putting the Components Together
Now that I have decoupled each component of the automation, I have a lot more flexibility. Rather than being confined to the functionality of the custom Django management commands, I can now generate literally any set of bash commands in seconds, and they will be executed at scale.
Still, it isn't a perfect setup – in fact, I recently pulled all of the infrastructure offline:
- I had 100 worker VPSs running 24/7, costing me $500+ per month even though I only used them for a fraction of that time.
- It was difficult to tell when commands were failing.
- If anything went wrong (for example, if one result was never pushed back to the queue due to a temporary network failure) I was never alerted, and I had no way of troubleshooting what happened.
Attempt #4: Cloud Native
Recently, I got my AWS Cloud Practitioner and Solutions Architect Associate certifications. While I was studying for the exams, I learned about a lot of AWS offerings that I wish I had known about sooner. It would have saved me so much time and frustration. If there are any solutions architects reading this, I bet you have been cringing constantly for the last ten minutes, reading about my dodgy solutions.
Now I feel as though I have a decent grasp on how to set up a much better automation system utilizing AWS. I haven’t actually set this automation system up yet, but I plan to do it as a weekend project sometime soon, and I’ve got it planned out in my head. Here’s my game plan.
- Workers are built as Docker containers, managed by AWS Fargate and AWS Elastic Container Service (ECS).
- Workers scale automatically based on the number of items in the job queue. This will ensure that a large number of workers are available when I need them, but that they all die when I don't need them anymore. This could potentially be set up very quickly by utilizing an Elastic Beanstalk worker environment.
- EFS will be used to share files (config files, tools, etc.) with all containers.
- Amazon Simple Queue Service (SQS) is used to manage jobs. A request-response system is used to send jobs and receive results. A dead letter queue (DLQ) is utilized to discover jobs that are failing.
- Amazon Aurora is used to store relational data.
My hakstore tool should already play nicely with Amazon Aurora, since Aurora is PostgreSQL-compatible. Now I just need to modify hakscale to work with SQS instead of Redis, which shouldn't be too hard.
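I haven't built it yet, so take this as a sketch rather than working code, but the worker loop against SQS would look roughly like this with boto3 (queue URLs and region are placeholders):

import subprocess
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
JOBS_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"        # placeholder
RESULTS_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/results"  # placeholder

while True:
    # Long-poll for up to 10 jobs at a time
    response = sqs.receive_message(QueueUrl=JOBS_QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for message in response.get("Messages", []):
        command = message["Body"]
        output = subprocess.run(command, shell=True, capture_output=True, text=True).stdout
        sqs.send_message(QueueUrl=RESULTS_QUEUE_URL, MessageBody=output or "(no output)")
        # Delete only after success; jobs that keep failing get redriven to the dead letter queue
        sqs.delete_message(QueueUrl=JOBS_QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])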
Once I set this up, I think I will finally have hit my peak in bug bounty automation. It will be cost-effective, flexible, reliable, and performant.
I’ll let you know how it goes in a future article, wish me luck!
Conclusion
If you are thinking about building your own bug bounty automation, I would highly recommend first educating yourself about what technologies are already available, either as open-source projects or as cloud services. This will save you a lot of money, along with months of crippling frustration and imposter syndrome.
Also – planning ahead (both infrastructure and code) is important. When you’re working on a personal project, it’s easy to just jump in and start coding, but more often than not this results in roadblocks and unmaintainable code.
Lastly, I'm always keen to chat about bug bounty automation. If you have any questions, feel free to hit me up on Twitter, Instagram, or Facebook.