🎯 WEEKLY BRIEF
Hope you all had a nice weekend. ᕕ( ᐛ )ᕗ
Today I am going to introduce six new programs for YOU to hack! Up to $250,000!
Three upcoming CTFs for this weekend. Good prizes, great practice.
For my recon buddies, we are breaking down GoSpider.
More surface area, more chances to get paid. 🤑
Let's start hacking.
🚀 TOP PROGRAMS TO HACK THIS WEEK
Here is a list of the top six programs to hack this week! ¯\_(ツ)_/¯
📅 Upcoming CTFs
| Name | Date | Prizes |
|---|---|---|
|  | 2/27 - 2/28 | TBD |
|  | 2/27 - 3/01 | 1st prize: $250 |
|  | 2/28 - 3/01 | 1st prize: 150,000 JPY |
🕷 GoSpider
(if you have arachnophobia don’t worry, there are no spiders involved.)
WHAT IS GoSpider?
GoSpider is a fast, lightweight web crawler written in Go, built specifically for reconnaissance and security testing. It can automatically discover URLs, endpoints, parameters, and hidden paths inside a target website.
Originally developed by the Jaeles Project, GoSpider is popular in bug bounty hunting and pen testing because it can extract links from:
HTML pages
JS files
sitemap.xml
robots.txt
Inline scripts
External JS sources
GoSpider can map out a site for you, helping you uncover hidden attack surfaces like API routes, admin panels, and endpoints. ◕‿↼
In caveman language:
GoSpider = link discovery recon
How Does It Work?
GoSpider works by sending HTTP requests to a target website and analyzing the responses to extract new URLs.
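This is not how GoSpider itself is implemented, just a minimal sketch of the core idea: pull link-bearing attributes out of an HTML response, then (in a real crawler) queue each result for the next round. The HTML below is made up for the demo.

```shell
# Made-up HTML standing in for a crawled page's response body.
response='<html><body>
<a href="https://example.com/login">Login</a>
<a href="/api/v1/users">Users API</a>
<script src="/static/app.js"></script>
</body></html>'

# Extract every href/src value. A crawler would queue these URLs
# and repeat the request/extract cycle up to the crawl depth.
links=$(echo "$response" | grep -oE '(href|src)="[^"]*"' | cut -d'"' -f2)
echo "$links"
```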
Installation
Make sure you have Go installed in order to install GoSpider.
```
sudo snap install go --classic
GO111MODULE=on go install github.com/jaeles-project/gospider@latest
```

Make sure your Go binary path is in your system PATH:

```
export PATH=$PATH:$(go env GOPATH)/bin
```

Using GoSpider ~( ˘▾˘~)
The simplest little cutest possible command
Crawl a target URL and print all discovered links:

```
gospider -s https://google.com
```

How to save output to a file
```
gospider -s https://google.com -o output/
```

Get multiple targets from one file.
Create a file with one URL per line (ex: targets.txt), then do:
```
gospider -S targets.txt -o results/
```

GoSpider flags
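A quick way to build that targets file from the shell (the hostnames here are placeholders):

```shell
# Write one URL per line into targets.txt (placeholder hosts).
printf '%s\n' \
  'https://example.com' \
  'https://sub.example.com' > targets.txt

cat targets.txt
# Then point GoSpider at it:
# gospider -S targets.txt -o results/
```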
Here are GoSpider's most useful flags for security work:
| Flag | Description |
|---|---|
| -s URL | Single target URL to crawl |
| -S file.txt | File containing multiple target URLs |
| -o directory | Output directory for results |
| -t N | Number of concurrent threads (default: 5) |
| -d N | Crawl depth (default: 1) |
| --js | Enable parsing of JavaScript files for URLs |
| --sitemap | Parse sitemap.xml for additional URLs |
| --robots | Parse robots.txt to find disallowed paths |
| -c N | Concurrent requests per host (default: 5) |
| -p proxy | Use a proxy (e.g., http://127.0.0.1:8080) |
| --blacklist regex | Exclude URLs matching this regex pattern |
| --include regex | Only crawl URLs matching this pattern |
| -H 'Header' | Add a custom HTTP header (repeatable) |
| --cookie 'k=v' | Send a cookie with every request |
| --timeout N | HTTP timeout in seconds (default: 10) |
| --no-redirect | Do not follow HTTP redirects |
| -q | Quiet mode: suppress banner and verbose output |
| --json | Output results in JSON format |
Deep Crawl with JavaScript Parsing
Increase crawling depth and extract URLs hidden in JS files.
```
gospider -s https://google.com -d 3 --js -t 10
```

Full Recon Mode
Enable sitemap and robots.txt parsing alongside JS extraction for max coverage.
```
gospider -s https://google.com --js --sitemap --robots -d 3 -o recon/
```

Route Traffic Through Burp Suite
Proxy all GoSpider traffic through Burp Suite to capture and analyze requests in real time.
```
gospider -s https://google.com -p http://127.0.0.1:8080 --js -d 2
```

Filter for Specific Endpoints
Use the --include flag to focus only on API endpoints.

```
gospider -s https://google.com --include "/api/" --js -d 3
```

Exclude Static Assets / Reduce Noise
Filter out images, fonts, and stylesheets so results stay SQUEAKY clean. 🧼
```
gospider -s https://google.com --blacklist "\.(jpg|jpeg|png|gif|css|woff|svg|ico)$"
```

JSON Output for Scripted Workflows
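To see what that regex actually drops, here's a stand-alone demo using grep -vE, which excludes matches the same way --blacklist does inside GoSpider. The URLs are made up:

```shell
# Sample crawl output (made up).
urls='https://example.com/app.js
https://example.com/logo.png
https://example.com/api/users
https://example.com/style.css'

# -vE drops anything matching the blacklist pattern,
# leaving only the URLs worth testing.
kept=$(echo "$urls" | grep -vE '\.(jpg|jpeg|png|gif|css|woff|svg|ico)$')
echo "$kept"
```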
Use JSON output when integrating GoSpider into automated pipelines:
```
gospider -s https://google.com --json -q | jq '.output'
```

💡 Tips for GoSpider
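JSON mode emits one object per line. The field names in this sample are an assumption about GoSpider's output shape (check your version's actual output); the sed one-liner pulls out the discovered URL even when jq isn't installed:

```shell
# One JSON object per discovered URL. These field names are an assumption
# about GoSpider's JSON shape; verify against your version's output.
line='{"input":"https://google.com","source":"body","type":"url","output":"https://google.com/search"}'

# Grab just the "output" field with sed (handy when jq isn't available).
url=$(echo "$line" | sed -E 's/.*"output":"([^"]*)".*/\1/')
echo "$url"
```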
Scope Management:
Define your scope clearly before crawling. Use --include to restrict GoSpider to in-scope domains/paths. Crawling out-of-scope systems can have LEGAL CONSEQUENCES.
Performance Tuning
For big ol' targets, increase threads with -t 20 and concurrency with -c 10. Watch for rate limiting: if the server starts returning 429s, DIAL it back. A proxy like Burp can help you monitor this in real time.
Chain w/ Waybackurls
Combine GoSpider with waybackurls to get both live and historical URL coverage, e.g.:

```
gospider -s https://google.com -q | tee live_urls.txt
waybackurls google.com >> live_urls.txt
cat live_urls.txt | sort -u | httpx -silent
```
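A small demo of why the sort -u step matters. The two files below stand in for gospider (live) and waybackurls (historical) output, with made-up contents; the overlapping URL gets collapsed to a single entry:

```shell
# Stand-ins for live (gospider) and historical (waybackurls) results.
printf '%s\n' 'https://example.com/' 'https://example.com/login' > gospider_demo.txt
printf '%s\n' 'https://example.com/login' 'https://example.com/old-admin' > wayback_demo.txt

# Merge and dedupe: /login appears in both lists but survives only once.
cat gospider_demo.txt wayback_demo.txt | sort -u
```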