Bash script to spider a site, follow links, and fetch URLs, with some filtering. A list of URLs will be generated and saved to a text file.
- Download the script and save it to the desired location on your machine.
- You'll need `wget` installed on your machine in order to continue. To check whether it's already installed (if you're on Linux or a Mac, chances are you already have it), open Git Bash, Terminal, etc. and run the command `wget` (a slightly more robust check is sketched after these steps). If you receive an error message or `command not found`, you're probably on Windows. Here are the Windows installation instructions:
  - Download the latest wget binary for Windows from https://eternallybored.org/misc/wget/ (available as a zip with documentation, or just an exe; I'd recommend just the exe).
  - If you downloaded the zip, extract all (if the Windows built-in zip utility gives an error, use 7-Zip). If you downloaded the 64-bit version, rename the `wget64.exe` file to `wget.exe`.
  - Move `wget.exe` to `C:\Windows\System32\`.
- Open Git Bash, Terminal, etc. and run the `fetchurls.sh` script: `bash /path/to/script/fetchurls.sh`
- You will be prompted to enter the full URL (including the HTTPS/HTTP protocol) of the site you would like to crawl:

  ```
  #
  # Fetch a list of unique URLs for a domain.
  #
  # Enter the full URL ( http://example.com )
  # URL:
  ```
- You will be prompted to enter the location (directory) where you would like the generated results to be saved (defaults to Desktop on Windows):

  ```
  #
  # Save file to location
  # Directory: /c/Users/username/Desktop
  ```
- You will then be prompted to change or accept the name of the output file (simply press enter to accept the default filename):

  ```
  #
  # Save file as
  # Filename (no extension): example-com
  ```
- When complete, the script will show a message and the location of your output file:

  ```
  #
  # Fetching URLs for example.com
  #
  #
  # Finished with 1 result!
  #
  # File Location:
  # /c/Users/username/Desktop/example-com.txt
  #
  ```
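As noted in the steps above, the script needs `wget` on your `PATH`. A quick way to verify that, plus an optional way to make `fetchurls.sh` directly executable, is sketched below (the script path is a placeholder):

```sh
# Print the path to wget if it is installed (prints nothing if it is missing)
command -v wget

# Print the installed version as an extra sanity check
wget --version

# Optional: make the script executable so the "bash" prefix isn't needed
# (assumes fetchurls.sh starts with a bash shebang)
chmod +x /path/to/script/fetchurls.sh
/path/to/script/fetchurls.sh
```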
The script will crawl the site and compile a list of valid URLs into a text file saved to the directory you chose (your Desktop by default).
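Under the hood, the crawling is handled by `wget`'s spider mode. The snippet below is only a rough sketch of that general approach, not the script's actual code: it spiders a placeholder site, logs every URL `wget` reports, and collects the unique URLs into a text file.

```sh
# Rough sketch only -- fetchurls.sh layers the interactive prompts and the
# filtering described in the notes below on top of this basic idea.
# "https://example.com" and the file paths are placeholders.
wget --spider --recursive --level=inf --no-verbose --output-file=spider.log "https://example.com"

# Pull every URL wget reported out of the log, de-duplicate, and save the list
grep -oE 'https?://[^ ]+' spider.log | sort -u > ~/Desktop/example-com.txt
```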
- To change the default file output location, edit line #7 (or simply use the interactive prompt). Default: `~/Desktop`
- Ensure that you enter the correct protocol and subdomain for the URL, or the output file may be empty or incomplete. The script will, however, attempt to follow the first HTTP redirect, if found. For example, entering the incorrect `http://` protocol for https://adamdehaven.com will automatically fetch the URLs for the HTTPS version.
- The script will successfully run as long as the target URL returns status `HTTP 200 OK`.
- The script, by default, filters out the following file extensions: `.css`, `.js`, `.map`, `.xml`, `.png`, `.gif`, `.jpg`, `.JPG`, `.bmp`, `.txt`, `.pdf`
- The script filters out several common WordPress files and directories, such as: `/wp-content/uploads/`, `/feed/`, `/category/`, `/tag/`, `/page/`, `/widgets.php/`, `/wp-json/`, `xmlrpc`
- To change or edit the regular expressions that filter out some pages, directories, and file types, you may edit lines #35 through #44. Caution: if you're not familiar with `grep` and regular expressions, you can easily break the script. The kind of exclusion pattern involved is sketched below.
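As a rough illustration of that kind of filtering (not the script's exact expressions), a chain of `grep -v` exclusions over a plain list of URLs might look like this; `urls.txt` and `filtered-urls.txt` are placeholder filenames:

```sh
# Illustrative only: drop URLs ending in the excluded file extensions,
# then drop the common WordPress paths listed above.
# -i makes the extension match case-insensitive, covering both .jpg and .JPG.
grep -viE '\.(css|js|map|xml|png|gif|jpg|bmp|txt|pdf)$' urls.txt \
  | grep -vE '/(wp-content/uploads|feed|category|tag|page|wp-json)/|/widgets\.php|xmlrpc' \
  > filtered-urls.txt
```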