Web Scraping How To
Date last modified: Tue Apr 30 2024 9:48 AM
About This Page
This page is a record of my adventures with what is sometimes known as web scraping. Confusingly, this goes by many names such as web/form acquisition/capture, web/internet/data harvesting/mining, or screen scraping.
As far as I am concerned, it is about extracting data from a website by repeatedly and automatically filling in a data-request web page (usually some sort of form) and recording the information contained in the web page that comes back; for instance, recording the date and price of a flight so that a complete dataset can be built of all flights between certain destinations. It also covers automatically sending information to a website to perform an action that would normally be done by entering data on a web page manually; an example would be sending an SMS message by automatically filling in a web form, where the text to be sent is supplied by another application which can thereby send SMS messages automatically.
There are a number of software packages that claim to do this in a more-or-less automated way, and also a number of tools which assist a programmer in writing a script - see the links below. I've come up with my own approach which is discussed below.
Bespoke Web-Scraping: an Outline Example
In general, web scraping is quite a complicated process and will be performed by a script (in my case, written for bash) that is specific to the website being scraped. The reason is that each website handles this sort of data exchange in its own way, so any solution has to be tailor-made.
A bespoke solution is likely to use a number of utilities, all of which are very powerful and none of which is particularly easy for the beginner. You will need a Linux command shell (if you only have a Windows computer you can install Cygwin to provide one), an understanding of how regular expressions work (excellent information can be found here), and familiarity with each utility.
A common pattern might be:
- Run Wireshark with a suitable filter and then point your browser at the source website. Carry out a typical action to retrieve an example dataset: for example, fill in or select the dates and airports for an example flight and click the 'submit' button to bring up the page of flight information. Now use Wireshark to analyze the data that your browser sent to the website to request the dataset. Typically this will consist of a cookie and some 'POST' data; your script will need to send the same in order to generate the returned pages.
- Write a bash script. First, this retrieves (using curl) a 'starting' web page, typically the one presenting the form that a user would normally complete manually. This page will often set a cookie, which is easily saved with curl.
- Begin a loop using curl to submit data back to the website in the specific format required (which you previously worked out by analyzing the output from Wireshark); this imitates the behaviour of a browser when the user has filled in the data on the first web page and then clicked 'submit'. Usually you use curl to send back the cookie that was previously saved, as this maintains the session state.
- The website responds with a web page from which you extract the data that you want - let us say the times, flight numbers and prices - and save it in a database (or a text file). Extraction is usually done with one or more of grep, awk and sed; you can make it easier with my form-extractor.
- Now loop again for the next date (say), until you reach the last date; a minimal bash sketch of this whole pattern follows below.
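To make this concrete, here is a minimal sketch of such a script. It is only an illustration: the URL, the form field names ('origin', 'destination', 'depdate'), the date format and the extraction pattern are all placeholders, to be replaced with whatever your own Wireshark analysis reveals.

```bash
#!/bin/bash
# Minimal web-scraping sketch - the URL, form field names, date format and
# extraction pattern below are hypothetical placeholders; substitute the
# values revealed by your own Wireshark analysis.

site="https://www.example-airline.com"   # placeholder website
cookiejar=$(mktemp)                      # temporary cookie store
outfile="flights.txt"

# 1. Fetch the 'starting' page and save any cookie it sets
curl -s -c "$cookiejar" -o /dev/null "$site/search"

# 2. Loop over a range of departure dates, POSTing the form data each time
for offset in $(seq 0 13); do
  depdate=$(date -d "+$offset day" +%d/%m/%Y)   # assumed date format
  curl -s -b "$cookiejar" \
       -d "origin=LGW&destination=FAO&depdate=$depdate" \
       -o page.html "$site/results"

  # 3. Extract the wanted data (pattern is illustrative only) and append it
  grep -o '<td class="flight">[^<]*</td>' page.html |
    sed 's/<[^>]*>//g' >> "$outfile"

  sleep 2   # be polite to the server
done

rm -f "$cookiejar" page.html
```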
Most scraping exercises will follow this general pattern but, whether because of your specific requirements or the idiosyncrasies of the source website, they are likely to have some unique aspects. Varying date formats, extraneous required fields ('sid' is a common one), and more complex looping parameters (including nested loops where you want to vary more than one parameter) are a few of the complications that arise.
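As a hedged illustration of that last point, a nested loop over two parameters might look like the following; the airport codes, the date format and the 'sid' value are all assumptions.

```bash
#!/bin/bash
# Nested-loop sketch: vary both departure airport and date.
# The airport codes, date format and 'sid' value are placeholders; use
# whatever your Wireshark capture shows the site actually expects.

for airport in LGW STN LTN; do
  for offset in $(seq 0 6); do
    depdate=$(date -d "+$offset day" +%Y-%m-%d)
    # here you would call curl as in the sketch above, e.g.:
    echo "would POST: origin=$airport&depdate=$depdate&sid=XYZ123"
  done
done
```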
Free Webpage Analysis Tools
- Screen-Scraper - a complete web-scraping GUI tool which has a free Basic version.
- Wireshark - an open source network protocol analyzer which runs under Windows, Linux and OS X. Very helpful for finding out what information your browser is sending back to a web page when it is looking up information, so that you can then imitate this behaviour with curl. After capturing data using the filter 'tcp port http', go to 'Analyze' / 'Follow TCP Stream'.
- Outwit Hub - an add-on for Firefox which can perform data capture and some manipulation/reformatting thereof, but only for incoming data.
- Scrapy - a scraping and web crawling framework written in Python - currently (August 2009) under very active development.
Free Command Line (CLI) Data Submission/Extraction Tools
- curl (for Linux or Cygwin) - can retrieve web pages, save cookies and send data back to websites. A less powerful alternative which can be sufficient in some situations is wget (a comparison page between the two can be found here).
- grep (for Linux or Cygwin) - very powerful tool for extracting data from files (i.e. downloaded web pages) based upon regular expressions.
- awk (for Linux or Cygwin) - another powerful tool for extracting or reformatting data; it reads the source data as a series of records broken up into fields (by default the record separator is a newline and the field separator is whitespace, but both can be changed).
- sed (for Linux or Cygwin) - another powerful tool for manipulating and extracting data, especially with the 's' (substitute) subcommand; a short example combining these tools follows this list.
- bash (for Linux or Cygwin) - a Linux command line shell and scripting language.
- form-extractor - extracts form tags from html.
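To give a flavour of how these tools combine, here is a hedged example of pulling prices out of a downloaded results page; the HTML structure and the 'price' class name are invented, so the patterns would need adjusting for a real site.

```bash
# Extract prices from a saved results page (the 'price' class is assumed),
# strip the HTML tags and currency symbol with sed, then use awk to report
# the cheapest fare found.
grep -o '<span class="price">[^<]*</span>' page.html |
  sed -e 's/<[^>]*>//g' -e 's/£//' |
  awk 'NR==1 || $1 < min {min=$1} END {print "cheapest fare: " min}'
```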
Non-Free Web Scrapers
- WebHarvy - WebHarvy can automatically scrape data (text & images) from web pages and save the scraped content in different formats. Single user license $99 (at January 2012).
- Screen-Scraper - a 'complete' web-scraping GUI tool which comes in a free Basic version as well as pay-for Professional ($549) and Enterprise ($2799) flavours (prices at January 2012).
- Helium Scraper - a commercial but inexpensive product ($80 single-user - January 2012) with free 10-day trial. Has a neat GUI for setting up and then extracting data.
- Automation Anywhere - a generic automation package but among its uses is web data extraction. Free trial version does not allow information to be saved, full program costs $695 (January 2012). They claim 'unparalleled service support - an extended team at your disposal'.
- Mozenda - this is probably the most ambitious attempt to create a user-friendly GUI web-scraping tool i.e. one that can be used by non-techies. You pay for a periodic license and then if you use more than the number of pages included for this period you pay additional fees - so it could get very expensive in practice. It is currently (January 2012) $99 per month including 5000 page downloads (there are other options too). You can sign up for a fully-operational free 14 day trial.
More Information About Web-Scraping
- Web scraping tutorial - introduction to web-scraping using php and the simplehtmldom library.
- The Data Mine - information about data mining generally (only some of which relates to web scraping)
Donation
I have provided this software free gratis and for nothing. If you would like to thank me with a contribution, please let me know and I will send you a link. Thank you!
My Other Sites
- TimeDicer - Onsite/offsite data backup for Windows (uses rdiff-backup)
- Finding a 4D Backup Solution
My Programs
Here is a selection of some (other) programs I have written, most of which run under GNU/Linux from the command line (CLI), are freely available and can be obtained by clicking on the links. Dependencies are shown; while in most cases the programs were written and tested on an x86-based Linux server, they should also run on a Raspberry Pi, and many can run under Windows using Windows Subsystem for Linux (WSL) or Cygwin. Email me if you have problems or questions, or if you think I could help with a programming requirement.
Backup Utilities
- TimeDicer - Onsite/offsite data backup for Windows (uses rdiff-backup) [ GNU/Linux & MS Windows©: 2008-20 ]
- rdiff-backup-regress - GNU/Linux script to regress an rdiff-backup archive. [ GNU/Linux: 2012-24 ]
Debian/Ubuntu kernel and LVM Utilities
- kernel-remove - GNU/Linux script which lists the installed GNU/Linux kernels in a Debian-based distro (e.g. Ubuntu) and can be used to remove an unwanted kernel and related packages, updating grub appropriately. [ GNU/Linux-Debian/Ubuntu: 2010-24 ]
- lvm-usage - GNU/Linux script to show available disk space and how it is used; run as cron job to warn if usage is above a set percentage. Provides additional information if LVM is in use. [ GNU/Linux: 2012-24 ]
- lvm-delete-snapshot - GNU/Linux script to remove LVM snapshot that has been left over by another process. [ GNU/Linux: 2012-21 ]
- netnames - GNU/Linux script shows current name, biosdevname and 'predictable name' of network device - helps with network device name scheme migration. [ GNU/Linux-Debian/Ubuntu: 2020-20 ]
- lv-convert2cache - GNU/Linux script to convert an existing LV into a cache LV using a smaller faster device as a cache. [ GNU/Linux: 2022-23 ]
Miscellaneous Programs
- sleepwalker - Windows© program which can be run from a remote machine to 'wake up' a Windows© machine behind a router, wait for it to start and then initiate a Remote Desktop session. [MS Windows©: 2008-22]
- numliststat - GNU/Linux program giving statistical value(s) for a piped-in list of numbers. [ GNU/Linux: 2022-24 ]
- relay-enforcer - GNU/Linux program enabling a postfix-based mail server relaying to Gmail to act on reports from Gmail about blocked emails. [ GNU/Linux: 2016-24 ]
- pdf-compress - GNU/Linux program to create smaller b/w pdf file from an original large pdf file, especially when original resulted from scanning. [ GNU/Linux: 2016-23 ]
- tiny-device-monitor - GNU/Linux program to test webpages (including password-protected) or machines to check they are live; use as a cron job for your own websites, for hardware presenting a webpage, or for any machines with a presence on your local LAN or on the internet. [ GNU/Linux: 2009-24 ]
- form-extractor - GNU/Linux program to extract form tags from a web page or downloaded file. [ GNU/Linux: 2012-20 ]
- mythic-dns-sync - GNU/Linux program to update DNS record at mythic-beasts.com to match local external ip. [ GNU/Linux: 2016-23 ]
- saynoto0870 - For UK, a GNU/Linux script which performs automated lookup of the www.saynoto0870.com database, finding cheap or free geographic number replacements for expensive non-geographic (087* or 084*) numbers. [ GNU/Linux: 2012-12 ]
- bind9-resolved-switch - GNU/Linux program for switching permanently between using bind9 or systemd-resolved as the system DNS resolver. [ GNU/Linux: 2016-24 ]
- unlock - GNU/Linux program for easily entering the decryption passphrase on a remote machine whose root filesystem is encrypted with dm-crypt+LUKS. [ GNU/Linux: 2017-18 ]
- wifi-updown - GNU/Linux program to take down wifi interface if there is a working wired interface (or restore wifi if not). [ GNU/Linux: 2018-23 ]
- routefix - GNU/Linux program to restore a default ip traffic route if there is none (e.g. after running wifi-updown). [ GNU/Linux: 2018-23 ]
- dutree - GNU/Linux program to show a tree-style list of files and directories at the specified location which are greater than the specified size (default 1GB). [ GNU/Linux: 2012-24 ]
- Accounts - Multi-business multi-currency accounting software, uses Access [MS Windows©: 1996-2024]
- Rents Program - Residential lettings/landlord front office program, with many special features for UK market [MS Windows©: 1991-2024]
If you have any questions or comments, please email Dominic at dominic@timedicer.co.uk.