
Web Scraping How To

Date last modified: Tue Apr 30 2024 9:48 AM

About This Page

This page is a record of my adventures with what is sometimes known as web scraping. Confusingly, this goes by many names such as web/form acquisition/capture, web/internet/data harvesting/mining, or screen scraping.

For my purposes, it means extracting data from a website by repeatedly and automatically filling in a data-request web page (usually some sort of form) and recording the information contained in the web page that comes back; for instance, recording the date and price of a flight so that a complete dataset can be built of all flights between certain destinations. It also covers automatically sending information to a website to perform an action that would normally be done by entering data on a web page manually; an example would be sending an SMS message by automatically filling in a web form - the text to be sent might be supplied by another application, which can by this means send SMS messages automatically.

There are a number of software packages that claim to do this in a more-or-less automated way, and also a number of tools which assist a programmer in writing a script - see the links below. I've come up with my own approach which is discussed below.

Bespoke Web-Scraping: an Outline Example

In general, web scraping is quite a complicated process and will be performed by a script (in my case, written for bash) that is specific to the website that is being scraped. The reason for this is that websites usually perform this sort of data processing in unique ways and so any solution has to be tailor-made.

A bespoke solution is likely to use a number of utilities, all of which are very powerful and none of which is particularly easy for the beginner. You will need to use a Linux command shell (if you only have a Windows computer you can install Cygwin to provide one), to understand how regular expressions work (excellent information can be found here), and to become familiar with each utility.
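For example (purely illustrative - the HTML fragment and the pattern are invented), a regular expression applied with grep can pull a price out of a snippet of HTML:

    # purely illustrative: extract a price such as £123.45 from a line of HTML
    echo '<span class="price">£123.45</span>' | grep -oE '£[0-9]+\.[0-9]{2}'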

A common pattern might be:

  1. Run Wireshark with a suitable filter and then point your browser at the source website. Carry out a typical action to retrieve an example dataset: for example, fill in / select the dates and airports for an example flight and click on the 'submit' button to bring up the page of flight information. Now use Wireshark to analyze the data that your browser sent to the website in order to request the dataset. Typically this will consist of a cookie and some 'POST' data; your script will need to send the same in order to retrieve the result pages.
  2. Write a bash script. First, this retrieves (using curl) a 'starting' web page, which might present the form that a user would normally complete manually. This page will often provide a cookie, which is easily saved with curl.
  3. Begin a loop using curl to submit data back to the website in the specific format required (which you previously worked out by analyzing the output from Wireshark); this imitates the behaviour of a browser when the user has filled in the data on the first web page and then clicks 'submit'. Usually you use curl to send back the cookie that was previously saved, as this maintains the session state.
  4. The website responds with a web page from which you extract the data that you want - let us say the times, flight numbers and prices - and save it in a database (or a text file). Extraction is usually done with one or more of grep, awk and sed; you can make it easier with my form-extractor.
  5. Now loop again for (say) the next date, until you reach the last date; a minimal bash sketch of this whole pattern is given below.
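
To make the pattern concrete, here is a minimal bash sketch of steps 2-5. Everything specific in it - the URL, the form field names ('from', 'to', 'depart') and the HTML pattern used for extraction - is invented for illustration; in practice you would substitute whatever your Wireshark analysis showed your target site actually expects and returns.

    #!/bin/bash
    # Hypothetical sketch only: the URL, form field names and the HTML pattern
    # below are invented; replace them with whatever Wireshark showed your
    # target site actually uses.
    site="https://www.example-airline.invalid"
    jar=$(mktemp)                       # cookie jar for the session
    out="flights.csv"

    # Step 2: fetch the 'starting' page so the site sets its session cookie
    curl -s -c "$jar" "$site/search" >/dev/null

    # Steps 3-5: loop over departure dates, one POST per date
    for d in $(seq 0 13); do
        depdate=$(date -d "$d days" +%Y-%m-%d)
        # Step 3: submit the form data, sending back the saved cookie
        page=$(curl -s -b "$jar" \
            --data-urlencode "from=LGW" \
            --data-urlencode "to=FAO" \
            --data-urlencode "depart=$depdate" \
            "$site/results")
        # Step 4: extract flight number, time and price from the returned page
        # (grep picks out the relevant cells, sed strips the tags, paste joins
        # each group of three lines into one comma-separated record)
        echo "$page" \
            | grep -oE '<td class="(flight|time|price)">[^<]*</td>' \
            | sed 's/<[^>]*>//g' \
            | paste -d, - - - \
            | sed "s/^/$depdate,/" >>"$out"
    done
    rm -f "$jar"

The result is a simple CSV file with one row per flight, which can later be imported into a spreadsheet or database; for debugging, adding -v to the curl calls shows exactly what headers and data are being sent.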

Most scraping exercises will follow this general pattern, but, whether because of your specific requirements or the idiosyncrasies of the source website, they are likely to have some unique aspects. Varying date formats, extraneous required fields ('sid' is a common one), and more complex looping parameters (including nested loops where you want to vary more than one parameter) are a few of the complications that arise; the sketch below shows how the last two might look.
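As a hedged illustration (the routes and the date format are again invented), the date loop from the sketch above simply becomes the inner loop of a nested pair, and converting the date to whatever format the site insists on is just another call to date:

    # Hypothetical sketch: vary the route as well as the date (nested loops)
    for route in "LGW FAO" "LGW ALC" "STN PMI"; do
        read -r from to <<<"$route"
        for d in $(seq 0 13); do
            depdate=$(date -d "$d days" +%Y-%m-%d)
            # some sites insist on a different date format, e.g. dd/mm/yyyy
            sitedate=$(date -d "$depdate" +%d/%m/%Y)
            # ... same curl/extraction steps as above, using $from, $to, $sitedate
        done
    done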

Free Webpage Analysis Tools

Free Command Line (CLI) Data Submission/Extraction Tools

Non-Free Web Scrapers

More Information About Web-Scraping

Donation

I have provided this software free gratis and for nothing. If you would like to thank me with a contribution, please let me know and I will send you a link. Thank you!

My Other Sites

My Programs

Here is a selection of some (other) programs I have written, most of which run under GNU/Linux from the command line (CLI), are freely available and can be obtained by clicking on the links. Dependencies are shown; while most were written and tested on an x86-based Linux server, they should also run on a Raspberry Pi, and many can run under Windows using Windows Subsystem for Linux (WSL) or Cygwin. Email me if you have problems or questions, or if you think I could help with a programming requirement.

Backup Utilities

Debian/Ubuntu kernel and LVM Utilities

Miscellaneous Programs

Comments

This section is closed. If you have a question, please submit it by email, thank you.


If you have any questions or comments, please email Dominic dominic@timedicer.co.uk.