
Web Scraping with Python: Step-By-Step (2022)

Web scraping has become one of the most powerful techniques in the data science community for collecting data from the internet. Often we rely on datasets created by someone else, but it is important to equip yourself with the requisite skills to create your own custom datasets. That is why large companies like Amazon, CNN, Target, and Walmart use web scraping to get ahead and stay ahead with data. It is an indispensable growth tool and one of their best-kept secrets, and it can easily be yours too.


In this article, you will learn how to scrape data with Python. By the end of this post, you will understand the most important components of web scraping and have the skills to build your own web scraper.

So whether you’re a data scientist or machine learning engineer looking to create new datasets, or a web developer with a general interest in automating tasks, this article delivers an in-depth presentation of web scraping basics and approaches that you can easily apply in the real business world or in your personal projects.

What is Web Scraping?

Web scraping, also called web data extraction, is an automated process of collecting publicly available web data from targeted websites. Instead of gathering data manually, web scraping software can be used to acquire a vast amount of information automatically, making the process much faster.

Why is Web Scraping Important?

Some websites can contain a very large amount of invaluable data such as stock prices, product details, sports stats, you name it. If you want to access this information, you either have to consume it in whatever format the website presents or copy and paste it manually into a new document. This can be pretty tedious when you want to extract a lot of information from a website, and this is where web scraping can help. Instead of collecting this data manually, in most cases, software tools called web scrapers are preferred because they are less expensive than human labor and they work at a faster rate. Web scrapers can run on your PC or in a data center.

How Does Web Scraping Work?

The web scraping process involves three main steps:

Step 1: Retrieving content from a website

You retrieve content from the targeted website using web scraping software, also called a web scraper, which makes HTTP requests to specific URLs. Depending on your goals, experience, and budget, you can either buy a web scraping service or build your own web scraper. The web scraper is given one or more URLs to scrape. The scraper then loads the entire HTML code for the requested pages. More advanced scrapers will render the entire web page, including CSS and JavaScript elements.

Step 2: Extracting the required data

The specific information you need from the HTML is parsed by the web scraper according to your requirements.

Step 3: Store parsed data

The final step is storing parsed data. The data is typically stored in CSV or JSON formats for further use.
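
To make these three steps concrete, here is a minimal end-to-end sketch in Python, using the requests and BeautifulSoup libraries covered later in this article. The URL and the h2 selector are placeholders; substitute whatever site and elements you actually target.

import csv

import requests
from bs4 import BeautifulSoup

# Step 1: retrieve content from the website with an HTTP request.
response = requests.get('https://example.com')  # placeholder URL
response.raise_for_status()  # stop early if the request failed

# Step 2: parse the HTML and extract the required data.
# As an illustration, collect the text of every <h2> heading.
soup = BeautifulSoup(response.text, 'html.parser')
headings = [h2.get_text(strip=True) for h2 in soup.find_all('h2')]

# Step 3: store the parsed data, here as a one-column CSV file.
with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['heading'])
    writer.writerows([h] for h in headings)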

Web Scraping Use Cases

Businesses use web scraping for various purposes, such as market research, brand protection, price monitoring, SEO monitoring, travel fare aggregation, and review monitoring. Let’s have a look at some of these use cases in more detail:

  1. Ticket Bots – Web scrapers are increasingly used to carry out tasks related to ticketing, such as scraping pricing details, checking inventory for newly released seats, and purchasing tickets. For example, ticket brokers reportedly used ticket bots to flood Ticketmaster during the presale for Taylor Swift’s Eras Tour.
  2. Competitive Price Monitoring – Since businesses need to keep up with ever-changing market prices, scraping product prices from sites like Amazon or eBay is vital to forming accurate pricing strategies. For example, you can use web scraping to export a list of product names and prices from Amazon into an Excel spreadsheet and conduct competitor analysis (see the sketch after this list).
  3. Market Research – Web scraping is broadly used for market research. To stay competitive, companies need to know their market and analyze competitors’ data.
  4. SEO Monitoring – Web scraping allows companies to conduct SEO audits for tracking their results and progress in search engine rankings.
  5. Review Monitoring – Web scraping can also be used to track customer reviews and achieve marketing goals.
  6. Generating Leads – Scraping data from LinkedIn, Yellow Pages, or Yelp to generate leads.
  7. Travel Fare Aggregation – Travel companies use web scrapers to search for deals across multiple websites and publish the aggregated results on their websites.
  8. Scraping Stock Prices – to make better investment decisions.
  9. Scraping Sports Stats – for betting or fantasy leagues.
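
To illustrate the price monitoring use case, here is a minimal sketch that parses product names and prices out of an HTML fragment and writes them to a CSV file that opens directly in Excel. The HTML structure and class names are invented for illustration; a real product page would need its own selectors.

import pandas as pd
from bs4 import BeautifulSoup

# A made-up HTML fragment standing in for a downloaded product listing.
html = '''
<div class="product"><span class="name">Widget A</span><span class="price">$19.99</span></div>
<div class="product"><span class="name">Widget B</span><span class="price">$24.50</span></div>
'''

soup = BeautifulSoup(html, 'html.parser')
rows = [{'name': p.select_one('.name').get_text(),
         'price': p.select_one('.price').get_text()}
        for p in soup.select('div.product')]

# Save to CSV for competitor analysis in a spreadsheet.
pd.DataFrame(rows).to_csv('competitor_prices.csv', index=False)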

Web Scraping Tools

Now that you know the basics of web scraping, you’re probably wondering: what is the best web scraper for you? The obvious answer is that it depends! The more you know about your scraping needs, the easier it is to pick the right tool. Websites come in many shapes and formats, and as a result web scrapers vary in functionality and features. For example, a web scraper can come as a browser extension, as a more powerful desktop application that you download to scrape sites locally using your own computer resources and internet connection, or as a service deployed in the cloud.

Therefore, as you can see, the market provides a range of automated web scrapers. Among the most commonly used scraping tools are Octoparse and ParseHub. These apps can automate data extraction from multiple online sources as long as you know what type of content you’re looking for.

Is Web Scraping Legal?

With web scraping gaining more popularity, more questions regarding its legality are starting to come up. Even though web scraping isn’t illegal by itself, and there are no clear laws or regulations to address its application, it’s important to comply with all other laws and regulations regarding the source targets and the data itself. Generally, scraping publicly available data, or anything that you can see without logging into the website, is legal according to a U.S. appeals court ruling as long as your scraping activities do not harm the scraped website’s operations.

Here are some situations in which web scraping may be illegal that you should consider:

  1. Scraping intellectual property – You have to ensure that you are not breaching laws that apply to copyrighted data such as designs, articles, videos, and anything else that can be considered creative work. Scraping copyrighted data for personal purposes is usually OK, as it can fall under the fair use provisions of intellectual property law. However, sharing data to which you don’t hold the rights may be illegal.
  2. Scraping personal data – Be careful not to collect personally identifiable information (PII), since gathering it without knowledge or consent can violate privacy regulations. After scraping, double-check your output for data that would fall under the GDPR or CCPA.
  3. Scraping confidential data – Scraping private content without permission can also get you in trouble.

How to Build a Python Web Scraper?

Feeling adventurous? Just as anyone can build a website, you can build your own web scraper with a bit of coding. In the subsequent sections, you’re going to learn how to create a simple scraping bot using Python. Python has numerous libraries, including requests, BeautifulSoup, Selenium, Scrapy, and pandas, that make it easy to develop scraping software.

You will write a Python web scraper that downloads IMDb’s Top 250 movies chart (movie name, release year, rating, director, and stars). IMDb, or Internet Movie Database, is an online database of information related to movies, TV programs, home videos, video games, and online streaming content. It includes data such as cast, production crew, personal biographies, plot summaries, trivia, ratings, and critical reviews.

But first you will need to install a few Python libraries:

  1. requests: The first step in any web scraping workflow is to send an HTTP request to the web server that hosts the target page. Requests is a Python library that simplifies the process of sending HTTP requests to a specified URL. When you make a request to a URL, it returns a Response object.
  2. html5lib: A Python library for parsing HTML. It is designed to conform to the WHATWG HTML specification, which is implemented by all major web browsers.
  3. bs4: BeautifulSoup is a Python library that lets you conveniently parse HTML. Because BeautifulSoup can only parse data and can’t retrieve web pages itself, it’s often paired with the requests library: requests fetches the page, and BeautifulSoup parses the returned HTML.
  4. pandas: A library built on top of NumPy that provides data structures and operations for manipulating tabular and numerical data.
  5. lxml: A Python library for processing/parsing XML and HTML.
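
Note that html.parser (built into Python), lxml, and html5lib are interchangeable parser backends for BeautifulSoup, which is why both html5lib and lxml appear in the list above. Roughly speaking, lxml is the fastest and html5lib is the most lenient with broken markup. A small illustration, using a deliberately malformed HTML string:

from bs4 import BeautifulSoup

html = '<ul><li>one<li>two</ul>'  # unclosed <li> tags

# The same document parsed with three interchangeable backends.
print(BeautifulSoup(html, 'html.parser').prettify())  # ships with Python
print(BeautifulSoup(html, 'lxml').prettify())         # fast C-based parser
print(BeautifulSoup(html, 'html5lib').prettify())     # browser-grade leniency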

Steps

Steps to implement web scraping in Python to extract IMDb movies and their ratings:

Step 1: Install the requisite Python libraries

$ pip install requests beautifulsoup4 html5lib pandas lxml
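
If you prefer to keep these dependencies isolated from your system Python, you can install them inside a virtual environment first; the venv module ships with Python 3, and the environment name here is arbitrary:

$ python -m venv scraper-env
$ source scraper-env/bin/activate   # on Windows: scraper-env\Scripts\activate
$ pip install requests beautifulsoup4 html5lib pandas lxml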

Step 2: Import the required libraries

Open a text editor of your choice and paste the following code:

from bs4 import BeautifulSoup
import requests
import re
import pandas as pd

Step 3: Access the HTML content from the webpage

You can access the HTML content of the webpage by assigning its URL, downloading the page, and creating a soup object as follows:

# Downloading IMDb's top 250 movies data from the webpage.
# A browser-like User-Agent is sent because some sites, IMDb included,
# may reject requests that use the default requests User-Agent.
url = 'http://www.imdb.com/chart/top'
headers = {'User-Agent': 'Mozilla/5.0'}
response = requests.get(url, headers=headers)
response.raise_for_status()  # fail fast if the request was not successful

soup = BeautifulSoup(response.text, "html.parser")

Step 4: Extract the movie details

In the code snippet below, we extract data from the BeautifulSoup object using CSS selectors and tag attributes such as title and data-value.

# Each movie's rank, title, and year live in a 'titleColumn' table cell.
movies = soup.select('td.titleColumn')

# The link's title attribute holds the director and principal stars.
crew = [a.attrs.get('title') for a in soup.select('td.titleColumn a')]

# The rating is stored in a data-value attribute in the poster column.
ratings = [b.attrs.get('data-value')
           for b in soup.select('td.posterColumn span[name=ir]')]
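
If you want to sanity-check what these selectors return before going further, you can print the first match of each. On the page layout this article targets, the output should look roughly like '1. The Shawshank Redemption (1994)' followed by a director/stars string, though IMDb may change its markup at any time.

# Peek at the raw text of the first title cell and the first crew string.
print(movies[0].get_text())
print(crew[0])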

Step 5: Store the movie details

After extracting the movie details, you will create an empty list, store each movie’s details in a dictionary, and append the dictionaries to the list.

# Create an empty list for storing movie details.
movie_list = []

# Iterating over movies to extract each movie's metadata.
for index in range(0, len(movies)):

    # Separating each row's text into 'place', 'title', and 'year'.
    movie_string = movies[index].get_text()
    movie = ' '.join(movie_string.split()).replace('.', '')

    # The rank is index + 1; computing its length keeps the title
    # slice correct for 1-, 2-, and 3-digit ranks.
    place = str(index + 1)
    movie_title = movie[len(place) + 1:-7]
    year = re.search(r'\((.*?)\)', movie_string).group(1)

    data = {"place": place,
            "movie_title": movie_title,
            "rating": ratings[index],
            "year": year,
            "star_cast": crew[index],
            }
    movie_list.append(data)
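
The string slicing above is the subtle part, so here is a worked example of how one (illustrative) row is taken apart:

import re

# Suppose the first row's cleaned text looks like this.
movie = '1 The Shawshank Redemption (1994)'
index = 0

place = str(index + 1)                          # '1'
movie_title = movie[len(place) + 1:-7]          # 'The Shawshank Redemption'
year = re.search(r'\((.*?)\)', movie).group(1)  # '1994'
print(place, movie_title, year)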

Step 6: Display the movie details

With movie_list now populated with IMDb’s top movies and their metadata, it’s time to display the details.

for movie in movie_list:
    print(movie['place'], '-', movie['movie_title'], '('+movie['year'] +
          ') -', 'Starring:', movie['star_cast'], movie['rating'])

Step 7: Save the data in a CSV file

The following lines of code will save the data into a .csv file.

# Saving the list as a dataframe,
# then converting it into a .csv file.
df = pd.DataFrame(movie_list)
df.to_csv('imdb_top_250_movies.csv', index=False)
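
To confirm the file was written correctly, you can load it back with pandas:

# Read the CSV back and show the first few rows.
print(pd.read_csv('imdb_top_250_movies.csv').head())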

Complete Code:

from bs4 import BeautifulSoup
import requests
import re
import pandas as pd


# Downloading IMDb's top 250 movies data.
# A browser-like User-Agent is sent because some sites, IMDb included,
# may reject requests that use the default requests User-Agent.
url = 'http://www.imdb.com/chart/top'
headers = {'User-Agent': 'Mozilla/5.0'}
response = requests.get(url, headers=headers)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

movies = soup.select('td.titleColumn')
crew = [a.attrs.get('title') for a in soup.select('td.titleColumn a')]
ratings = [b.attrs.get('data-value')
           for b in soup.select('td.posterColumn span[name=ir]')]

# Create an empty list for storing movie information.
movie_list = []

# Iterating over movies to extract each movie's details.
for index in range(0, len(movies)):

    # Separating each row's text into 'place', 'title', and 'year'.
    movie_string = movies[index].get_text()
    movie = ' '.join(movie_string.split()).replace('.', '')

    place = str(index + 1)
    movie_title = movie[len(place) + 1:-7]
    year = re.search(r'\((.*?)\)', movie_string).group(1)

    data = {"place": place,
            "movie_title": movie_title,
            "rating": ratings[index],
            "year": year,
            "star_cast": crew[index],
            }
    movie_list.append(data)

# Printing each movie's details with its rating.
for movie in movie_list:
    print(movie['place'], '-', movie['movie_title'], '('+movie['year'] +
          ') -', 'Starring:', movie['star_cast'], movie['rating'])

# Saving the data to a CSV file.
df = pd.DataFrame(movie_list)
df.to_csv('imdb_top_250_movies.csv', index=False)

Output:

Run this code in your terminal or IDE. Each movie prints on its own line, and the data is saved to imdb_top_250_movies.csv in your working directory.

Conclusion

To sum everything up, web scraping is an automated process of data collection. Companies may use it for different purposes, such as generating leads, competitive data mining, and stock market analysis. Web scraping is a legal activity as long as it does not break any laws regarding the source targets or the data itself; however, before engaging in any sort of web scraping activity, you should get professional legal advice regarding your specific situation. You also have to consider the risks of scraping carelessly, such as getting your IP address blocked. That’s pretty much it about web scraping. There is still a lot to explore on the topic, so I suggest you familiarize yourself with some of the most common scraping techniques and sharpen your Python programming skills while you’re at it! If you have any questions, don’t hesitate to drop us a line in the comment section.

If you wish to engage in web scraping but lack adequate time or skills, you can access the help you need on Next Idea Tech. Get started by hiring our web scraping experts today.
