My Brain Cells

Web Scraping with Python to Create ML/AI Datasets

In today’s data-driven world, collecting and preparing data for machine learning is a crucial task. One of the most effective ways to obtain data for your ML projects is through web scraping. In this blog post, we will explore the fascinating world of web scraping using Python and guide you on how to create a dataset for machine learning. We’ll cover essential techniques and provide practical code examples to help you get started.

What is Web Scraping?

Web scraping is the process of extracting data from websites. It involves fetching web pages and then parsing the HTML to extract the information you need. Python is an excellent choice for web scraping due to its robust libraries and packages like BeautifulSoup and Requests, which simplify the process.

Prerequisites

Before we dive into web scraping, make sure you have Python installed on your machine. You’ll also need to install the following Python libraries:

pip install requests
pip install beautifulsoup4
pip install pandas

Web Scraping Process

  1. Select Your Data Source: First, choose the website from which you want to scrape data. Ensure that the website’s terms of service allow web scraping.
  2. Inspect the Website: Open the website in your browser and use the browser’s developer tools to inspect the HTML structure. This will help you identify the data you want to extract.
  3. Send an HTTP Request: Use the Requests library to send an HTTP GET request to the website’s URL.
  4. Parse the HTML: Parse the HTML content of the page using BeautifulSoup, which makes it easy to navigate and extract data.
  5. Extract Data: Find and extract the specific data elements you need by navigating the HTML structure. This can involve using tags, classes, or other attributes.
  6. Store the Data: Store the extracted data in a structured format, such as a CSV file or a database, for later use in machine learning.
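Step 1 mentions checking whether a site permits scraping. Python’s standard library includes `urllib.robotparser` for reading a site’s robots.txt rules. Here is a minimal sketch using a hypothetical robots.txt (in practice, you would fetch the file from `https://<site>/robots.txt`):

```python
from urllib import robotparser

# Hypothetical robots.txt content for illustration only
robots_txt = """
User-agent: *
Disallow: /private/
Allow: /books/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(robots_txt)

# Check whether a given URL may be fetched by any user agent
print(rp.can_fetch("*", "https://example.com/books/"))    # True
print(rp.can_fetch("*", "https://example.com/private/"))  # False
```

Note that robots.txt expresses the site operator’s wishes; you should still review the site’s terms of service.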

Example: Web Scraping with Python

Let’s demonstrate the web scraping process using a simple example. We’ll scrape a list of popular books from a fictional bookstore website.

import requests
from bs4 import BeautifulSoup
import pandas as pd

# Send an HTTP request to the website
# (placeholder URL -- replace with the site you are scraping)
url = "https://example.com/popular-books"
response = requests.get(url)
response.raise_for_status()  # stop early on a failed request

# Parse the HTML content
soup = BeautifulSoup(response.content, "html.parser")

# Extract book titles and authors
book_titles = []
authors = []

for book in soup.find_all("div", class_="book"):
    title = book.find("h2", class_="title").text.strip()
    author = book.find("p", class_="author").text.strip()
    book_titles.append(title)
    authors.append(author)

# Create a dataset
book_data = pd.DataFrame({"Title": book_titles, "Author": authors})

# Save the dataset to a CSV file
book_data.to_csv("popular_books.csv", index=False)

Data Cleaning and Preprocessing

Once you have scraped the data, it’s essential to clean and preprocess it before using it for machine learning. Data cleaning involves handling missing values, removing duplicates, and addressing outliers. Preprocessing might include data normalization, one-hot encoding, and feature scaling, depending on your ML model’s requirements.
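As a concrete sketch of these cleaning steps with pandas, the example below builds a small hypothetical scraped dataset (the column names and values are made up for illustration), removes duplicates, fills and drops missing values, one-hot encodes a categorical column, and min-max scales a numeric one:

```python
import pandas as pd

# A small hypothetical scraped dataset with typical problems:
# a duplicate row, a missing price, and a missing category
raw = pd.DataFrame({
    "Title": ["Book A", "Book A", "Book B", "Book C"],
    "Genre": ["Fiction", "Fiction", "Sci-Fi", None],
    "Price": [10.0, 10.0, None, 30.0],
})

# Remove exact duplicate rows
clean = raw.drop_duplicates().copy()

# Fill missing numeric values with the median,
# drop rows missing the categorical label
clean["Price"] = clean["Price"].fillna(clean["Price"].median())
clean = clean.dropna(subset=["Genre"])

# One-hot encode the Genre column
clean = pd.get_dummies(clean, columns=["Genre"])

# Min-max scale Price to the [0, 1] range
clean["Price"] = (clean["Price"] - clean["Price"].min()) / (
    clean["Price"].max() - clean["Price"].min()
)

print(clean)
```

Which of these steps you apply, and in what order, depends on your data and your ML model’s requirements.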

Conclusion

Web scraping is a powerful technique for collecting data from the internet, and Python makes it accessible to developers and data scientists. In this blog post, we’ve covered the basics of web scraping and demonstrated how to create a dataset for machine learning using Python. Remember to respect the website’s terms of service and robots.txt file when scraping data, and always verify the legality and ethical implications of web scraping for a given website. Happy scraping and machine learning!
