Automate a Redirect Map with Python: A Step-by-Step Guide
Creating redirect maps is an essential part of managing large websites and overseeing site migrations, and it can eat up a significant amount of time. If you’ve ever had to manually match hundreds or even thousands of URLs between an old and a new site, you know how tedious and error-prone the work is. This article shows you how to automate the process with Python, saving hours of work and reducing the chances of mismatches.
Why Automate a Redirect Map?
Redirect maps are vital for maintaining SEO equity when URLs change. They help ensure that users and search engines can still find content even when it’s been moved or renamed. Manually building these maps is often challenging, especially for larger sites, but with automation, you can:
- Save time by eliminating manual URL comparisons.
- Reduce errors with a reliable script that identifies the best URL matches.
- Focus on more complex tasks by allowing automation to handle basic redirects.
How the Script Works
This script automates the matching process by analyzing content similarities between two sets of URLs. Here’s a breakdown:
- URL Import: Two TXT files are imported — one for the source website URLs and one for the target website URLs.
- Content Extraction: Using BeautifulSoup, the script scrapes and extracts the main body content from each URL, ignoring headers, footers, and irrelevant elements.
- Content Matching: The Python library PolyFuzz is used to calculate the similarity percentage between the content of the source and target URLs. This helps find the closest content match.
- Results Export: The results, including similarity percentages, are saved in a CSV file, allowing you to manually review low-similarity URLs and ensure accurate redirects.
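The pairing logic at the heart of the matching step can be illustrated without PolyFuzz: for each source document, score it against every target document and keep the highest-scoring one. The sketch below is only a stand-in, using the standard library's difflib instead of PolyFuzz's TF-IDF model, and the page texts are invented:

```python
from difflib import SequenceMatcher

def best_match(source_text, target_texts):
    """Return (best_target_index, similarity) for one source document.
    Illustrative stand-in for PolyFuzz's TF-IDF matcher using difflib."""
    scores = [SequenceMatcher(None, source_text, t).ratio() for t in target_texts]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

# Hypothetical scraped content for two old pages and two new pages
sources = ["blue widget product page", "contact our support team"]
targets = ["contact the support team", "product page for blue widgets"]

for s in sources:
    idx, score = best_match(s, targets)
    print(s, "->", targets[idx], round(score, 2))
```

PolyFuzz does the same one-to-many scoring, but with TF-IDF vectors that scale far better to full page bodies than character-level diffing does.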
The Python Script
Here’s the Python code you can use to automate your redirect map:
```python
# Import necessary libraries
from bs4 import BeautifulSoup, SoupStrainer
from polyfuzz import PolyFuzz
import concurrent.futures
import pandas as pd
import requests

# Import URLs
with open("source_urls.txt", "r") as file:
    url_list_a = [line.strip() for line in file]
with open("target_urls.txt", "r") as file:
    url_list_b = [line.strip() for line in file]

# Create a content scraper using BeautifulSoup
def get_content(url_argument):
    page_source = requests.get(url_argument).text
    # Only parse <p> tags, skipping headers, footers, and navigation
    strainer = SoupStrainer('p')
    soup = BeautifulSoup(page_source, 'lxml', parse_only=strainer)
    paragraph_list = [element.text for element in soup.find_all('p')]
    return " ".join(paragraph_list)

# Scrape the URLs for content
with concurrent.futures.ThreadPoolExecutor() as executor:
    content_list_a = list(executor.map(get_content, url_list_a))
    content_list_b = list(executor.map(get_content, url_list_b))

content_dictionary = dict(zip(url_list_b, content_list_b))

# Get content similarities via PolyFuzz
model = PolyFuzz("TF-IDF")
model.match(content_list_a, content_list_b)
data = model.get_matches()

# Map the matched content back to its target URL
def get_key(argument):
    for key, value in content_dictionary.items():
        if argument == value:
            return key
    return None  # no target content matched

with concurrent.futures.ThreadPoolExecutor() as executor:
    result = list(executor.map(get_key, data["To"]))

# Create a dataframe for the final results and export to a CSV file
df = pd.DataFrame(zip(url_list_a, result, data["Similarity"]),
                  columns=["From URL", "To URL", "% Identical"])
df.to_csv("redirect_map.csv", index=False)
```
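On real migrations, some source URLs will time out or return errors, and an unhandled exception would kill the whole run. One way to harden the scraper (an optional addition, not part of the script above) is to wrap the request in a try/except and return an empty string on failure, so a dead page simply scores as a poor match:

```python
import requests
from bs4 import BeautifulSoup, SoupStrainer

def get_content_safe(url_argument, timeout=10):
    """Like get_content, but returns "" instead of raising when a URL
    times out, 404s, or is otherwise unreachable (hypothetical helper)."""
    try:
        response = requests.get(url_argument, timeout=timeout)
        response.raise_for_status()
    except requests.RequestException:
        return ""
    strainer = SoupStrainer('p')
    soup = BeautifulSoup(response.text, 'lxml', parse_only=strainer)
    return " ".join(element.text for element in soup.find_all('p'))
```

Swap this in for `get_content` in the `executor.map` calls if your URL lists include pages that may already be offline.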
How to Use the Script
- Prepare your data: Create two TXT files, source_urls.txt and target_urls.txt. List the URLs of the old site (source) and the new site (target) in these files, one URL per line.
- Run the script: It will extract the content from each URL, compare the two sets, and generate a CSV file (redirect_map.csv) with the redirect matches and their similarity percentages.
- Review the results: Manually check URLs with a low similarity percentage to ensure they redirect to the most appropriate content.
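To speed up that review, you can load the CSV back into pandas and pull out only the rows below a chosen threshold. The 0.8 cutoff below is an arbitrary starting point and the sample rows are invented; they just mirror the shape of redirect_map.csv:

```python
import pandas as pd

# Invented sample rows shaped like redirect_map.csv
df = pd.DataFrame({
    "From URL": ["https://old.example/a", "https://old.example/b"],
    "To URL": ["https://new.example/a", "https://new.example/x"],
    "% Identical": [0.97, 0.41],
})

# Keep only the weak matches that need a human decision
needs_review = df[df["% Identical"] < 0.8]
print(needs_review)
```

Tune the threshold to your site: template-heavy pages often score lower even when the match is correct, so start loose and tighten as you spot-check.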
Final Thoughts
Automating a redirect map saves time, reduces human error, and allows SEOs and developers to focus on higher-level tasks. By using Python and libraries like BeautifulSoup and PolyFuzz, you can quickly build an efficient workflow for redirect management.
Implement this method today to streamline your site migrations and ensure a smooth user experience!