Timing Python Processes


There are several ways to time Python processes. One of the most common is to use the standard library's time module. Here's an example.

Suppose we want to scrape the HTML from some collection of links. In this case, we're going to get a collection of URLs from Bloomberg's homepage. To do this, we'll use BeautifulSoup to get a list of full-path URLs; running the code below gives us a list of over 200 of them. This first section of code should run pretty quickly. Where timing a process comes in is when we want to cycle through some (or all) of these links and scrape the HTML from each page.

# load packages
import time
from bs4 import BeautifulSoup
import requests

# get HTML of Bloomberg's website
html = requests.get("https://www.bloomberg.com/").content

# convert HTML to BeautifulSoup object
soup = BeautifulSoup(html, "lxml")

# find all links on Bloomberg's homepage
links = soup.find_all("a")

# get the URL of each link object
urls = [x.get("href") for x in links]

# filter out URLs that are None
urls = [x for x in urls if x is not None]

# get only the URLs with a full path
with_http = [x for x in urls if "http" in x]

# check how many URLs we have
print(len(with_http))

Let’s suppose we want to get the HTML of the first 50 webpages associated with our URLs. This could take a little bit of time, so let’s time it using the time package.

# get start epoch time
start = time.time()

# create an empty dictionary to store HTML
# from each page
html_store = {}

# get HTML from first 50 webpages
for url in with_http[:50]:
    html = requests.get(url).content
    html_store[url] = html

# get end epoch time
end = time.time()

# print number of seconds the above process took
print(end - start)

Above, we’re calling time.time(), which returns the current epoch time, i.e. the number of seconds since January 1, 1970. Since we call it immediately before the for loop and again immediately after, we can simply subtract the two values to get the number of seconds our process took to run.
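As a side note, for measuring elapsed time the standard library also provides time.perf_counter(), a higher-resolution clock designed specifically for timing code (its absolute value is arbitrary, so it's only meaningful as a difference). Here's a minimal sketch of the same pattern, using a dummy computation in place of the web requests:

```python
import time

# perf_counter is a high-resolution clock intended for
# measuring elapsed time; only differences are meaningful
start = time.perf_counter()

# stand-in for a longer-running process
total = sum(i * i for i in range(1_000_000))

end = time.perf_counter()

# print elapsed seconds
print(end - start)
```

The structure is identical to the time.time() version; only the clock changes.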

Although this is useful for telling us how long our process took, it doesn't tell us anything while the process is actually running; we have to wait until it finishes to find out the total time. A solution to this problem is an excellent package called tqdm. The tqdm package displays a progress status as a process runs, showing how far along the loop Python has gotten. Let's use it in our code from above.

# get the tqdm function from the tqdm package
from tqdm import tqdm

# create an empty dictionary to store HTML
# from each page
html_store = {}

# get HTML from first 50 webpages
for url in tqdm(with_http[:50]):
    html = requests.get(url).content
    html_store[url] = html

In the above code, we simply wrap the list of URLs we're looping through in the tqdm function, which we import from the tqdm package.
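tqdm also accepts optional arguments that make the progress bar more informative. For example, desc adds a label to the bar, and total supplies the length up front, which is useful when the iterable is a generator whose length tqdm can't determine on its own. A small sketch, using a dummy loop in place of the live web requests:

```python
from tqdm import tqdm

results = []

# desc labels the progress bar; total tells tqdm the
# expected number of iterations up front
for n in tqdm(range(50), desc="scraping", total=50):
    results.append(n * 2)
```

Everything else about the loop stays the same; only the bar's appearance changes.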

Similarly, we could use tqdm with list or dictionary comprehensions. Let's change our code from above to work as a dictionary comprehension.

html_store = {url : requests.get(url).content for url in tqdm(with_http[:50])}

If you run either of the above pieces of code, you'll see the progress status printed out: a count from 1 to 50, so you can see precisely how far your process has progressed, along with an estimate of how much time is left.
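As one last aside, when looping over a range of numbers (rather than a list of URLs), tqdm provides trange as a convenient shorthand for tqdm(range(...)):

```python
from tqdm import trange

squares = []

# trange(50) is equivalent to tqdm(range(50))
for i in trange(50):
    squares.append(i ** 2)
```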

Please check out my other posts by clicking one of the links on the right side of the page, or by perusing through the archives here: http://theautomatic.net/blog/.

If you have requests for tutorials, please leave a note on the Contact page.