Concurrency in Python: Making Your Programs Do More at Once

Alright, so you’ve been coding for a while now, and you’ve probably encountered a situation where you want your program to do multiple things at once. For example, you might want to download files, process images, or fetch data from multiple APIs simultaneously. In such cases, Concurrency in Python comes in like a superhero, saving the day by letting your program handle multiple tasks at the same time without breaking a sweat.

In this post, we’re going to dive deep into Concurrency in Python. We’ll explore what it is, why you should care about it, and how you can use Python’s tools to make your programs smarter, faster, and more efficient.

But before we jump in, if you haven’t checked out our last article on Testing and Debugging Python Code, you should definitely give it a read. It’s packed with tips and tools that will help you write clean, bug-free code—and we all know that writing great code is key to building awesome projects!

Now, let’s talk about Concurrency and how to master it in Python.


What is Concurrency?

Imagine you’re at a coffee shop, trying to order a drink. There are two ways this can go down:

  1. Sequential: The barista takes your order, makes your coffee, hands it to you, and then takes the next order. Everything happens one after the other, which can be slow if the shop is busy.

  2. Concurrent: The barista takes your order, starts making your coffee, and while the coffee is brewing, they take other customers’ orders. Multiple things are happening at once, and the shop can handle more customers in the same amount of time.

Concurrency is the ability of your program to manage multiple tasks at the same time. It’s not necessarily about doing everything simultaneously; it’s about making the most of your time by switching between tasks efficiently.

For example, if your program is downloading files from the web, you don’t have to wait for one file to finish before starting the next. Concurrency lets you download multiple files at once, saving time.
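To make that concrete, here’s a minimal sketch that simulates three downloads with a sleep (the `fake_download()` function is a stand-in for real network I/O). Run one after the other, the waits add up; run on threads, they overlap:

```python
import threading
import time

# Hypothetical stand-in for a download: sleep() plays the role of a network wait
def fake_download(seconds):
    time.sleep(seconds)

# Sequential: the waits add up (3 x 0.2s is about 0.6s)
start = time.perf_counter()
for _ in range(3):
    fake_download(0.2)
sequential = time.perf_counter() - start

# Concurrent: the waits overlap, so the total is roughly one wait (about 0.2s)
start = time.perf_counter()
threads = [threading.Thread(target=fake_download, args=(0.2,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
concurrent = time.perf_counter() - start

print(f"Sequential: {sequential:.2f}s, concurrent: {concurrent:.2f}s")
```

The exact numbers will vary from machine to machine, but the concurrent version should finish in roughly a third of the time.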


Why Should You Care About Concurrency?

You may be thinking: “But my program is already working fine. Why do I need concurrency?” Good question! Here are a few reasons why you should care about it:

  1. Faster Execution: With concurrency, you can run multiple tasks at once. This can speed up your program significantly, especially when it involves many independent tasks.

  2. Better Resource Utilization: If you have a computer with multiple cores or processors, concurrency lets you take advantage of them. This is perfect for CPU-heavy tasks like image processing or data analysis.

  3. Responsive Applications: If you’re building an app (say a web server or a chatbot), you want it to respond quickly. Concurrency can help your app stay responsive while doing long-running tasks, like downloading files or making database queries.


How Does Concurrency Work in Python?

Python offers a few tools to handle concurrency, each suited to different situations. Let’s break them down:

1. Threading

In Python, threads are the smallest unit of execution. You can think of threads as tiny workers that share the same resources (like memory) but can run independently. If you want to run multiple tasks that can work at the same time but don’t need heavy CPU resources, threading is a good option.

Example: Using Threading to Download Multiple Files

Let’s say you want to download multiple files at once. Using threading, you can start all the downloads together instead of waiting for each one to finish before beginning the next.

import threading
import requests

# Function to download a file
def download_file(url):
    response = requests.get(url)
    with open(url.split('/')[-1], 'wb') as f:
        f.write(response.content)
    print(f"Downloaded {url}")

# List of files to download
urls = [
    "https://example.com/file1.jpg",
    "https://example.com/file2.jpg",
    "https://example.com/file3.jpg"
]

# Create threads for each download
threads = []
for url in urls:
    thread = threading.Thread(target=download_file, args=(url,))
    threads.append(thread)
    thread.start()

# Wait for all threads to finish
for thread in threads:
    thread.join()

print("All downloads completed!")

What’s Happening Here?

  • We define a function download_file() to download a file from a URL.
  • We create a new thread for each file and start it, meaning all the downloads begin at the same time.
  • Finally, we call join() on each thread to wait for them to finish before printing “All downloads completed!”

Threading works great for tasks that spend most of their time waiting rather than computing, such as downloading files or fetching data from a database. These are known as I/O-bound tasks.
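As a side note, the standard library’s concurrent.futures module wraps the create/start/join pattern above in a higher-level thread pool API. Here’s a minimal sketch; `fetch()` is a hypothetical stand-in that just derives a filename from the URL, so the example runs without touching the network:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a download: just extracts the filename
def fetch(url):
    return url.split('/')[-1]

urls = [
    "https://example.com/file1.jpg",
    "https://example.com/file2.jpg",
    "https://example.com/file3.jpg",
]

# The pool creates the threads, schedules the work, and joins them for us
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(fetch, urls))

print(results)
```

In real code, you would replace `fetch()` with a function that performs the actual download, like the `download_file()` above.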

For more information on Python’s threading module, check out the official documentation.

2. Multiprocessing

While threading is great for I/O-bound tasks (like downloading files or waiting for user input), Python threads have a limitation when it comes to CPU-bound tasks. The Global Interpreter Lock (GIL) makes it so that only one thread can execute Python bytecode at a time. This means that threading won’t help much if you’re trying to do CPU-heavy tasks, like image processing or data analysis.

That’s where multiprocessing comes in. With multiprocessing, you create separate processes that run independently and can fully utilize multiple CPU cores. This is perfect for parallelizing tasks that require heavy computation.

Example: Using Multiprocessing to Process Data in Parallel

Let’s say you have a huge list of numbers, and you want to calculate their squares in parallel.

import multiprocessing

# Function to calculate the square of a number
def calculate_square(number):
    print(f"Square of {number} is {number ** 2}")

if __name__ == "__main__":
    # The __main__ guard is required on platforms that spawn new processes
    # (Windows, macOS); without it, each child would re-run this code.
    numbers = [1, 2, 3, 4, 5]

    # Create processes for each calculation
    processes = []
    for number in numbers:
        process = multiprocessing.Process(target=calculate_square, args=(number,))
        processes.append(process)
        process.start()

    # Wait for all processes to finish
    for process in processes:
        process.join()

    print("All calculations completed!")

What’s Happening Here?

  • We define a function calculate_square() to calculate the square of a number.
  • We create a new process for each number and start it, meaning all the calculations happen at the same time.
  • We call join() to wait for all processes to finish.

Multiprocessing is ideal for tasks that need to run on multiple CPU cores. It’s especially useful for CPU-bound tasks, such as processing large datasets, image manipulation, or deep learning.
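For this kind of map-a-function-over-a-list work, multiprocessing.Pool is often more convenient than managing Process objects by hand. A minimal sketch (again with the `if __name__ == "__main__":` guard, which platforms that spawn new processes require):

```python
from multiprocessing import Pool

# Worker function: must be defined at module level so it can be pickled
def square(n):
    return n ** 2

if __name__ == "__main__":
    # A pool of worker processes; map() splits the list across them
    # and collects the results in order
    with Pool(processes=4) as pool:
        results = pool.map(square, [1, 2, 3, 4, 5])
    print(results)  # [1, 4, 9, 16, 25]
```

Unlike the Process example above, the pool returns values directly, which is usually what you want for computations.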

For more on the multiprocessing module, take a look at the official documentation.

3. Asyncio

If you’re working with I/O-bound tasks (like downloading files, reading from a database, or querying a web server), asyncio is a powerful tool that lets you write concurrent code in an asynchronous style. Unlike multiprocessing, which runs tasks in parallel on separate CPU cores, asyncio runs tasks concurrently on a single thread, switching between them whenever one is waiting on I/O, without creating new threads or processes.

This is particularly useful when you have many tasks that are waiting for input or data, and you want to make efficient use of your time.

Example: Using Asyncio to Download Multiple Files Concurrently

import asyncio
import aiohttp

# Function to download a file asynchronously
async def download_file(session, url):
    async with session.get(url) as response:
        with open(url.split('/')[-1], 'wb') as f:
            f.write(await response.read())
        print(f"Downloaded {url}")

# List of files to download
urls = [
    "https://example.com/file1.jpg",
    "https://example.com/file2.jpg",
    "https://example.com/file3.jpg"
]

# Run the download tasks concurrently, sharing one session
async def main():
    async with aiohttp.ClientSession() as session:
        tasks = [download_file(session, url) for url in urls]
        await asyncio.gather(*tasks)

# Start the asyncio event loop
asyncio.run(main())

print("All downloads completed!")

What’s Happening Here?

  • We use aiohttp to make asynchronous HTTP requests.
  • download_file() is now an async function, and we use await to read the response without blocking the program.
  • The asyncio.gather() function allows us to run multiple tasks concurrently, and asyncio.run() starts the event loop.

Asyncio is excellent for I/O-bound tasks where you need to wait for things like web requests or database queries. It makes your program efficient by allowing it to perform other tasks while waiting for I/O operations to complete.
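To see that overlap in action without any network code, here’s a minimal sketch where `fake_fetch()` is a hypothetical stand-in for an I/O-bound task, using asyncio.sleep() to simulate the wait. Because the waits overlap, the whole batch takes about as long as the slowest task, not the sum:

```python
import asyncio
import time

# Hypothetical stand-in for an I/O-bound task, like a web request
async def fake_fetch(name, seconds):
    # await hands control back to the event loop during the wait
    await asyncio.sleep(seconds)
    return name

async def main():
    start = time.perf_counter()
    # Three 0.2s waits run concurrently, so this takes about 0.2s, not 0.6s
    results = await asyncio.gather(
        fake_fetch("a", 0.2),
        fake_fetch("b", 0.2),
        fake_fetch("c", 0.2),
    )
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results, f"({elapsed:.2f}s)")
```

Note that asyncio.gather() returns the results in the order the tasks were passed in, regardless of which one finishes first.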

For more on asyncio, check out the official documentation.


Final Thoughts

Concurrency in Python is a super useful tool for writing faster, more efficient programs. Whether you’re handling multiple I/O tasks using threading or asyncio, or you’re running CPU-heavy tasks in parallel with multiprocessing, Python provides you with everything you need to make your programs do more at once.

Don’t forget to check out our previous article on [Working with APIs in Python](/working-with-apis-in-python/).

The world of concurrency may seem complicated at first, but once you understand how to work with threads, processes, and async tasks, you’ll unlock a whole new level of efficiency and power in your Python programs.