How to Optimize Your Python Code Even If You’re a Newbie

Image by Author | Ideogram

 

Let’s be honest. When you’re learning Python, you’re probably not thinking about performance. You’re just trying to get your code to work! But here’s the thing: making your Python code faster doesn’t require you to become an expert programmer overnight.

With a few simple techniques that I’ll show you today, you can improve your code’s speed and memory usage significantly.

In this article, we’ll walk through five practical, beginner-friendly optimization techniques together. For each one, I’ll show you the “before” code (the way many beginners write it), the “after” code (the optimized version), and explain exactly why the improvement works and how much faster it gets.

🔗 Link to the code on GitHub

 

1. Replace Loops with List Comprehensions

 
Let’s start with something you probably do all the time: creating new lists by transforming existing ones. Most beginners reach for a for loop, but Python has a much faster way to do this.

 

Before Optimization

Here’s how most beginners would square a list of numbers:

import time

def square_numbers_loop(numbers):
    result = []
    for num in numbers:
        result.append(num ** 2)
    return result

# Let's test this with 1,000,000 numbers to see the performance
test_numbers = list(range(1000000))

start_time = time.time()
squared_loop = square_numbers_loop(test_numbers)
loop_time = time.time() - start_time
print(f"Loop time: {loop_time:.4f} seconds")

 

This code creates an empty list called result, then loops through each number in our input list, squares it, and appends it to the result list. Pretty straightforward, right?

 

After Optimization

Now let’s rewrite this using a list comprehension:

def square_numbers_comprehension(numbers):
    return [num ** 2 for num in numbers]  # Create the entire list in one line

start_time = time.time()
squared_comprehension = square_numbers_comprehension(test_numbers)
comprehension_time = time.time() - start_time
print(f"Comprehension time: {comprehension_time:.4f} seconds")
print(f"Enchancment: {loop_time / comprehension_time:.2f}x sooner")

 

This single line [num ** 2 for num in numbers] does exactly the same thing as our loop, but it tells Python “create a list where each element is the square of the corresponding element in numbers.”

Output:

Loop time: 0.0840 seconds
Comprehension time: 0.0736 seconds
Improvement: 1.14x faster

 

Performance improvement: List comprehensions are often 30-50% faster than equivalent loops. The improvement is more noticeable when you work with very large iterables.

Why does this work? List comprehensions are implemented in C under the hood, so they avoid much of the overhead that comes with explicit Python loops, things like repeated variable lookups and method calls that happen behind the scenes.
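By the way, single time.time() measurements like the ones above are noisy. For a more reliable comparison, the standard library’s timeit module runs a snippet many times and reports the total. Here’s a minimal sketch of how you might compare the two approaches (the repeat count of 10 is an arbitrary choice):

import timeit

setup = "numbers = list(range(1000000))"

loop_stmt = """
result = []
for num in numbers:
    result.append(num ** 2)
"""
comp_stmt = "[num ** 2 for num in numbers]"

# Each statement runs 10 times against the same setup
print("Loop:         ", timeit.timeit(loop_stmt, setup=setup, number=10))
print("Comprehension:", timeit.timeit(comp_stmt, setup=setup, number=10))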

 

2. Choose the Right Data Structure for the Job

 
This one’s huge, and it’s something that can make your code hundreds of times faster with just a small change. The key is knowing when to use lists versus sets versus dictionaries.

 

Before Optimization

Let’s say you want to find the common elements between two lists. Here’s the intuitive approach:

def find_common_elements_list(list1, list2):
    common = []
    for item in list1:  # Go through each item in the first list
        if item in list2:  # Check if it exists in the second list
            common.append(item)  # If yes, add it to our common list
    return common

# Test with fairly large lists
large_list1 = list(range(10000))
large_list2 = list(range(5000, 15000))

start_time = time.time()
common_list = find_common_elements_list(large_list1, large_list2)
list_time = time.time() - start_time
print(f"List approach time: {list_time:.4f} seconds")

 

This code loops through the first list, and for each item, it checks whether that item exists in the second list using if item in list2. The problem? When you do item in list2, Python has to search through the entire second list until it finds the item. That’s slow!

 

After Optimization

Here’s the same logic, but using a set for faster lookups:

def find_common_elements_set(list1, list2):
    set2 = set(list2)  # Convert the list to a set (a one-time cost)
    return [item for item in list1 if item in set2]  # Check membership in the set

start_time = time.time()
common_set = find_common_elements_set(large_list1, large_list2)
set_time = time.time() - start_time
print(f"Set approach time: {set_time:.4f} seconds")
print(f"Improvement: {list_time / set_time:.2f}x faster")

 

First, we convert the second list to a set. Then, instead of checking if item in list2, we check if item in set2. This tiny change makes membership testing nearly instantaneous.

Output:

List approach time: 0.8478 seconds
Set approach time: 0.0010 seconds
Improvement: 863.53x faster

 

Performance improvement: For large datasets, this can easily be hundreds of times faster.

Why does this work? Sets use hash tables under the hood. When you check whether an item is in a set, Python doesn’t search through every element; it uses the item’s hash to jump directly to where the item should be. It’s like using a book’s index instead of reading every page to find what you want.
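You can see the difference directly by timing the membership tests on their own. Here’s a minimal sketch (the sizes and repeat counts here are arbitrary); looking up the last element is the worst case for the list, since the scan has to go all the way to the end:

items = list(range(1000000))
items_set = set(items)
target = 999999  # The very last element: worst case for the list scan

start_time = time.time()
for _ in range(100):
    target in items      # Linear scan through the list every time
print(f"List membership (100 lookups): {time.time() - start_time:.4f} seconds")

start_time = time.time()
for _ in range(100):
    target in items_set  # Hash lookup, no scan
print(f"Set membership (100 lookups):  {time.time() - start_time:.4f} seconds")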

 

3. Use Python’s Built-in Functions Whenever Possible

 
Python comes with tons of built-in functions that are heavily optimized. Before you write your own loop or custom function to do something, check whether Python already has a function for it.

 

Before Optimization

Here’s how you might calculate the sum and maximum of a list if you didn’t know about built-ins:

def calculate_sum_manual(numbers):
    total = 0
    for num in numbers:
        total += num
    return total

def find_max_manual(numbers):
    max_val = numbers[0]
    for num in numbers[1:]:
        if num > max_val:
            max_val = num
    return max_val

test_numbers = list(range(1000000))

start_time = time.time()
manual_sum = calculate_sum_manual(test_numbers)
manual_max = find_max_manual(test_numbers)
manual_time = time.time() - start_time
print(f"Manual approach time: {manual_time:.4f} seconds")

 

The sum function starts with a total of 0, then adds each number to that total. The max function starts by assuming the first number is the maximum, then compares every other number against it to see if it’s bigger.
 

After Optimization

Here’s the same thing using Python’s built-in functions:

start_time = time.time()
builtin_sum = sum(test_numbers)    
builtin_max = max(test_numbers)    
builtin_time = time.time() - start_time
print(f"Constructed-in strategy time: {builtin_time:.4f} seconds")
print(f"Enchancment: {manual_time / builtin_time:.2f}x sooner")

 

That’s it! sum() adds up all the numbers in the list, and max() returns the largest one. Same result, much faster.

Output:

Manual approach time: 0.0805 seconds
Built-in approach time: 0.0413 seconds
Improvement: 1.95x faster

 

Performance improvement: Built-in functions are typically noticeably faster than manual implementations; here, roughly twice as fast.

Why does this work? Python’s built-in functions are written in C and heavily optimized, so the looping happens at C speed instead of one interpreted bytecode step at a time.
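The same advice applies well beyond sum() and max(). Before writing a loop, it’s worth checking whether a built-in like min(), len(), any(), all(), or sorted() already does the job. A few quick examples:

numbers = [3, 1, 4, 1, 5, 9, 2, 6]

print(min(numbers))                  # Smallest element: 1
print(len(numbers))                  # Number of elements: 8
print(sorted(numbers))               # A new sorted list
print(any(n > 8 for n in numbers))   # True: at least one element is > 8
print(all(n > 0 for n in numbers))   # True: every element is positive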

 

4. Perform Efficient String Operations with Join

 
String concatenation is something every programmer does, but most beginners do it in a way whose cost can grow quadratically as the strings get longer.

 

Earlier than Optimization

Here’s how you might build a CSV string by concatenating with the + operator:

def create_csv_plus(data):
    result = ""  # Start with an empty string
    for row in data:  # Go through each row of data
        for i, item in enumerate(row):  # Go through each item in the row
            result += str(item)  # Add the item to our result string
            if i < len(row) - 1:  # Add a comma between items, but not after the last one
                result += ","
        result += "\n"  # End each row with a newline
    return result

# Sample data: 1,000 rows of three values each
test_data = [[i, i * 2, i * 3] for i in range(1000)]

start_time = time.time()
csv_plus = create_csv_plus(test_data)
plus_time = time.time() - start_time
print(f"String concatenation time: {plus_time:.4f} seconds")

 

This code builds our CSV string piece by piece. For each row, it goes through each item, converts it to a string, and adds it to our result, inserting commas between items and newlines between rows.
 

After Optimization

Here’s the same code using the join method:

def create_csv_join(data):
    # For each row, join the items with commas, then join all rows with newlines
    return "\n".join(",".join(str(item) for item in row) for row in data)

start_time = time.time()
csv_join = create_csv_join(test_data)
join_time = time.time() - start_time
print(f"Join method time: {join_time:.4f} seconds")
print(f"Improvement: {plus_time / join_time:.2f}x faster")

 

This single line does a lot! The inner part ",".join(str(item) for item in row) takes each row and joins all of its items with commas. The outer part "\n".join(...) takes all those comma-separated rows and joins them with newlines.

Output:

String concatenation time: 0.0043 seconds
Join method time: 0.0022 seconds
Improvement: 1.94x faster

 

Performance improvement: String joining is much faster than repeated concatenation for large strings.

Why does this work? When you use += to concatenate strings, Python creates a new string object each time because strings are immutable. With large strings, this becomes incredibly wasteful. The join method figures out exactly how much memory it needs up front and builds the string once.
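As an aside, if you’re producing real CSV files, the standard library’s csv module also handles the edge cases that naive joining misses, such as commas embedded inside a field. A minimal sketch:

import csv
import io

rows = [[1, 2, 3], ["hello, world", 5, 6]]  # Note the embedded comma

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerows(rows)  # Quotes "hello, world" automatically
print(buffer.getvalue())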

 

5. Use Generators for Memory-Efficient Processing

 
Sometimes you don’t need to store all of your data in memory at once. Generators let you create data on demand, which can save huge amounts of memory.

 

Before Optimization

Here’s how you might process a large dataset by storing everything in a list:

import sys

def process_large_dataset_list(n):
    processed_data = []
    for i in range(n):
        # Simulate some data processing
        processed_value = i ** 2 + i * 3 + 42
        processed_data.append(processed_value)  # Store every processed value
    return processed_data

# Test with 100,000 items
n = 100000
list_result = process_large_dataset_list(n)
list_memory = sys.getsizeof(list_result)
print(f"List memory usage: {list_memory:,} bytes")

 

This function processes the numbers from 0 to n-1, applies a calculation to each one (squaring it, multiplying by 3, and adding 42), and stores all the results in a list. The problem is that we’re keeping all 100,000 processed values in memory at once.

 

After Optimization

Here’s the same processing using a generator:

def process_large_dataset_generator(n):
    for i in range(n):
        # Simulate some data processing
        processed_value = i ** 2 + i * 3 + 42
        yield processed_value  # Yield each value instead of storing it

# Create the generator (this doesn't process anything yet!)
gen_result = process_large_dataset_generator(n)
gen_memory = sys.getsizeof(gen_result)
print(f"Generator memory usage: {gen_memory:,} bytes")
print(f"Memory improvement: {list_memory / gen_memory:.0f}x less memory")

# Now we can process items one at a time
total = 0
for value in process_large_dataset_generator(n):
    total += value
    # Each value is produced on demand and can be garbage collected

 

The key difference is yield instead of append. The yield keyword makes this a generator function: it produces values one at a time instead of creating them all at once.

Output:

List memory usage: 800,984 bytes
Generator memory usage: 224 bytes
Memory improvement: 3576x less memory

 

Performance improvement: Generators can use orders of magnitude less memory for large datasets.

Why does this work? Generators use lazy evaluation: they only compute values when you ask for them. The generator object itself is tiny; it just remembers where it is in the computation.
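Generator expressions give you the same laziness with comprehension-like syntax, and built-ins like sum() can consume them directly without a list ever being built. A minimal sketch:

# Parentheses instead of square brackets make this a generator expression
squares = (i ** 2 for i in range(100000))

# sum() pulls the values one at a time; no intermediate list is created
print(sum(squares))

# Compare: sum([i ** 2 for i in range(100000)]) would build the whole
# 100,000-element list in memory first, only to discard it after summing.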

 

Conclusion

 
Optimizing Python code doesn’t have to be intimidating. As we’ve seen, small changes in how you approach common programming tasks can yield dramatic improvements in both speed and memory usage. The key is developing an intuition for picking the right tool for each job.

Remember these core principles: use built-in functions when they exist, choose the appropriate data structure for your use case, avoid unnecessary repeated work, and be mindful of how Python handles memory. List comprehensions, sets for membership testing, string joining, and generators for large datasets are all tools that should be in every beginner Python programmer’s toolkit. Keep learning, keep coding!
 
 

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she’s working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.