If you are preparing for a Python developer interview, whether for a junior, mid-level, or senior role, this guide is designed to help you sharpen your understanding of the language from the ground up. Python interviews tend to go beyond syntax trivia. Interviewers want to see that you understand why things work the way they do, not just how to use them. The questions below are organized by difficulty level and cover the concepts that come up most frequently in real-world technical interviews. Each question includes a thorough explanation, a practical code example, and insight into what the interviewer is really testing.
These questions test foundational Python knowledge. You should be able to answer these confidently for any Python role.
Python is a high-level, interpreted, dynamically-typed programming language created by Guido van Rossum. It emphasizes code readability through its clean syntax and significant whitespace. Python is widely used because of its gentle learning curve, massive standard library, and strong ecosystem for web development, data science, automation, and machine learning.
Why interviewers ask this: They want to see that you understand Python’s design philosophy and can articulate its strengths beyond just saying “it’s easy.”
PEP 8 is Python’s official style guide. It defines conventions for naming, indentation, line length, imports, and whitespace. Following PEP 8 matters because Python is a language that values readability, and consistent formatting across a codebase reduces cognitive load for every developer who reads it.
# PEP 8 compliant
def calculate_total_price(unit_price, quantity, tax_rate=0.08):
    """Calculate the total price including tax."""
    subtotal = unit_price * quantity
    return subtotal * (1 + tax_rate)

# Not PEP 8 compliant
def calculateTotalPrice(unitPrice,quantity,taxRate=0.08):
    subtotal=unitPrice*quantity
    return subtotal*(1+taxRate)
Why interviewers ask this: They want to know if you write professional, team-friendly code or if you treat formatting as an afterthought.
Lists are mutable sequences (you can add, remove, or change elements), while tuples are immutable (once created, they cannot be modified). Lists use square brackets and tuples use parentheses. Because tuples are immutable, they are hashable and can be used as dictionary keys. Tuples also have a slight performance advantage due to their fixed size.
my_list = [1, 2, 3]
my_tuple = (1, 2, 3)
my_list[0] = 10 # Valid - lists are mutable
print(my_list) # [10, 2, 3]
# my_tuple[0] = 10 # TypeError: 'tuple' object does not support item assignment
# Tuples can be dictionary keys; lists cannot
coordinates = {(0, 0): "origin", (1, 2): "point_a"}
print(coordinates[(0, 0)]) # "origin"
Why interviewers ask this: This tests whether you understand mutability, which is fundamental to avoiding bugs in Python.
Single-line comments use the # symbol. Multi-line comments are typically done with consecutive # lines or with triple-quoted strings (docstrings). Note that triple-quoted strings used outside of a function or class definition are not true comments; they are string literals that Python evaluates and discards.
# This is a single-line comment
# This is a multi-line comment
# spread across multiple lines
# using the hash symbol
def calculate_area(radius):
    """
    Calculate the area of a circle.

    This is a docstring, not a comment. It becomes
    part of the function's __doc__ attribute.
    """
    import math
    return math.pi * radius ** 2
print(calculate_area.__doc__)
Why interviewers ask this: They are checking whether you understand the difference between comments and docstrings, and whether you use documentation properly.
== compares values (equality). is compares identity (whether two references point to the exact same object in memory). This distinction is critical when working with mutable objects.
a = [1, 2, 3]
b = [1, 2, 3]
c = a

print(a == b)  # True - same values
print(a is b)  # False - different objects in memory
print(a is c)  # True - c references the same object as a

# CPython interns small integers, so this can be surprising:
x = 256
y = 256
print(x is y)  # True - CPython caches integers -5 to 256

x = 257
y = 257
print(x is y)  # False - outside the cached range (in most contexts)
Why interviewers ask this: Confusing is with == is a common source of subtle bugs. Interviewers want to see that you understand object identity vs. equality.
Lambda functions are small, anonymous functions defined with the lambda keyword. They can take any number of arguments but contain only a single expression. They are most useful as short callbacks or key functions passed to higher-order functions like sorted(), map(), or filter().
# Basic lambda
add = lambda x, y: x + y
print(add(3, 5)) # 8
# Practical use: sorting a list of tuples by the second element
students = [("Alice", 88), ("Bob", 95), ("Charlie", 72)]
sorted_students = sorted(students, key=lambda s: s[1], reverse=True)
print(sorted_students)
# [('Bob', 95), ('Alice', 88), ('Charlie', 72)]
# Using with filter
numbers = [1, 2, 3, 4, 5, 6, 7, 8]
evens = list(filter(lambda n: n % 2 == 0, numbers))
print(evens) # [2, 4, 6, 8]
Why interviewers ask this: They want to see if you know when lambdas are appropriate and when a regular function would be clearer.
Python uses try, except, else, and finally blocks for exception handling. The try block contains code that might raise an exception. The except block catches specific exceptions. The else block runs only if no exception was raised. The finally block always runs, regardless of whether an exception occurred.
def divide(a, b):
    try:
        result = a / b
    except ZeroDivisionError:
        print("Cannot divide by zero.")
        return None
    except TypeError as e:
        print(f"Invalid types: {e}")
        return None
    else:
        print(f"Division successful: {result}")
        return result
    finally:
        print("Operation complete.")

divide(10, 2)
# Division successful: 5.0
# Operation complete.

divide(10, 0)
# Cannot divide by zero.
# Operation complete.

# Raising custom exceptions
class InsufficientFundsError(Exception):
    def __init__(self, balance, amount):
        self.balance = balance
        self.amount = amount
        super().__init__(f"Cannot withdraw ${amount}. Balance: ${balance}")

def withdraw(balance, amount):
    if amount > balance:
        raise InsufficientFundsError(balance, amount)
    return balance - amount
Why interviewers ask this: They are testing whether you write defensive code and understand the full exception handling flow, including the often-overlooked else and finally blocks.
The pass statement is a no-op placeholder. It does nothing but satisfies Python’s requirement for a statement in a block. It is commonly used when defining empty classes, functions, or conditional branches that you plan to implement later.
# Placeholder for a function you haven't implemented yet
def process_payment(order):
    pass  # TODO: implement payment processing

# Empty class used as a custom exception
class ValidationError(Exception):
    pass

# Placeholder in conditional logic
status = "pending"
if status == "approved":
    pass  # Handle approved case later
elif status == "rejected":
    print("Order rejected")
Why interviewers ask this: This is a basic syntax question. They want to confirm you understand Python’s block structure.
These questions dig into Python’s internals, patterns, and standard library. Expect these in mid-level and senior interviews.
Both allow you to create sequences from iterables using a concise syntax, but they differ in memory behavior. A list comprehension builds the entire list in memory at once. A generator expression produces values lazily, one at a time, which is far more memory-efficient for large datasets.
import sys
# List comprehension - builds entire list in memory
squares_list = [x ** 2 for x in range(1_000_000)]
print(sys.getsizeof(squares_list)) # ~8 MB
# Generator expression - produces values on demand
squares_gen = (x ** 2 for x in range(1_000_000))
print(sys.getsizeof(squares_gen)) # ~200 bytes (just the generator object)
# Both support filtering
even_squares = [x ** 2 for x in range(20) if x % 2 == 0]
print(even_squares) # [0, 4, 16, 36, 64, 100, 144, 196, 256, 324]
# Dictionary and set comprehensions
names = ["Alice", "Bob", "Charlie", "Alice", "Bob"]
name_lengths = {name: len(name) for name in names}
unique_names = {name for name in names}
print(name_lengths) # {'Alice': 5, 'Bob': 3, 'Charlie': 7}
print(unique_names) # {'Alice', 'Bob', 'Charlie'}
Why interviewers ask this: They want to see if you think about memory efficiency and understand lazy evaluation, which is critical for processing large datasets.
*args collects positional arguments into a tuple. **kwargs collects keyword arguments into a dictionary. Together, they allow functions to accept any number of arguments, which is essential for writing flexible APIs, decorators, and wrapper functions.
def log_call(func_name, *args, **kwargs):
    print(f"Calling {func_name}")
    print(f"  Positional args: {args}")
    print(f"  Keyword args: {kwargs}")

log_call("create_user", "Alice", 30, role="admin", active=True)
# Calling create_user
#   Positional args: ('Alice', 30)
#   Keyword args: {'role': 'admin', 'active': True}

# Common pattern: forwarding arguments to another function
def make_request(method, url, **kwargs):
    timeout = kwargs.pop("timeout", 30)
    retries = kwargs.pop("retries", 3)
    print(f"{method} {url} (timeout={timeout}, retries={retries})")
    print(f"Additional options: {kwargs}")

make_request("GET", "/api/users", timeout=10, verify=False)
# GET /api/users (timeout=10, retries=3)
# Additional options: {'verify': False}
Why interviewers ask this: This is fundamental to writing Pythonic code. If you cannot explain *args and **kwargs, it signals a gap in your understanding of function signatures.
A decorator is a function that takes another function as input and returns a new function that extends or modifies its behavior. Decorators are Python’s implementation of the Decorator pattern and are used extensively in frameworks like Flask, Django, and pytest. The @decorator syntax is syntactic sugar for func = decorator(func).
import functools
import time

# A well-written decorator preserves the original function's metadata
def timing(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f}s")
        return result
    return wrapper

@timing
def slow_function():
    """This function simulates slow work."""
    time.sleep(0.5)
    return "done"

result = slow_function()
# slow_function took 0.5012s

# The @functools.wraps decorator preserves metadata
print(slow_function.__name__)  # "slow_function" (not "wrapper")
print(slow_function.__doc__)   # "This function simulates slow work."

# Decorator with arguments
def retry(max_attempts=3):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    print(f"Attempt {attempt} failed: {e}")
                    if attempt == max_attempts:
                        raise
        return wrapper
    return decorator

@retry(max_attempts=3)
def unreliable_api_call():
    import random
    if random.random() < 0.7:
        raise ConnectionError("Server unavailable")
    return {"status": "ok"}
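The claim that @decorator is just syntactic sugar for func = decorator(func) can be checked directly. A minimal standalone sketch (the shout/greet names are illustrative, not part of the example above) that applies a decorator both ways:

```python
def shout(func):
    def wrapper(*args, **kwargs):
        # Call the wrapped function, then uppercase its result
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"hello, {name}"

# The same thing, written without the @ syntax:
def greet_plain(name):
    return f"hello, {name}"

greet_plain = shout(greet_plain)  # Manual rebinding - what @ does for you

print(greet("alice"))        # HELLO, ALICE
print(greet_plain("alice"))  # HELLO, ALICE
```

Both names now refer to wrapper functions; the @ form simply saves the explicit rebinding step.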
Why interviewers ask this: Decorators are one of Python's most powerful patterns. Interviewers want to see that you understand closures, higher-order functions, and functools.wraps.
An iterator is any object that implements the __iter__ and __next__ methods. A generator is a specific type of iterator created using a function with yield statements. Generators are simpler to write than manual iterators and automatically maintain their state between calls.
# Manual iterator (verbose)
class Countdown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        value = self.current
        self.current -= 1
        return value

# Generator (clean and concise)
def countdown(start):
    while start > 0:
        yield start
        start -= 1

# Both produce the same result
for n in Countdown(5):
    print(n, end=" ")  # 5 4 3 2 1
print()
for n in countdown(5):
    print(n, end=" ")  # 5 4 3 2 1

# Generators are lazy - great for large or infinite sequences
def fibonacci():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# Get the first 10 Fibonacci numbers
import itertools
first_10 = list(itertools.islice(fibonacci(), 10))
print(first_10)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
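The data-pipeline use case is worth sketching: each stage below is a small generator, so values flow through one at a time and no intermediate lists are materialized (the stage names are illustrative):

```python
def strip_lines(lines):
    # Stage 1: trim whitespace lazily
    for line in lines:
        yield line.strip()

def drop_empty(lines):
    # Stage 2: filter out blank lines
    for line in lines:
        if line:
            yield line

def to_ints(lines):
    # Stage 3: parse each surviving line
    for line in lines:
        yield int(line)

raw = ["10\n", "\n", "20\n", "  30\n"]
pipeline = to_ints(drop_empty(strip_lines(raw)))  # Nothing runs yet
print(list(pipeline))  # [10, 20, 30]
```

Because every stage is lazy, the same pipeline works unchanged on a file object or a network stream of millions of lines.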
Why interviewers ask this: Generators reveal your understanding of lazy evaluation, memory management, and the iterator protocol. Senior developers use them heavily for data pipelines.
Context managers handle resource setup and teardown automatically. The with statement guarantees that cleanup code runs even if an exception occurs. You can create context managers using the __enter__/__exit__ protocol or the contextlib.contextmanager decorator.
from contextlib import contextmanager

# Using the with statement for file handling
with open("example.txt", "w") as f:
    f.write("Hello, World!")
# File is automatically closed here, even if an exception occurred

# Custom context manager using a class
class DatabaseConnection:
    def __init__(self, connection_string):
        self.connection_string = connection_string
        self.connection = None

    def __enter__(self):
        print(f"Connecting to {self.connection_string}")
        self.connection = {"status": "connected"}  # Simulated
        return self.connection

    def __exit__(self, exc_type, exc_val, exc_tb):
        print("Closing database connection")
        self.connection = None
        return False  # Do not suppress exceptions

with DatabaseConnection("postgresql://localhost/mydb") as conn:
    print(f"Connection status: {conn['status']}")

# Custom context manager using a generator (simpler)
@contextmanager
def timer(label):
    import time
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"{label}: {elapsed:.4f}s")

with timer("Data processing"):
    total = sum(range(1_000_000))
Why interviewers ask this: Context managers are essential for resource management. Interviewers want to know if you handle connections, locks, and files safely.
__str__ returns a human-readable string intended for end users. __repr__ returns an unambiguous string intended for developers, ideally one that could recreate the object. When you call print(), Python uses __str__. When you inspect an object in the REPL or in a debugger, Python uses __repr__. If __str__ is not defined, Python falls back to __repr__.
class Money:
    def __init__(self, amount, currency="USD"):
        self.amount = amount
        self.currency = currency

    def __str__(self):
        return f"${self.amount:.2f} {self.currency}"

    def __repr__(self):
        return f"Money({self.amount!r}, {self.currency!r})"

price = Money(19.99)
print(str(price))   # $19.99 USD (for end users)
print(repr(price))  # Money(19.99, 'USD') (for developers)

# In a list, Python uses __repr__
prices = [Money(9.99), Money(24.99, "EUR")]
print(prices)  # [Money(9.99, 'USD'), Money(24.99, 'EUR')]
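The fallback rule is easy to demonstrate: a hypothetical class that defines only __repr__ still prints sensibly, because str() falls back to __repr__ when __str__ is missing:

```python
class Point:
    def __repr__(self):
        return "Point(1, 2)"

p = Point()
# No __str__ defined, so both str() and print() use __repr__
print(str(p))  # Point(1, 2)
print(p)       # Point(1, 2)
```

This is why defining a good __repr__ alone is often enough, while defining only __str__ leaves you with unhelpful debugger output.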
Why interviewers ask this: This checks whether you write classes that are easy to debug and log. Good __repr__ implementations save hours of debugging time.
A shallow copy creates a new object but inserts references to the same nested objects. A deep copy creates a new object and recursively copies all nested objects. This distinction matters when you have mutable objects nested inside other mutable objects.
import copy

# Shallow copy
original = [[1, 2, 3], [4, 5, 6]]
shallow = copy.copy(original)
shallow[0][0] = 999
print(original[0][0])  # 999 - the nested list is shared!

# Deep copy
original = [[1, 2, 3], [4, 5, 6]]
deep = copy.deepcopy(original)
deep[0][0] = 999
print(original[0][0])  # 1 - completely independent copy

# Common shallow copy shortcuts
my_list = [1, 2, 3]
copy_1 = my_list[:]      # Slice
copy_2 = list(my_list)   # Constructor
copy_3 = my_list.copy()  # .copy() method
# All three are shallow copies
# For flat lists (no nested mutables), shallow copy is fine
Why interviewers ask this: Confusing shallow and deep copies causes some of the most frustrating bugs in Python. This question tests whether you understand reference semantics.
A class is a blueprint that defines attributes and methods. An object (or instance) is a specific realization of that blueprint with actual data. In Python, classes are themselves objects (everything in Python is an object), which is why you can pass classes around as arguments and store them in variables.
class BankAccount:
    """A class is the blueprint."""

    interest_rate = 0.02  # Class attribute - shared by all instances

    def __init__(self, owner, balance=0):
        self.owner = owner  # Instance attribute - unique to each object
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance

    def __repr__(self):
        return f"BankAccount({self.owner!r}, balance={self.balance})"

# Objects are instances of the class
account_1 = BankAccount("Alice", 1000)
account_2 = BankAccount("Bob", 500)
account_1.deposit(250)
print(account_1)  # BankAccount('Alice', balance=1250)
print(account_2)  # BankAccount('Bob', balance=500)

# Both share the class attribute
print(account_1.interest_rate)  # 0.02
print(account_2.interest_rate)  # 0.02
Why interviewers ask this: This is foundational OOP. They want to confirm you understand instantiation and the relationship between class-level and instance-level attributes.
Python supports single inheritance, multiple inheritance, and multilevel inheritance. The super() function delegates method calls to a parent class in the Method Resolution Order (MRO). Python uses the C3 linearization algorithm to determine the MRO, which prevents the diamond problem ambiguity found in some other languages.
# Single inheritance
class Animal:
    def __init__(self, name):
        self.name = name

    def speak(self):
        raise NotImplementedError("Subclasses must implement speak()")

class Dog(Animal):
    def speak(self):
        return f"{self.name} says Woof!"

class Cat(Animal):
    def speak(self):
        return f"{self.name} says Meow!"

# Multiple inheritance
class Pet:
    def __init__(self, owner):
        self.owner = owner

class PetDog(Dog, Pet):
    def __init__(self, name, owner):
        Dog.__init__(self, name)
        Pet.__init__(self, owner)

    def info(self):
        return f"{self.name} belongs to {self.owner}"

buddy = PetDog("Buddy", "Alice")
print(buddy.speak())  # Buddy says Woof!
print(buddy.info())   # Buddy belongs to Alice

# Check the Method Resolution Order
print(PetDog.__mro__)
# (PetDog, Dog, Animal, Pet, object)
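The example above calls each parent's __init__ explicitly; a cooperative alternative uses super() so that every class in the MRO runs exactly once, with keyword arguments forwarded along the chain. A sketch with hypothetical mixin names:

```python
class Named:
    def __init__(self, name, **kwargs):
        self.name = name
        super().__init__(**kwargs)  # Forward remaining kwargs along the MRO

class Owned:
    def __init__(self, owner, **kwargs):
        self.owner = owner
        super().__init__(**kwargs)

class HousePet(Named, Owned):
    def __init__(self, name, owner):
        # One super() call initializes the whole chain: Named -> Owned -> object
        super().__init__(name=name, owner=owner)

buddy = HousePet("Buddy", "Alice")
print(buddy.name, buddy.owner)  # Buddy Alice
print([c.__name__ for c in HousePet.__mro__])
# ['HousePet', 'Named', 'Owned', 'object']
```

The cooperative style matters in diamond hierarchies, where explicit Parent.__init__ calls can run a shared ancestor's __init__ twice.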
Why interviewers ask this: They want to verify you understand the MRO and can reason about method resolution in complex inheritance hierarchies.
Always use the with statement for file operations to guarantee proper resource cleanup. Python supports reading, writing, and appending in both text and binary modes.
# Writing to a file
with open("output.txt", "w") as f:
    f.write("Line 1\n")
    f.write("Line 2\n")

# Reading the entire file
with open("output.txt", "r") as f:
    content = f.read()
print(content)

# Reading line by line (memory efficient for large files)
with open("output.txt", "r") as f:
    for line in f:
        print(line.strip())

# Appending to a file
with open("output.txt", "a") as f:
    f.write("Line 3\n")

# Working with JSON
import json

data = {"name": "Alice", "scores": [95, 87, 92]}
with open("data.json", "w") as f:
    json.dump(data, f, indent=2)

with open("data.json", "r") as f:
    loaded = json.load(f)
print(loaded["name"])  # Alice
Why interviewers ask this: File handling is a daily task. They want to see that you use context managers and know the difference between read modes.
These questions test deep understanding of Python internals, concurrency, design patterns, and performance. They separate experienced developers from those who have only scratched the surface.
The GIL is a mutex in CPython that allows only one thread to execute Python bytecode at a time. It exists because CPython's memory management (reference counting) is not thread-safe. The GIL means that CPU-bound multi-threaded Python programs do not achieve true parallelism. However, the GIL is released during I/O operations, so multi-threaded programs that are I/O-bound (network calls, file reads) can still benefit from threading.
import threading
import time

# CPU-bound task - GIL prevents true parallelism with threads
def cpu_bound(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

# Single-threaded
start = time.perf_counter()
cpu_bound(10_000_000)
cpu_bound(10_000_000)
single_time = time.perf_counter() - start
print(f"Single-threaded: {single_time:.2f}s")

# Multi-threaded (NOT faster due to the GIL)
start = time.perf_counter()
t1 = threading.Thread(target=cpu_bound, args=(10_000_000,))
t2 = threading.Thread(target=cpu_bound, args=(10_000_000,))
t1.start()
t2.start()
t1.join()
t2.join()
threaded_time = time.perf_counter() - start
print(f"Multi-threaded: {threaded_time:.2f}s")  # Similar or slower!
Why interviewers ask this: The GIL is one of the most important things to understand about CPython's concurrency model. Senior developers must know when to use threads vs. processes.
Use threading for I/O-bound tasks (waiting for network responses, reading files, database queries) because the GIL is released during I/O. Use multiprocessing for CPU-bound tasks (data processing, computation) because each process has its own Python interpreter and GIL, enabling true parallelism across CPU cores.
import threading
import multiprocessing
import time

import requests

# I/O-bound: threading is effective
def fetch_url(url):
    response = requests.get(url, timeout=5)
    return len(response.content)

urls = ["https://example.com"] * 5

# Threaded I/O (fast - threads release the GIL during network I/O)
start = time.perf_counter()
threads = [threading.Thread(target=fetch_url, args=(url,)) for url in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"Threaded I/O: {time.perf_counter() - start:.2f}s")

# CPU-bound: multiprocessing achieves true parallelism
def heavy_computation(n):
    return sum(i * i for i in range(n))

# Using a multiprocessing Pool
if __name__ == "__main__":
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(heavy_computation, [5_000_000] * 4)
    print(f"Results: {[r // 1_000_000 for r in results]}")
Why interviewers ask this: This tests whether you can design concurrent systems appropriately. Choosing the wrong concurrency model leads to performance problems or bugs.
Python uses two mechanisms for memory management. The primary mechanism is reference counting: every object has a count of references pointing to it, and when that count reaches zero, the memory is immediately freed. The secondary mechanism is a cyclic garbage collector that detects and cleans up reference cycles (objects that reference each other but are no longer reachable from the program).
import sys
import gc

# Reference counting
a = [1, 2, 3]
print(sys.getrefcount(a))  # 2 (one for 'a', one for getrefcount's argument)
b = a
print(sys.getrefcount(a))  # 3
del b
print(sys.getrefcount(a))  # 2

# Circular references require the garbage collector
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

# Create a circular reference
node1 = Node(1)
node2 = Node(2)
node1.next = node2
node2.next = node1  # Circular!

# Even after deleting references, refcount won't reach 0
del node1, node2
# The cyclic GC will eventually clean this up

# You can manually trigger garbage collection
collected = gc.collect()
print(f"Garbage collector freed {collected} objects")

# Check GC thresholds
print(gc.get_threshold())  # (700, 10, 10) - default thresholds
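Beyond relying on the cyclic collector, a common way to avoid creating cycles in the first place is to hold the back-reference as a weakref, which does not increase the refcount. A sketch (TreeNode and its parent property are hypothetical names, not from the example above):

```python
import weakref

class TreeNode:
    def __init__(self, value):
        self.value = value
        self._parent = None  # Will hold a weakref, not a strong reference

    @property
    def parent(self):
        # Dereference the weakref; returns None once the parent is gone
        return self._parent() if self._parent is not None else None

    @parent.setter
    def parent(self, node):
        self._parent = weakref.ref(node)

root = TreeNode("root")
child = TreeNode("child")
child.parent = root          # Back-reference without a cycle
print(child.parent.value)    # root

del root                     # Refcount hits zero; CPython frees it immediately
print(child.parent)          # None - the weakref did not keep root alive
```

Because child never holds a strong reference back to root, no cycle forms and reference counting alone reclaims the memory.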
Why interviewers ask this: Senior developers need to understand memory behavior to write scalable applications and diagnose memory leaks.
By default, Python objects store their attributes in a __dict__ dictionary, which is flexible but memory-intensive. Defining __slots__ tells Python to use a fixed-size internal structure instead. This saves significant memory when creating millions of instances and provides slightly faster attribute access. The tradeoff is that you cannot add arbitrary attributes to instances.
import sys

class PointDict:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class PointSlots:
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

# Memory comparison
p1 = PointDict(1, 2)
p2 = PointSlots(1, 2)
print(sys.getsizeof(p1) + sys.getsizeof(p1.__dict__))  # ~200 bytes
print(sys.getsizeof(p2))  # ~56 bytes

# __slots__ prevents adding arbitrary attributes
# p2.z = 3  # AttributeError: 'PointSlots' object has no attribute 'z'
Why interviewers ask this: This tests your understanding of Python's object model and your ability to optimize memory usage for performance-critical applications.
A metaclass is the class of a class. Just as a class defines how an instance behaves, a metaclass defines how a class behaves. The default metaclass is type. Metaclasses are an advanced feature used in frameworks (like Django's ORM and SQLAlchemy) to customize class creation, enforce constraints, or register classes automatically.
# Every class is an instance of 'type'
print(type(int))  # <class 'type'>
print(type(str))  # <class 'type'>

# Custom metaclass
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class Database(metaclass=SingletonMeta):
    def __init__(self):
        self.connection = "connected"
        print("Database initialized")

# Only one instance is ever created
db1 = Database()  # "Database initialized"
db2 = Database()  # No output - returns existing instance
print(db1 is db2)  # True
Why interviewers ask this: Metaclasses are rarely needed in everyday code, but understanding them demonstrates deep knowledge of Python's object model. Senior candidates should at least be able to explain what they are.
Descriptors are objects that define __get__, __set__, or __delete__ methods. They control what happens when an attribute is accessed, set, or deleted on another object. Properties, class methods, and static methods are all implemented using descriptors under the hood.
class Validated:
    """A descriptor that validates assigned values."""

    def __init__(self, min_value=None, max_value=None):
        self.min_value = min_value
        self.max_value = max_value

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, f"_{self.name}", None)

    def __set__(self, obj, value):
        if self.min_value is not None and value < self.min_value:
            raise ValueError(f"{self.name} must be >= {self.min_value}")
        if self.max_value is not None and value > self.max_value:
            raise ValueError(f"{self.name} must be <= {self.max_value}")
        setattr(obj, f"_{self.name}", value)

class Product:
    price = Validated(min_value=0)
    quantity = Validated(min_value=0, max_value=10000)

    def __init__(self, name, price, quantity):
        self.name = name
        self.price = price  # Triggers Validated.__set__
        self.quantity = quantity

item = Product("Widget", 9.99, 100)
print(item.price)  # 9.99
# item.price = -5  # ValueError: price must be >= 0
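Since @property is itself implemented as a descriptor, the same kind of validation can be written as a property when only a single attribute needs it. A sketch using a hypothetical PricedItem class:

```python
class PricedItem:
    def __init__(self, price):
        self.price = price  # Routed through the property setter below

    @property
    def price(self):
        return self._price

    @price.setter
    def price(self, value):
        # Same validation as the descriptor, scoped to this one attribute
        if value < 0:
            raise ValueError("price must be >= 0")
        self._price = value

item = PricedItem(9.99)
print(item.price)  # 9.99

try:
    item.price = -5
except ValueError as e:
    print(e)  # price must be >= 0
```

A custom descriptor pays off when the same validation must be reused across many attributes or classes; a property is simpler for a one-off.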
Why interviewers ask this: Descriptors are the mechanism behind @property, @classmethod, and @staticmethod. Understanding them shows you grasp how Python's attribute access works internally.
unittest is Python's built-in testing framework, modeled after Java's JUnit. It requires subclassing TestCase and using assertion methods like assertEqual(). pytest is a third-party framework that uses plain assert statements, has a powerful fixture system, and supports plugins for parallel execution, coverage, and more. Most modern Python projects prefer pytest.
# unittest style
import unittest

class TestCalculator(unittest.TestCase):
    def setUp(self):
        self.calc_data = [1, 2, 3, 4, 5]

    def test_sum(self):
        self.assertEqual(sum(self.calc_data), 15)

    def test_max(self):
        self.assertEqual(max(self.calc_data), 5)

# pytest style (much cleaner)
import pytest

@pytest.fixture
def calc_data():
    return [1, 2, 3, 4, 5]

def test_sum(calc_data):
    assert sum(calc_data) == 15

def test_max(calc_data):
    assert max(calc_data) == 5

# pytest parametrize - test multiple inputs cleanly
@pytest.mark.parametrize("input_val, expected", [
    (1, 1),
    (2, 4),
    (3, 9),
    (4, 16),
])
def test_square(input_val, expected):
    assert input_val ** 2 == expected
Why interviewers ask this: Testing is non-negotiable in professional software development. They want to see that you have hands-on experience writing tests, not just running them.
Virtual environments create isolated Python installations where you can install packages without affecting the system Python or other projects. This prevents dependency conflicts and ensures reproducible builds. Every professional Python project should use one.
# Creating and using a virtual environment
# $ python3 -m venv myproject_env
# $ source myproject_env/bin/activate   (Linux/Mac)
# $ myproject_env\Scripts\activate      (Windows)

# Inside the venv, pip installs packages locally
# $ pip install requests flask
# $ pip freeze > requirements.txt

# requirements.txt captures exact versions
# requests==2.31.0
# flask==3.0.0

# Another developer reproduces the environment
# $ python3 -m venv myproject_env
# $ source myproject_env/bin/activate
# $ pip install -r requirements.txt
Why interviewers ask this: If you cannot explain virtual environments, it signals that you have not worked on professional Python projects with dependency management.
Magic methods (or dunder methods, short for "double underscore") are special methods that Python calls implicitly. They let your objects work with built-in operators and functions. Some important ones beyond __init__, __str__, and __repr__:
class Vector:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __add__(self, other):
        return Vector(self.x + other.x, self.y + other.y)

    def __mul__(self, scalar):
        return Vector(self.x * scalar, self.y * scalar)

    def __abs__(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

    def __eq__(self, other):
        return self.x == other.x and self.y == other.y

    def __len__(self):
        return 2  # A 2D vector always has 2 components

    def __getitem__(self, index):
        if index == 0:
            return self.x
        elif index == 1:
            return self.y
        raise IndexError("Vector index out of range")

    def __repr__(self):
        return f"Vector({self.x}, {self.y})"

v1 = Vector(3, 4)
v2 = Vector(1, 2)
print(v1 + v2)   # Vector(4, 6) - uses __add__
print(v1 * 3)    # Vector(9, 12) - uses __mul__
print(abs(v1))   # 5.0 - uses __abs__
print(v1 == v2)  # False - uses __eq__
print(len(v1))   # 2 - uses __len__
print(v1[0])     # 3 - uses __getitem__
Why interviewers ask this: Dunder methods define the Pythonic way to build objects that integrate seamlessly with the language. Mastery of these separates Python developers from people who write Python-flavored Java.
The async/await syntax enables cooperative multitasking for I/O-bound operations using a single thread. Unlike threads, coroutines give up control explicitly at await points, which avoids race conditions. Use asyncio when you need to handle many concurrent I/O operations (web servers, API clients, chat systems).
import asyncio

async def fetch_data(url, delay):
    """Simulate an async HTTP request."""
    print(f"Fetching {url}...")
    await asyncio.sleep(delay)  # Non-blocking sleep
    print(f"Done fetching {url}")
    return {"url": url, "status": 200}

async def main():
    # Run multiple I/O operations concurrently
    tasks = [
        fetch_data("https://api.example.com/users", 2),
        fetch_data("https://api.example.com/orders", 1),
        fetch_data("https://api.example.com/products", 3),
    ]
    # asyncio.gather runs all tasks concurrently
    results = await asyncio.gather(*tasks)
    for result in results:
        print(f"  {result['url']} -> {result['status']}")

# Total time: ~3 seconds (not 6), because tasks run concurrently
asyncio.run(main())
Why interviewers ask this: Async programming is essential for high-performance Python applications. Interviewers want to see that you understand the event loop and know when async is the right tool.
A few final habits that signal experience:
- Use f-strings instead of string concatenation. These details signal experience.
- Reach for yield for lazy iteration and with for resource management.
- Write decorators with functools.wraps, including decorators that accept arguments.
- Know both unittest and pytest, and be able to write fixtures and parameterized tests.
- Deep topics like descriptors, metaclasses, and __slots__ are what separate Python developers from Python users.
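As a quick illustration of the f-string point, compare the two styles side by side (values are illustrative):

```python
name = "Alice"
score = 92.5

# Concatenation: noisy, and every non-string needs a manual str() call
message_concat = "User " + name + " scored " + str(score) + "%"

# f-string: shorter, evaluated in place, and supports format specifiers
message_fstring = f"User {name} scored {score:.1f}%"

print(message_concat)   # User Alice scored 92.5%
print(message_fstring)  # User Alice scored 92.5%
```

The :.1f format spec is the kind of detail concatenation cannot express without an extra format() call, which is why f-strings read as the more experienced choice.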