If you have spent any time reading Python code — whether it is a Flask web app, a Django project, or a well-tested library — you have seen the @ symbol sitting above function definitions. That is a decorator, and it is one of the most powerful and elegant features in the Python language. Decorators let you modify or extend the behavior of functions and classes without changing their source code. They are the backbone of cross-cutting concerns like logging, authentication, caching, rate limiting, and input validation. Once you truly understand decorators, you will write cleaner, more reusable, and more Pythonic code.
In this tutorial, we will build decorators from the ground up — starting with the prerequisite concepts, moving through simple and advanced patterns, and finishing with real-world examples you can drop into production code today.
Before we dive into decorators, you need to be comfortable with two foundational concepts: first-class functions and closures. If you have read the Python – Function tutorial, you already know that Python functions are first-class objects. Here is a quick recap.
First-class functions mean you can assign functions to variables, pass them as arguments, and return them from other functions — just like any other value.
def greet(name):
    return f"Hello, {name}!"

# Assign to a variable
say_hello = greet
print(say_hello("Folau"))  # Hello, Folau!

# Pass as an argument
def call_func(func, arg):
    return func(arg)

print(call_func(greet, "World"))  # Hello, World!
A closure is a function that remembers the variables from the enclosing scope even after that scope has finished executing. This is what makes decorators possible.
def make_greeter(greeting):
    def greeter(name):
        return f"{greeting}, {name}!"
    return greeter

hello = make_greeter("Hello")
good_morning = make_greeter("Good morning")
print(hello("Folau"))         # Hello, Folau!
print(good_morning("Folau"))  # Good morning, Folau!
The inner function greeter “closes over” the greeting variable. Even after make_greeter returns, the inner function retains access to greeting. This is exactly the mechanism decorators rely on.
A decorator is simply a function that takes another function as its argument, wraps it with additional behavior, and returns the wrapper. Let us build one step by step.
def my_decorator(func):
    def wrapper():
        print("Something is happening before the function is called.")
        func()
        print("Something is happening after the function is called.")
    return wrapper

def say_hello():
    print("Hello!")

# Manually apply the decorator
say_hello = my_decorator(say_hello)
say_hello()
# Output:
# Something is happening before the function is called.
# Hello!
# Something is happening after the function is called.
Here is what happens: my_decorator receives the original say_hello function, defines a wrapper that adds behavior before and after calling func(), and returns that wrapper. When we reassign say_hello = my_decorator(say_hello), the name say_hello now points to wrapper. Every subsequent call to say_hello() runs the wrapper code.
Writing say_hello = my_decorator(say_hello) every time is verbose. Python provides syntactic sugar with the @ symbol. The following two approaches are identical.
# Without @ syntax
def say_hello():
    print("Hello!")
say_hello = my_decorator(say_hello)

# With @ syntax (identical behavior)
@my_decorator
def say_hello():
    print("Hello!")
The @my_decorator line is just shorthand. When Python sees it, it calls my_decorator(say_hello) and rebinds the name say_hello to whatever the decorator returns. Clean, readable, and Pythonic.
Of course, most real functions accept arguments. A proper decorator must handle arbitrary arguments using *args and **kwargs.
def my_decorator(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        result = func(*args, **kwargs)
        print(f"{func.__name__} returned {result}")
        return result
    return wrapper

@my_decorator
def add(a, b):
    return a + b

print(add(3, 5))
# Output:
# Calling add
# add returned 8
# 8
By accepting *args and **kwargs, the wrapper forwards any positional and keyword arguments to the original function. Always capture and return the result of func(*args, **kwargs) — otherwise you will silently swallow the return value.
There is a subtle problem with our decorator. After decoration, the function’s __name__, __doc__, and other metadata point to the wrapper, not the original function.
def my_decorator(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@my_decorator
def say_hello():
    """Greet the user."""
    print("Hello!")

print(say_hello.__name__)  # wrapper (not 'say_hello'!)
print(say_hello.__doc__)   # None (not 'Greet the user.'!)
This breaks introspection, help() output, debugging tools, and any framework that relies on function names (like Flask route registration). The fix is functools.wraps, which copies the original function’s metadata onto the wrapper.
import functools

def my_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@my_decorator
def say_hello():
    """Greet the user."""
    print("Hello!")

print(say_hello.__name__)  # say_hello
print(say_hello.__doc__)   # Greet the user.
Always use @functools.wraps(func) in every decorator you write. This is non-negotiable. It preserves __name__, __doc__, __module__, __qualname__, __dict__, and __wrapped__ (which gives access to the original unwrapped function).
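For instance, __wrapped__ lets a test or debugger bypass the wrapper entirely. A small sketch (my_decorator and square are illustrative names):

```python
import functools

def my_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("wrapped call")
        return func(*args, **kwargs)
    return wrapper

@my_decorator
def square(n):
    """Return n squared."""
    return n * n

print(square.__name__)        # square
print(square.__wrapped__(4))  # 16 -- calls the original, skipping the wrapper
```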
Sometimes you need to configure a decorator. For example, you might want a retry decorator where you specify the number of retries, or a logging decorator where you specify the log level. This requires an extra layer of nesting — a function that returns a decorator.
import functools

def repeat(n):
    """Decorator that calls the function n times."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(n):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

@repeat(3)
def say_hello(name):
    print(f"Hello, {name}!")

say_hello("Folau")
# Output:
# Hello, Folau!
# Hello, Folau!
# Hello, Folau!
Here is the flow: repeat(3) is called first and returns decorator. Then Python calls decorator(say_hello), which returns wrapper. The name say_hello is rebound to wrapper. The triple nesting — outer function, decorator, wrapper — is the standard pattern for parameterized decorators.
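To make the flow concrete, the @repeat(3) line desugars into two explicit calls (the repeat definition from above is repeated so the snippet is self-contained):

```python
import functools

def repeat(n):
    """Call the decorated function n times (same as the version above)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(n):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

def say_hello(name):
    print(f"Hello, {name}!")

# @repeat(3) desugars into exactly these two calls:
say_hello = repeat(3)(say_hello)
say_hello("Folau")  # prints the greeting three times
```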
Another practical example: a decorator that controls the log level.
import functools
import logging

def log_calls(level=logging.INFO):
    """Decorator that logs function calls at the specified level."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            logger = logging.getLogger(func.__module__)
            logger.log(level, f"Calling {func.__name__} with args={args}, kwargs={kwargs}")
            result = func(*args, **kwargs)
            logger.log(level, f"{func.__name__} returned {result}")
            return result
        return wrapper
    return decorator

@log_calls(level=logging.DEBUG)
def process_data(data):
    return [x * 2 for x in data]
You can also implement decorators as classes by defining the __call__ method. This is useful when the decorator needs to maintain state across calls or when the logic is complex enough that a class provides better organization.
import functools

class CountCalls:
    """Decorator that counts how many times a function is called."""
    def __init__(self, func):
        functools.update_wrapper(self, func)
        self.func = func
        self.call_count = 0

    def __call__(self, *args, **kwargs):
        self.call_count += 1
        print(f"{self.func.__name__} has been called {self.call_count} time(s)")
        return self.func(*args, **kwargs)

@CountCalls
def say_hello(name):
    print(f"Hello, {name}!")

say_hello("Folau")
say_hello("World")
say_hello("Python")
# Output:
# say_hello has been called 1 time(s)
# Hello, Folau!
# say_hello has been called 2 time(s)
# Hello, World!
# say_hello has been called 3 time(s)
# Hello, Python!

print(say_hello.call_count)  # 3
Notice we use functools.update_wrapper(self, func) in __init__ instead of @functools.wraps (which is designed for functions, not classes). The effect is the same — it copies over __name__, __doc__, and other attributes.
Class-based decorators with arguments require a slightly different pattern:
import functools
import time

class RateLimit:
    """Decorator that limits how often a function can be called."""
    def __init__(self, max_calls, period=60):
        self.max_calls = max_calls
        self.period = period
        self.calls = []

    def __call__(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            now = time.time()
            # Remove calls outside the time window
            self.calls = [t for t in self.calls if now - t < self.period]
            if len(self.calls) >= self.max_calls:
                raise RuntimeError(
                    f"Rate limit exceeded: {self.max_calls} calls per {self.period}s"
                )
            self.calls.append(now)
            return func(*args, **kwargs)
        return wrapper

@RateLimit(max_calls=5, period=60)
def api_request(endpoint):
    print(f"Requesting {endpoint}")
    return {"status": "ok"}
When the decorator takes arguments (@RateLimit(max_calls=5, period=60)), __init__ receives the arguments and __call__ receives the function. When there are no arguments (@CountCalls), __init__ receives the function directly.
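A side-by-side sketch of the two call sequences (NoArgs and WithArgs are illustrative names, not the classes above):

```python
class NoArgs:
    """The @NoArgs pattern: __init__ receives the function directly."""
    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)

class WithArgs:
    """The @WithArgs(...) pattern: __init__ gets the arguments, __call__ gets the function."""
    def __init__(self, label):
        self.label = label

    def __call__(self, func):
        def wrapper(*args, **kwargs):
            return f"{self.label}:{func(*args, **kwargs)}"
        return wrapper

def greet():
    return "hi"

a = NoArgs(greet)           # what @NoArgs expands to
b = WithArgs("log")(greet)  # what @WithArgs("log") expands to
print(a(), b())             # hi log:hi
```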
Python ships with several decorators that you should know and use regularly.
The @property decorator turns a method into an attribute-style accessor, enabling getter/setter patterns without changing the calling syntax.
import math

class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def radius(self):
        return self._radius

    @radius.setter
    def radius(self, value):
        if value < 0:
            raise ValueError("Radius cannot be negative")
        self._radius = value

    @property
    def area(self):
        return math.pi * self._radius ** 2

c = Circle(5)
print(c.radius)  # 5
print(c.area)    # 78.5398...
c.radius = 10    # Uses the setter
print(c.area)    # 314.1592...
# c.radius = -1  # Raises ValueError
@classmethod receives the class as its first argument instead of an instance. It is commonly used for alternative constructors. @staticmethod does not receive the instance or the class — it is just a regular function namespaced inside the class.
class User:
    def __init__(self, name, email):
        self.name = name
        self.email = email

    @classmethod
    def from_dict(cls, data):
        """Alternative constructor from a dictionary."""
        return cls(data["name"], data["email"])

    @classmethod
    def from_string(cls, user_string):
        """Alternative constructor from 'name:email' format."""
        name, email = user_string.split(":")
        return cls(name.strip(), email.strip())

    @staticmethod
    def is_valid_email(email):
        """Validate email format (no instance or class needed)."""
        return "@" in email and "." in email

# Using class methods
user1 = User.from_dict({"name": "Folau", "email": "folau@example.com"})
user2 = User.from_string("Folau : folau@example.com")
print(user1.name)  # Folau
print(user2.name)  # Folau

# Using the static method
print(User.is_valid_email("folau@example.com"))  # True
print(User.is_valid_email("invalid"))            # False
The @functools.lru_cache decorator caches the return values of a function based on its arguments. This is incredibly useful for expensive computations and recursive algorithms.
import functools

@functools.lru_cache(maxsize=128)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

# Without caching, this would take exponential time
print(fibonacci(50))   # 12586269025
print(fibonacci(100))  # 354224848179261915075

# Inspect cache statistics
print(fibonacci.cache_info())
# CacheInfo(hits=99, misses=101, maxsize=128, currsize=101)
Since Python 3.9, you can also use @functools.cache as a simpler unbounded cache (equivalent to @lru_cache(maxsize=None)).
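For example, the fibonacci function above can be rewritten with @functools.cache on Python 3.9 or later:

```python
import functools

@functools.cache  # unbounded cache; Python 3.9+
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(100))  # 354224848179261915075
```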
You can apply multiple decorators to a single function by stacking them. Decorators are applied bottom-up (the one closest to the function wraps it first), and at call time execution flows top-down through the wrappers.
import functools

def bold(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return f"<b>{func(*args, **kwargs)}</b>"
    return wrapper

def italic(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return f"<i>{func(*args, **kwargs)}</i>"
    return wrapper

@bold
@italic
def greet(name):
    return f"Hello, {name}"

print(greet("Folau"))
# Output: <b><i>Hello, Folau</i></b>
This is equivalent to greet = bold(italic(greet)). The italic decorator wraps the original function first, then bold wraps the result. When you call greet("Folau"), execution flows through bold's wrapper, then italic's wrapper, then the original function.
A more practical example: combining a timer and a logger.
import functools
import time

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f} seconds")
        return result
    return wrapper

def logger(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"[LOG] Calling {func.__name__}({args}, {kwargs})")
        result = func(*args, **kwargs)
        print(f"[LOG] {func.__name__} returned {result}")
        return result
    return wrapper

@timer
@logger
def compute_sum(n):
    """Compute the sum of numbers from 0 to n."""
    return sum(range(n + 1))

compute_sum(1000000)
# Output:
# [LOG] Calling compute_sum((1000000,), {})
# [LOG] compute_sum returned 500000500000
# compute_sum took 0.0312 seconds
The order matters. Here, logger runs inside timer, so the timer measures the logging overhead as well as the function execution. If you swap them, timer runs inside logger: the timer then measures only the function itself, and its timing line prints between the two log lines.
Now let us build decorators you will actually use in real projects. Each one solves a common cross-cutting concern.
A timer decorator measures how long a function takes to execute. Essential for performance profiling.
import functools
import time

def timer(func):
    """Print the execution time of the decorated function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start_time = time.perf_counter()
        result = func(*args, **kwargs)
        end_time = time.perf_counter()
        elapsed = end_time - start_time
        print(f"[TIMER] {func.__name__} executed in {elapsed:.6f} seconds")
        return result
    return wrapper

@timer
def slow_computation(n):
    """Simulate a slow computation."""
    total = 0
    for i in range(n):
        total += i ** 2
    return total

result = slow_computation(1_000_000)
# [TIMER] slow_computation executed in 0.142356 seconds
print(result)
A logging decorator automatically logs every function call with its arguments and return value.
import functools
import logging

logging.basicConfig(level=logging.DEBUG)

def log_calls(func):
    """Log function calls, arguments, and return values."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        args_repr = [repr(a) for a in args]
        kwargs_repr = [f"{k}={v!r}" for k, v in kwargs.items()]
        signature = ", ".join(args_repr + kwargs_repr)
        logging.info(f"Calling {func.__name__}({signature})")
        try:
            result = func(*args, **kwargs)
            logging.info(f"{func.__name__} returned {result!r}")
            return result
        except Exception as e:
            logging.exception(f"{func.__name__} raised {type(e).__name__}: {e}")
            raise
    return wrapper

@log_calls
def divide(a, b):
    return a / b

divide(10, 3)  # INFO: Calling divide(10, 3)
               # INFO: divide returned 3.3333333333333335
divide(10, 0)  # INFO: Calling divide(10, 0)
               # ERROR: divide raised ZeroDivisionError: division by zero
A retry decorator retries a function on failure with increasing wait times. Perfect for network calls, API requests, and database connections.
import functools
import random
import time

def retry(max_retries=3, base_delay=1, backoff_factor=2, exceptions=(Exception,)):
    """Retry a function with exponential backoff on failure."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exception = None
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions as e:
                    last_exception = e
                    if attempt < max_retries:
                        # Exponential backoff with jitter
                        delay = base_delay * (backoff_factor ** attempt)
                        jitter = random.uniform(0, delay * 0.1)
                        wait_time = delay + jitter
                        print(
                            f"[RETRY] {func.__name__} failed (attempt {attempt + 1}/{max_retries}): {e}"
                            f" -- retrying in {wait_time:.2f}s"
                        )
                        time.sleep(wait_time)
                    else:
                        print(
                            f"[RETRY] {func.__name__} failed after {max_retries + 1} attempts"
                        )
                        raise last_exception
        return wrapper
    return decorator

@retry(max_retries=3, base_delay=1, exceptions=(ConnectionError, TimeoutError))
def fetch_data(url):
    """Simulate an unreliable network call."""
    if random.random() < 0.7:
        raise ConnectionError("Connection refused")
    return {"data": "success", "url": url}

# May succeed or fail depending on random chance
# result = fetch_data("https://api.example.com/data")
An authentication decorator checks whether a user is authenticated and authorized before allowing access to a function.
import functools

def require_auth(role=None):
    """Decorator that checks authentication and optional role-based authorization."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            # Check authentication
            if not user.get("authenticated", False):
                raise PermissionError(f"Authentication required for {func.__name__}")
            # Check authorization (role)
            if role and user.get("role") != role:
                raise PermissionError(
                    f"Role '{role}' required for {func.__name__}. "
                    f"Current role: '{user.get('role')}'"
                )
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_auth(role="admin")
def delete_user(current_user, user_id):
    print(f"User {user_id} deleted by {current_user['name']}")
    return True

@require_auth()
def view_profile(current_user):
    print(f"Viewing profile of {current_user['name']}")
    return current_user

# Authenticated admin -- works
admin = {"name": "Folau", "authenticated": True, "role": "admin"}
delete_user(admin, user_id=42)
# Output: User 42 deleted by Folau

# Authenticated but wrong role -- raises PermissionError
viewer = {"name": "Guest", "authenticated": True, "role": "viewer"}
try:
    delete_user(viewer, user_id=42)
except PermissionError as e:
    print(e)  # Role 'admin' required for delete_user. Current role: 'viewer'

# Not authenticated -- raises PermissionError
anonymous = {"name": "Anon", "authenticated": False}
try:
    view_profile(anonymous)
except PermissionError as e:
    print(e)  # Authentication required for view_profile
A memoization decorator caches function results to avoid redundant computations. This is a simplified version of functools.lru_cache to show how caching works under the hood.
import functools
import time

def memoize(func):
    """Cache function results based on arguments."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Create a hashable key from args and kwargs
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    # Expose the cache for inspection and clearing
    wrapper.cache = cache
    wrapper.clear_cache = cache.clear
    return wrapper

@memoize
def expensive_computation(n):
    """Simulate an expensive computation."""
    print(f"Computing for n={n}...")
    time.sleep(1)  # Simulate a slow operation
    return sum(i ** 2 for i in range(n))

# First call -- computes and caches
result1 = expensive_computation(1000)  # Computing for n=1000...
# Second call -- returns the cached result instantly
result2 = expensive_computation(1000)  # No output -- cached!
print(result1 == result2)  # True
print(f"Cache size: {len(expensive_computation.cache)}")  # 1

# Clear the cache when needed
expensive_computation.clear_cache()
A rate-limiting decorator prevents a function from being called more than a specified number of times within a time window. Essential for API clients.
import functools
import time
from collections import deque

def rate_limit(max_calls, period=60):
    """Limit function calls to max_calls within period seconds."""
    def decorator(func):
        call_times = deque()

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            now = time.time()
            # Remove timestamps outside the current window
            while call_times and now - call_times[0] >= period:
                call_times.popleft()
            if len(call_times) >= max_calls:
                wait_time = period - (now - call_times[0])
                raise RuntimeError(
                    f"Rate limit exceeded for {func.__name__}. "
                    f"Try again in {wait_time:.1f} seconds."
                )
            call_times.append(now)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@rate_limit(max_calls=3, period=10)
def call_api(endpoint):
    print(f"Calling {endpoint}")
    return {"status": "ok"}

# These three calls succeed
call_api("/users")     # Calling /users
call_api("/posts")     # Calling /posts
call_api("/comments")  # Calling /comments

# A fourth call within 10 seconds raises RuntimeError
try:
    call_api("/tags")
except RuntimeError as e:
    print(e)  # Rate limit exceeded for call_api. Try again in 9.8 seconds.
A validation decorator checks function arguments against expected types and custom rules before the function executes.
import functools
import inspect

def validate_types(**expected_types):
    """Validate that function arguments match the specified types."""
    def decorator(func):
        sig = inspect.signature(func)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            for param_name, value in bound.arguments.items():
                if param_name in expected_types:
                    expected = expected_types[param_name]
                    if not isinstance(value, expected):
                        raise TypeError(
                            f"Argument '{param_name}' must be {expected.__name__}, "
                            f"got {type(value).__name__}"
                        )
            return func(*args, **kwargs)
        return wrapper
    return decorator

@validate_types(name=str, age=int, email=str)
def create_user(name, age, email):
    return {"name": name, "age": age, "email": email}

# Valid call
user = create_user("Folau", 30, "folau@example.com")
print(user)  # {'name': 'Folau', 'age': 30, 'email': 'folau@example.com'}

# Invalid call -- raises TypeError
try:
    create_user("Folau", "thirty", "folau@example.com")
except TypeError as e:
    print(e)  # Argument 'age' must be int, got str
You can also build more sophisticated validators that check ranges, patterns, or custom predicates.
import functools
import inspect

def validate(rules):
    """Validate arguments using custom rule functions."""
    def decorator(func):
        sig = inspect.signature(func)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Combine args with parameter names
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            for param_name, check in rules.items():
                if param_name in bound.arguments:
                    value = bound.arguments[param_name]
                    is_valid, message = check(value)
                    if not is_valid:
                        raise ValueError(f"Invalid '{param_name}': {message}")
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Define validation rules
def positive_number(value):
    return (value > 0, f"must be positive, got {value}")

def non_empty_string(value):
    return (isinstance(value, str) and len(value.strip()) > 0, "must be a non-empty string")

@validate({
    "amount": positive_number,
    "currency": non_empty_string,
})
def process_payment(amount, currency, description=""):
    print(f"Processing {currency} {amount}: {description}")
    return True

process_payment(99.99, "USD", description="Order #123")
# Processing USD 99.99: Order #123

try:
    process_payment(-50, "USD")
except ValueError as e:
    print(e)  # Invalid 'amount': must be positive, got -50
Decorators are not just an academic exercise. They are used extensively in Python's most popular frameworks and libraries.
Flask uses decorators to map URL routes to handler functions.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Welcome to the homepage!"

@app.route("/users/<int:user_id>", methods=["GET"])
def get_user(user_id):
    return f"User {user_id}"

@app.route("/api/data", methods=["POST"])
def create_data():
    return {"status": "created"}, 201
Under the hood, @app.route("/") is a parameterized decorator. It registers the function in Flask's URL routing table.
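The mechanism can be sketched in a few lines. This hypothetical route decorator stores handlers in a plain dictionary; it is an illustration of the pattern, not Flask's actual implementation:

```python
routes = {}

def route(path):
    """Register the decorated function as the handler for path."""
    def decorator(func):
        routes[path] = func
        return func  # the function itself is returned unchanged
    return decorator

@route("/")
def home():
    return "Welcome!"

@route("/about")
def about():
    return "About us"

# Dispatch a "request" by looking up the registered handler
print(routes["/"]())       # Welcome!
print(routes["/about"]())  # About us
```

Note that this decorator returns func unmodified: decorators do not have to wrap, they can simply register.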
Django provides decorators for authentication, HTTP method enforcement, and caching.
from django.contrib.auth.decorators import login_required
from django.shortcuts import render
from django.views.decorators.http import require_http_methods
from django.views.decorators.cache import cache_page

@login_required
@require_http_methods(["GET"])
@cache_page(60 * 15)  # Cache for 15 minutes
def dashboard(request):
    return render(request, "dashboard.html")
Pytest uses decorators for test parametrization and marking.
import pytest

@pytest.fixture
def sample_user():
    return {"name": "Folau", "email": "folau@example.com"}

@pytest.mark.parametrize("input_val,expected", [
    (1, 1),
    (2, 4),
    (3, 9),
    (4, 16),
])
def test_square(input_val, expected):
    assert input_val ** 2 == expected

@pytest.mark.slow
def test_large_dataset():
    # This test takes a long time to run
    pass
Even experienced Python developers trip over these issues with decorators. Knowing them in advance will save you hours of debugging.
This is the most common mistake. Without @functools.wraps(func), the decorated function loses its identity.
# BAD -- no functools.wraps
def bad_decorator(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@bad_decorator
def my_function():
    """My function's docstring."""
    pass

print(my_function.__name__)  # wrapper (wrong!)
print(my_function.__doc__)   # None (wrong!)
help(my_function)            # Shows wrapper's help, not my_function's

# GOOD -- always use functools.wraps
import functools

def good_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@good_decorator
def my_function():
    """My function's docstring."""
    pass

print(my_function.__name__)  # my_function (correct!)
print(my_function.__doc__)   # My function's docstring. (correct!)
When stacking decorators, order matters. The decorator closest to the function is applied first.
import functools
import time

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"Time: {time.perf_counter() - start:.4f}s")
        return result
    return wrapper

def require_login(func):
    @functools.wraps(func)
    def wrapper(user, *args, **kwargs):
        if not user.get("authenticated"):
            raise PermissionError("Login required")
        return func(user, *args, **kwargs)
    return wrapper

# Timer outermost: the measured time includes the auth check, and a
# failed auth check aborts before any time is printed
@timer
@require_login
def get_dashboard(user):
    time.sleep(0.1)
    return "Dashboard data"

# Swapped: the auth check runs outside the timer, so the timer
# measures only the function body
@require_login
@timer
def get_dashboard_swapped(user):
    time.sleep(0.1)
    return "Dashboard data"
Think about it like layers of an onion. The outermost decorator runs first when the function is called. Put cross-cutting concerns like timing and logging on the outside, and domain-specific checks like authentication closer to the function.
When decorating instance methods, remember that self is passed as the first argument. Your wrapper must handle it correctly through *args.
import functools

def log_method(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):  # 'self' is captured in *args
        print(f"Calling {func.__qualname__}")
        return func(*args, **kwargs)
    return wrapper

class UserService:
    @log_method
    def get_user(self, user_id):
        return {"id": user_id, "name": "Folau"}

service = UserService()
service.get_user(42)  # Calling UserService.get_user
If your decorator explicitly names the first parameter (e.g., def wrapper(request, ...)), it will break when applied to a method because self will be passed as request. Always use *args, **kwargs to keep decorators generic.
A decorator that forgets to return func(*args, **kwargs) will cause the decorated function to always return None.
import functools
import time

# BAD -- missing return
def bad_timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        func(*args, **kwargs)  # Result is discarded!
        print(f"Time: {time.perf_counter() - start:.4f}s")
        # No return statement -- the wrapper returns None!
    return wrapper

# GOOD -- always return the result
def good_timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)  # Capture the result
        print(f"Time: {time.perf_counter() - start:.4f}s")
        return result  # Return it!
    return wrapper
1. Always use @functools.wraps(func): This preserves the original function's metadata. There is no excuse for skipping it.
2. Keep decorators simple and focused: A decorator should do one thing. If you need logging and authentication and caching, write three separate decorators and stack them. This follows the Single Responsibility Principle.
3. Accept *args and **kwargs: Always use *args and **kwargs in your wrapper function so the decorator works with any function signature.
4. Return the wrapped function's result: Always capture and return func(*args, **kwargs). Forgetting this is a silent bug that causes decorated functions to return None.
5. Document your decorator's behavior: Add a docstring to the decorator explaining what it does, what arguments it accepts (if parameterized), and any side effects. Someone reading @retry(max_retries=3) should be able to look at the decorator's docstring and immediately understand what will happen.
6. Test decorators independently: Write unit tests for your decorators separate from the functions they decorate. You can access the original function via __wrapped__ (provided by functools.wraps) when you need to test the undecorated version.
import functools

def my_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

# Access the original function through __wrapped__
@my_decorator
def original_function():
    return 42

# Test the decorator's behavior
assert original_function() is not None

# Test the original function without the decorator
assert original_function.__wrapped__() == 42
7. Be careful with stateful decorators: If your decorator maintains state (like a counter or cache), be aware that the state is shared across all calls. This can cause issues in multi-threaded applications. Use threading.Lock if thread safety is required.
8. Prefer function-based decorators for simplicity: Use class-based decorators only when you need to maintain significant state or when the logic is complex enough to benefit from class organization. For most use cases, function-based decorators are clearer.
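On point 7 above, a thread-safe version of the earlier call counter might guard its shared state with a lock. A sketch (count_calls and work are illustrative names):

```python
import functools
import threading

def count_calls(func):
    """Count calls to the decorated function, safely across threads."""
    lock = threading.Lock()

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # The lock protects the shared counter from lost updates
        with lock:
            wrapper.call_count += 1
        return func(*args, **kwargs)

    wrapper.call_count = 0
    return wrapper

@count_calls
def work():
    return "done"

threads = [threading.Thread(target=work) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(work.call_count)  # 10
```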
To wrap up: a decorator is a function that wraps another function to extend its behavior, and the @ syntax is just syntactic sugar. Use @functools.wraps(func) in every decorator to preserve the original function's __name__, __doc__, and other metadata. Class-based decorators implement __call__ and are best when you need to maintain state across calls. Know the built-ins: @property, @classmethod, @staticmethod, and @functools.lru_cache. And watch for the common pitfalls: forgetting functools.wraps, wrong decorator order, not returning results, and issues with methods vs functions.