This is the final post in our series on how software engineering works. We have covered understanding the problem, finding and validating a solution, planning, designing, implementing, testing, and shipping an MVP. Now comes the part that separates amateur projects from professional products: iteration.
We continue with our running example — a task management app similar to a simplified Trello.
Your MVP is live. Users can create boards, add tasks, and move them between columns. But the real work starts now.
Software development is a cycle, not a straight line. Slack, GitHub, and Trello did not ship their current feature set on day one. They launched with a core experience and iterated relentlessly. If you treat your MVP as the finish line, your product will stagnate. Treat it as the starting line.
At the heart of iterative development is a tight feedback loop: build a change, measure its effect, learn from the results, and feed that learning into the next build.
This is the engine of agile development. Keep the loop tight — days or weeks, not months. For our task app: build a notification system, measure whether users complete tasks faster, learn that in-app notifications outperform email, then iterate on the in-app design.
You cannot improve what you do not understand. Establish multiple feedback channels: usage analytics, in-app feedback forms, support requests, and direct user interviews.
For our task app, users want drag-and-drop instead of dropdown menus, notifications on task assignment, and mobile support. Now you have a backlog driven by real usage.
You cannot build everything at once. Use an impact vs. effort matrix:
| | Low Effort | High Effort |
|---|---|---|
| High Impact | Do first (Quick Wins) | Plan carefully (Major Projects) |
| Low Impact | Fill gaps (Nice to Have) | Skip or defer (Money Pits) |
Notifications are high impact, low effort — a quick win. Drag-and-drop is high impact, medium effort. A full mobile redesign is high impact, high effort — plan it for next quarter. Dark mode is low impact, high effort — skip it. Saying “not now” is just as important as saying “yes.”
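The quadrant logic above can be sketched in a few lines of Python. The 1–5 impact and effort scores here are illustrative, not real data, and a 2x2 matrix forces medium-effort items like drag-and-drop into one bucket or the other:

```python
# Sketch: bucket backlog items into the impact/effort quadrants.
# Scores are illustrative 1-5 ratings.

def quadrant(impact, effort, threshold=3):
    """Classify an item by whether impact/effort clear the threshold."""
    if impact >= threshold and effort < threshold:
        return "Quick Win"
    if impact >= threshold:
        return "Major Project"
    if effort < threshold:
        return "Nice to Have"
    return "Money Pit"

backlog = {
    "notifications": (5, 2),
    "drag_and_drop": (5, 3),
    "mobile_redesign": (4, 5),
    "dark_mode": (2, 4),
}

for name, (impact, effort) in backlog.items():
    print(f"{name}: {quadrant(impact, effort)}")
```

The point is not the arithmetic but the forcing function: every backlog item must land in exactly one quadrant, which makes "not now" an explicit decision rather than a silent omission.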
During MVP, you wrote code that was “good enough” to ship. As you iterate, pay down that technical debt. Here is an example: our MVP had a single method that creates a task and assigns it, violating the Single Responsibility Principle.
```java
public class TaskService {

    private final TaskRepository taskRepository;
    private final UserRepository userRepository;
    private final EmailService emailService;

    // Constructor injection omitted for brevity.

    public Task createAndAssignTask(String title, String description,
                                    String boardId, String assigneeId, String priority) {
        if (title == null || title.trim().isEmpty()) {
            throw new IllegalArgumentException("Title is required");
        }

        Task task = new Task();
        task.setId(UUID.randomUUID().toString());
        task.setTitle(title.trim());
        task.setDescription(description);
        task.setBoardId(boardId);
        task.setPriority(priority != null ? priority : "MEDIUM");
        task.setStatus("TODO");
        task.setCreatedAt(LocalDateTime.now());
        taskRepository.save(task);

        // Assignment logic tightly coupled with creation
        if (assigneeId != null) {
            User assignee = userRepository.findById(assigneeId);
            task.setAssigneeId(assigneeId);
            taskRepository.update(task);
            emailService.send(assignee.getEmail(), "New Task",
                    "You have been assigned: " + title);
        }

        return task;
    }
}
```
Refactored, the responsibilities separate cleanly:

```java
public class TaskService {

    private final TaskRepository taskRepository;
    private final TaskAssignmentService assignmentService;
    private final TaskValidator validator;

    public TaskService(TaskRepository repo,
                       TaskAssignmentService assignmentService,
                       TaskValidator validator) {
        this.taskRepository = repo;
        this.assignmentService = assignmentService;
        this.validator = validator;
    }

    public Task createTask(String title, String description,
                           String boardId, String priority) {
        validator.validateNewTask(title, boardId);

        Task task = new Task();
        task.setId(UUID.randomUUID().toString());
        task.setTitle(title.trim());
        task.setDescription(description);
        task.setBoardId(boardId);
        task.setPriority(priority != null ? priority : "MEDIUM");
        task.setStatus("TODO");
        task.setCreatedAt(LocalDateTime.now());
        return taskRepository.save(task);
    }

    public Task assignTask(String taskId, String assigneeId) {
        Task task = taskRepository.findById(taskId);
        return assignmentService.assign(task, assigneeId);
    }
}
```
Now TaskService handles creation, TaskAssignmentService handles assignment and notifications, and TaskValidator handles validation. Each class has one reason to change, making the code easier to test and extend.
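The same split can be sketched in Python to show the testing payoff. The class and method names here are illustrative, not part of the app's real codebase; the point is that once assignment and notification live in their own class, a fake notifier is all you need to test them:

```python
# SRP split sketched in Python (names illustrative): each class has one job,
# so each can be tested in isolation with simple fakes.

class TaskValidator:
    def validate_new_task(self, title, board_id):
        if not (title or "").strip():
            raise ValueError("Title is required")
        if not board_id:
            raise ValueError("Board is required")

class TaskAssignmentService:
    def __init__(self, notifier):
        self.notifier = notifier  # any callable(user_id, message)

    def assign(self, task, assignee_id):
        task["assignee_id"] = assignee_id
        self.notifier(assignee_id, f"You have been assigned: {task['title']}")
        return task

# A fake notifier makes assignment testable without any email infrastructure.
sent = []
svc = TaskAssignmentService(lambda user, msg: sent.append((user, msg)))
task = svc.assign({"title": "Fix login"}, "u-42")
assert task["assignee_id"] == "u-42" and len(sent) == 1
```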
Ship small, ship often. Feature flags, canary releases, and blue-green deployments let you roll changes out gradually and roll them back instantly.
Here is a feature flag implementation for rolling out drag-and-drop:
```python
import hashlib


class FeatureFlags:
    def __init__(self, config_store):
        self.config_store = config_store

    def is_enabled(self, feature_name, user_id=None):
        flag = self.config_store.get_flag(feature_name)
        if flag is None or not flag.get("enabled", False):
            return False

        # Enabled for everyone
        if flag.get("rollout_percentage", 0) == 100:
            return True

        # Beta testers on the allowlist
        if user_id and user_id in flag.get("allowlist", []):
            return True

        # Percentage-based rollout. Use a stable digest rather than the
        # built-in hash(), which varies between processes unless
        # PYTHONHASHSEED is pinned -- users would flip buckets per server.
        if user_id and flag.get("rollout_percentage", 0) > 0:
            digest = hashlib.sha256(f"{feature_name}:{user_id}".encode()).hexdigest()
            return int(digest, 16) % 100 < flag["rollout_percentage"]

        return False


# Usage in the task board view
flags = FeatureFlags(config_store)

def render_task_board(board_id, user_id):
    board = get_board(board_id)
    tasks = get_tasks(board_id)
    drag_drop = flags.is_enabled("drag_and_drop", user_id)
    return render_template("board.html",
                           board=board, tasks=tasks, enable_drag_drop=drag_drop)
```
Start with your internal team, expand to 10%, then 50%, then 100% — all without deploying new code. If issues appear, set rollout back to 0 and investigate.
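Stable hashing is what makes that ramp safe: each user's bucket is fixed, so raising the percentage only ever adds users — nobody who had the feature loses it. A quick sketch (user IDs and feature name are made up):

```python
import hashlib

def bucket(feature, user_id):
    """Stable 0-99 bucket per (feature, user); same across restarts and servers."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

users = [f"user-{i}" for i in range(1000)]
at_10 = {u for u in users if bucket("drag_and_drop", u) < 10}
at_50 = {u for u in users if bucket("drag_and_drop", u) < 50}

# Raising the percentage only adds users; nobody flips off.
assert at_10 <= at_50
print(len(at_10), len(at_50))  # roughly 100 and 500
```

Had the bucket been computed with Python's built-in `hash()`, it would change between processes (unless `PYTHONHASHSEED` is pinned), and the same user could see the feature on one server and not another.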
Every iteration takes you back through the entire cycle from this series: understand the problem, plan, design, implement, test, ship, and learn.
That is the complete cycle. Not a waterfall that ends at delivery, but a spiral that keeps climbing. Each pass builds on everything you learned before. The product gets better. The team gets sharper. The codebase matures.
“The best code is the code you improve, not the code you write.”
Embrace change. The first version of anything is a hypothesis. Your job is not to be right on the first try — it is to learn fast enough that each version is meaningfully better than the last. The engineers who thrive see every iteration as an opportunity.
If you have followed this entire series, you now understand the full lifecycle of building software — from a vague problem through to a product that keeps getting better. That understanding is what separates someone who can write code from someone who can engineer software.
Go build something. Then make it better.
This is post #10 in our How It Works series. Our running example is building a task management app — a simplified Trello for small dev teams. We have gone through understanding the problem, planning, wireframing, designing, implementing, and testing. Now it is time to ship something real.
An MVP — Minimum Viable Product — is the smallest version of your product that delivers value to real users. It is not a half-baked product. It is a focused product. You strip away everything that is not essential and ship only the core that solves the problem.
The concept comes from Eric Ries and the Lean Startup methodology. Instead of spending months building a fully featured product that nobody asked for, you build the minimum that lets you test whether your solution actually works in the real world. Then you measure, learn, and iterate.
The key word is viable. An MVP must work. It must be stable enough for real users in production. It must solve at least one problem well enough that someone would choose to use it.
Engineers confuse prototypes and MVPs constantly. A prototype tests feasibility — can we build this? Does the technology work? A prototype lives on your laptop or in a demo environment. Nobody depends on it.
An MVP tests market fit — do users want this? Will they use it? An MVP goes to real users in production. It has authentication, error handling, and monitoring. It might be small, but it is production-grade.
For our task app, a prototype might be a CLI script that creates tasks in a JSON file. The MVP is a deployed web application with a login page, a database, and an API that real team members use every day.
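A throwaway CLI prototype like that might be twenty lines of Python. Everything here — the `tasks.json` filename, the field names — is illustrative; the point is how little a feasibility test needs:

```python
# Throwaway prototype: create and list tasks in a local JSON file.
import json
import sys
import uuid
from pathlib import Path

STORE = Path("tasks.json")  # illustrative filename

def load():
    return json.loads(STORE.read_text()) if STORE.exists() else []

def add(title):
    tasks = load()
    tasks.append({"id": str(uuid.uuid4()), "title": title, "status": "TODO"})
    STORE.write_text(json.dumps(tasks, indent=2))

if __name__ == "__main__":
    if sys.argv[1:2] == ["add"]:
        add(" ".join(sys.argv[2:]))
    for t in load():
        print(f'[{t["status"]}] {t["title"]}')
```

No authentication, no error handling, no concurrency — which is exactly why it is a prototype and not an MVP.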
This is where discipline matters. You will be tempted to add features. Resist. Look at your user stories from earlier in the series and pick only the ones that address the core problem.
For our task management app, the MVP scope is: user accounts, creating boards, adding tasks with a title and status, and moving tasks between columns.
That is it. No drag-and-drop Kanban boards. No Slack notifications. No GitHub integration. No file attachments. Every feature you cut from the MVP is a week you ship sooner.
Write your MVP scope down and tape it to your monitor. When someone says “wouldn’t it be cool if…” point at the list. If it is not on the list, it is not in the MVP.
An MVP that is not deployed is just a prototype with ambition. Get it in front of users. You need three things: deployment, infrastructure, and monitoring.
Keep deployment simple. A single server, a managed database, and a reverse proxy is enough. Do not build a Kubernetes cluster for ten users. A Spring Boot application with a health check and metrics endpoint gives you the operational visibility you need from day one:
```java
@RestController
public class HealthController {

    private final AtomicLong tasksCreated = new AtomicLong(0);
    private final AtomicLong tasksCompleted = new AtomicLong(0);

    @GetMapping("/health")
    public Map<String, Object> health() {
        Map<String, Object> status = new HashMap<>();
        status.put("status", "UP");
        status.put("timestamp", Instant.now().toString());
        status.put("version", "1.0.0-mvp");
        return status;
    }

    @GetMapping("/metrics")
    public Map<String, Object> metrics() {
        Map<String, Object> m = new HashMap<>();
        m.put("tasks_created_total", tasksCreated.get());
        m.put("tasks_completed_total", tasksCompleted.get());
        m.put("uptime_ms", ManagementFactory
                .getRuntimeMXBean().getUptime());
        return m;
    }

    public void recordTaskCreated() {
        tasksCreated.incrementAndGet();
    }

    public void recordTaskCompleted() {
        tasksCompleted.incrementAndGet();
    }
}
```
The /health endpoint tells you the app is alive. The /metrics endpoint tells you whether anyone is using it. Here is the same concept in Flask:
```python
import time
import logging

from flask import Flask, jsonify

app = Flask(__name__)
start_time = time.time()

metrics = {
    "tasks_created": 0,
    "tasks_completed": 0,
    "errors": 0
}

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s"
)
logger = logging.getLogger(__name__)

@app.route("/health")
def health():
    return jsonify({
        "status": "UP",
        "version": "1.0.0-mvp",
        "uptime_seconds": round(time.time() - start_time, 2)
    })

@app.route("/metrics")
def get_metrics():
    return jsonify(metrics)

def record_task_created():
    metrics["tasks_created"] += 1
    logger.info("Task created. Total: %d", metrics["tasks_created"])

def record_task_completed():
    metrics["tasks_completed"] += 1
    logger.info("Task completed. Total: %d", metrics["tasks_completed"])
```
Both examples follow the same principle: expose the minimum operational data you need to know if the system is healthy and if users are engaging with it.
You shipped. Now pay attention. An MVP without measurement is just guessing with extra steps. Track these metrics from day one: signups, daily active users, tasks created and completed, and error rates.
You do not need a fancy analytics platform. The metrics endpoints above, combined with basic logging, will tell you what you need to know. If you want more, add a lightweight tool like PostHog. Avoid building a custom analytics system — that is scope creep.
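Basic logging really is enough to start. For example, daily active users falls out of a plain event log in a few lines — the event tuples below are a hypothetical log format, not anything the app above emits:

```python
from collections import defaultdict
from datetime import date

# Hypothetical event log: (user_id, day, action)
events = [
    ("alice", date(2024, 5, 1), "task_created"),
    ("bob",   date(2024, 5, 1), "task_completed"),
    ("alice", date(2024, 5, 2), "task_created"),
]

# DAU = number of distinct users seen per day
daily_active = defaultdict(set)
for user, day, _action in events:
    daily_active[day].add(user)

for day in sorted(daily_active):
    print(day, len(daily_active[day]))
```

A set per day deduplicates users automatically, so a power user creating fifty tasks still counts once.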
Data tells you what is happening. Users tell you why. You need both.
Talk to your users now, while they are using the MVP. Ask three questions: What do you use it for? What frustrates you? What is missing?
Combine qualitative feedback with your metrics. If users say they love the app but daily active users are declining, something is wrong that they are not telling you. If signups are strong but task creation is low, the onboarding flow is broken.
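That "signups strong but task creation low" diagnosis is just funnel analysis. A sketch with illustrative counts — the stage names and numbers are made up:

```python
# Sketch: spot a broken onboarding step by comparing funnel stage conversion.
# Counts are illustrative.
funnel = {"signed_up": 200, "created_board": 150, "created_task": 30}

stages = list(funnel.items())
for (prev, prev_n), (step, n) in zip(stages, stages[1:]):
    rate = n / prev_n
    flag = "  <-- investigate" if rate < 0.5 else ""
    print(f"{prev} -> {step}: {rate:.0%}{flag}")
```

Here 75% of signups create a board but only 20% of board creators create a task — the drop-off, not the absolute numbers, tells you which step to fix.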
This is the Build-Measure-Learn loop from Lean Startup in practice. Every cycle makes the product better — not because you guessed what users wanted, but because you watched them use it and listened.
“If you’re not embarrassed by the first version of your product, you’ve launched too late.” — Reid Hoffman, co-founder of LinkedIn.
This is not an excuse to ship garbage. It is a reminder that perfect is the enemy of shipped. Your MVP will have rough edges. The design will not be polished. There will be features you wish you had included. That is the point. You are not building the final product — you are building the first product that lets you learn what the final product should be.
Ship small. Measure everything. Listen to users. In the next post, we cover Iterating — where we take everything we learned from the MVP and build the next version.
You have written the code. Features are implemented. The task management app has endpoints, services, and a database schema. Now comes the phase that separates professional software from hobby projects: testing.
This is post #9 in the How It Works series. We are not doing a deep dive into testing techniques — there is a separate Test Coverage post in the Best Practices series for that. This post is about testing as a phase in the development lifecycle.
Testing is not just writing unit tests and calling it a day. It is a deliberate phase where you verify the entire system works as designed. You are answering one question: does this software do what we said it would do?
During implementation, you focus on making things work. During testing, you focus on proving they work — and finding where they do not. Implementation is creative. Testing is adversarial. You are trying to break your own work.
When teams skip a dedicated testing phase, they ship bugs to production. Every time. In our task management app, we built features like creating tasks, assigning them, and marking them complete. The testing phase is where we verify all of that actually works — individually, together, and under real-world conditions.
Not all tests are created equal. The test pyramid gives you a framework for how many of each type to write: a broad base of fast unit tests, a smaller middle layer of integration tests, and a thin top layer of end-to-end (E2E) tests.
The shape matters. Invert it — lots of E2E tests, few unit tests — and your suite will be slow, fragile, and painful to maintain.
Unit tests verify business logic in isolation. No database, no network. For our task app, when a task is created it should have a TODO status, a timestamp, and the provided title. Let us test that.
```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.BeforeEach;
import static org.junit.jupiter.api.Assertions.*;
import java.time.LocalDateTime;

class TaskServiceTest {

    private TaskService taskService;

    @BeforeEach
    void setUp() {
        TaskRepository mockRepo = new InMemoryTaskRepository();
        taskService = new TaskService(mockRepo);
    }

    @Test
    void createTask_shouldSetDefaultFields() {
        TaskRequest request = new TaskRequest("Build login page", "Implement OAuth2 login flow");

        Task task = taskService.createTask(request);

        assertNotNull(task.getId());
        assertEquals("Build login page", task.getTitle());
        assertEquals(TaskStatus.TODO, task.getStatus());
        assertNotNull(task.getCreatedAt());
        assertTrue(task.getCreatedAt().isBefore(LocalDateTime.now().plusSeconds(1)));
    }

    @Test
    void createTask_shouldRejectEmptyTitle() {
        TaskRequest request = new TaskRequest("", "Some description");

        assertThrows(IllegalArgumentException.class, () -> {
            taskService.createTask(request);
        });
    }
}
```
This test runs in milliseconds. It verifies that creating a task sets the right defaults, and that the service rejects invalid input.
```python
import pytest
from datetime import datetime

from task_service import TaskService
from task_repository import InMemoryTaskRepository

@pytest.fixture
def task_service():
    repo = InMemoryTaskRepository()
    return TaskService(repo)

def test_create_task_sets_default_fields(task_service):
    task = task_service.create_task(
        title="Build login page",
        description="Implement OAuth2 login flow"
    )

    assert task.id is not None
    assert task.title == "Build login page"
    assert task.status == "TODO"
    assert isinstance(task.created_at, datetime)

def test_create_task_rejects_empty_title(task_service):
    with pytest.raises(ValueError, match="Title cannot be empty"):
        task_service.create_task(title="", description="Some description")
```
Same logic, different language. Set up a service with a fake repository, call the method, verify the result.
Unit tests prove your logic is correct. Integration tests prove your components work together — real HTTP requests, real database queries, real serialization.
For our task app, an integration test starts the application, sends an HTTP request to create a task, and verifies it is stored correctly. In Spring Boot you use @SpringBootTest with TestRestTemplate. In Flask you use the test client.
These tests are slower but catch issues unit tests cannot see: wrong JSON field names, database constraint violations, incorrect HTTP status codes, and misconfigured authentication.
E2E tests simulate a real user. For our task app: user logs in, creates a task, assigns it to a teammate, the teammate marks it complete, and we verify the status updates correctly.
These run against the full deployed application — frontend, backend, database, everything. The tools have gotten very good: Playwright and Cypress in particular drive a real browser and are far less brittle than older Selenium-style scripts.
E2E tests are expensive to maintain and can be flaky. Write them for your critical user paths and rely on unit and integration tests for everything else.
Automated tests are essential, but they cannot catch everything.
Exploratory testing means using the app without a script — resizing windows, pasting special characters, clicking rapidly, hitting the back button. Testers do things automated scripts would never think to do.
Edge case testing targets boundaries. What happens with a 10,000-character title? What if two users edit the same task simultaneously?
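Boundary tests make those questions concrete. A sketch against a hypothetical title validator — the 255-character limit is illustrative, not a documented constraint of the app:

```python
# Edge-case sketch: test a title validator exactly at its boundaries.
MAX_TITLE = 255  # illustrative limit

def validate_title(title):
    title = (title or "").strip()
    if not title:
        raise ValueError("Title cannot be empty")
    if len(title) > MAX_TITLE:
        raise ValueError(f"Title exceeds {MAX_TITLE} characters")
    return title

# Exactly at the boundary passes; one past it (and far past it) fails.
assert validate_title("x" * MAX_TITLE) == "x" * MAX_TITLE
for bad in ["", "   ", "x" * (MAX_TITLE + 1), "x" * 10_000]:
    try:
        validate_title(bad)
        raise AssertionError(f"expected rejection of {bad[:10]!r}...")
    except ValueError:
        pass
```

Off-by-one bugs live at exactly these edges, which is why the test checks `MAX_TITLE` and `MAX_TITLE + 1` rather than a round number in the middle.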
Usability testing asks not “does it work?” but “is it confusing?” A feature can be technically correct but poorly designed. On professional teams, a human being should use the software before it ships.
“If it is not tested, it is broken.”
Untested code has bugs — you just have not found them yet. The best developers write tests as they build. You write a function, you write a test. You add an endpoint, you add an integration test. Testing is part of implementation, not separate from it.
The testing “phase” is about going beyond what you tested while building — running the full suite, doing E2E testing, and bringing in QA. But the foundation should already exist by the time you reach this phase. Build the habit early.
You have your requirements documented, your system designed, and your API contracts agreed upon. Now it is time to build. But implementation is not just “start coding.” It is a disciplined process that separates professional engineers from hobbyists. In this post, we walk through how to turn a design into working software, using our running example: building a task management app (like a simplified Trello).
Before writing a single line of business logic, establish your foundation. A clean project setup saves you hundreds of hours over the life of a project.
Set up four things immediately: version control with a `.gitignore` for your language, a README that explains how to run the project, a Dockerfile, and a docker-compose.yml for a reproducible local environment. Make your first commit the empty project skeleton.

A clean project structure for our task management app looks like this:
```
task-manager/
├── src/
│   ├── main/java/com/taskmanager/
│   │   ├── domain/       # Entities and value objects
│   │   ├── service/      # Business logic
│   │   ├── controller/   # REST endpoints
│   │   ├── repository/   # Data access
│   │   └── config/       # App configuration
│   └── test/java/com/taskmanager/
├── docker-compose.yml
├── Dockerfile
├── .gitignore
└── README.md
```
This structure tells any developer exactly where to find things. Domain logic is isolated. Controllers are thin. Configuration is centralized. This is not accidental — it is intentional design.
The most important rule of implementation: build your core business logic first, independent of any framework. Your domain layer should work without Spring Boot, without Flask, without a database. It is pure logic.
Why? Because business rules change slowly. Frameworks change fast. If your business logic is tangled with your framework, every upgrade becomes a rewrite.
Here is the TaskService for our task management app in Java:
```java
@Service
public class TaskService {

    private final TaskRepository taskRepository;

    public TaskService(TaskRepository taskRepository) {
        this.taskRepository = taskRepository;
    }

    public Task createTask(String title, String description, Long assigneeId) {
        if (title == null || title.isBlank()) {
            throw new IllegalArgumentException("Task title cannot be empty");
        }

        Task task = new Task();
        task.setTitle(title.trim());
        task.setDescription(description);
        task.setAssigneeId(assigneeId);
        task.setStatus(TaskStatus.TODO);
        task.setCreatedAt(LocalDateTime.now());
        return taskRepository.save(task);
    }

    public Task getTask(Long id) {
        return taskRepository.findById(id)
                .orElseThrow(() -> new TaskNotFoundException("Task not found: " + id));
    }

    public Task updateStatus(Long id, TaskStatus newStatus) {
        Task task = getTask(id);
        validateStatusTransition(task.getStatus(), newStatus);
        task.setStatus(newStatus);
        task.setUpdatedAt(LocalDateTime.now());
        return taskRepository.save(task);
    }

    private void validateStatusTransition(TaskStatus current, TaskStatus next) {
        if (current == TaskStatus.DONE && next == TaskStatus.TODO) {
            throw new InvalidStatusTransitionException(
                    "Cannot move a completed task back to TODO"
            );
        }
    }
}
```
And the same logic in Python using Flask:
```python
from datetime import datetime
from enum import Enum

class TaskStatus(Enum):
    TODO = "TODO"
    IN_PROGRESS = "IN_PROGRESS"
    DONE = "DONE"

class TaskService:
    def __init__(self, task_repository):
        self.task_repository = task_repository

    def create_task(self, title, description=None, assignee_id=None):
        if not title or not title.strip():
            raise ValueError("Task title cannot be empty")

        task = {
            "title": title.strip(),
            "description": description,
            "assignee_id": assignee_id,
            "status": TaskStatus.TODO.value,
            "created_at": datetime.utcnow().isoformat()
        }
        return self.task_repository.save(task)

    def get_task(self, task_id):
        task = self.task_repository.find_by_id(task_id)
        if not task:
            raise LookupError(f"Task not found: {task_id}")
        return task

    def update_status(self, task_id, new_status):
        task = self.get_task(task_id)
        self._validate_status_transition(task["status"], new_status)
        task["status"] = new_status
        task["updated_at"] = datetime.utcnow().isoformat()
        return self.task_repository.save(task)

    def _validate_status_transition(self, current, new_status):
        if current == TaskStatus.DONE.value and new_status == TaskStatus.TODO.value:
            raise ValueError("Cannot move a completed task back to TODO")
```
Notice what both implementations share: input validation, clear error handling, and a status transition rule that enforces business logic. The framework is irrelevant. The logic is identical. That is the sign of a well-designed domain layer.
With your domain layer solid, the API layer becomes a thin wrapper. Controllers should do three things: accept the request, call the service, and return the response. Nothing more.
Here is a Spring Boot controller for the task endpoints we designed in the previous post:
```java
@RestController
@RequestMapping("/api/v1/tasks")
public class TaskController {

    private final TaskService taskService;

    public TaskController(TaskService taskService) {
        this.taskService = taskService;
    }

    @PostMapping
    public ResponseEntity<Task> createTask(@RequestBody CreateTaskRequest request) {
        Task task = taskService.createTask(
                request.getTitle(),
                request.getDescription(),
                request.getAssigneeId()
        );
        return ResponseEntity.status(HttpStatus.CREATED).body(task);
    }

    @GetMapping("/{id}")
    public ResponseEntity<Task> getTask(@PathVariable Long id) {
        return ResponseEntity.ok(taskService.getTask(id));
    }

    @PatchMapping("/{id}/status")
    public ResponseEntity<Task> updateStatus(
            @PathVariable Long id,
            @RequestBody UpdateStatusRequest request) {
        Task task = taskService.updateStatus(id, request.getStatus());
        return ResponseEntity.ok(task);
    }
}
```
The controller is deliberately thin. There is no business logic here. No validation beyond what Spring handles automatically. Every decision lives in TaskService where it can be tested independently.
Code without version control discipline is a liability. Every professional team follows a branching strategy. Here is a practical workflow that scales from two developers to two hundred:
```bash
# Create a feature branch from main
git checkout main
git pull origin main
git checkout -b feature/task-status-update

# Make small, focused commits as you work
git add src/main/java/com/taskmanager/service/TaskService.java
git commit -m "Add status transition validation to TaskService"

git add src/test/java/com/taskmanager/service/TaskServiceTest.java
git commit -m "Add tests for status transition rules"

# Push and open a pull request
git push origin feature/task-status-update

# After code review and CI passes, merge via pull request.
# Delete the feature branch after merge.
git branch -d feature/task-status-update
```
The key rules: branch from an up-to-date main, keep commits small and focused with descriptive messages, never push directly to main, and merge only through a reviewed pull request with passing CI.
Writing code that works is the minimum bar. Writing code that others can read, maintain, and extend is the real job. Follow these practices consistently:
Use meaningful names. validateStatusTransition is clear. check is not. A good name eliminates the need for a comment.
Keep functions small. Each function should do one thing. If you need to scroll to read a function, it is too long. The TaskService above has four methods, each under fifteen lines. That is intentional.
Follow SOLID principles. Single Responsibility means your TaskService handles task logic and nothing else. It does not send emails, generate reports, or manage users. Each of those gets its own service.
Handle errors explicitly. Notice that TaskService throws specific exceptions: TaskNotFoundException, InvalidStatusTransitionException. Never swallow exceptions silently. Never return null when you mean “not found.”
Write tests alongside your code. Not after. Not “when you have time.” Every method you write should have a corresponding test before you move to the next method. Tests are not optional — they are part of the implementation.
There is a well-known principle in software engineering: “Make it work, make it right, make it fast” — in that order.
Make it work means get a functioning solution that passes your tests. Do not worry about elegance or performance. Just make it correct.
Make it right means refactor for clarity. Clean up names, extract methods, remove duplication. This is where SOLID principles and clean code practices come in.
Make it fast means optimize, but only where you have measured a bottleneck. Premature optimization is the root of countless engineering disasters. Profile first, then optimize the hotspot.
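"Profile first" is a five-line habit, not a project. A sketch using Python's built-in `cProfile` — the O(n) lookup and the task counts are illustrative stand-ins for a real hotspot:

```python
# "Profile first": measure before optimizing.
import cProfile
import io
import pstats

def slow_lookup(tasks, task_id):
    # O(n) scan -- the suspected hotspot
    return next(t for t in tasks if t["id"] == task_id)

tasks = [{"id": i, "title": f"task {i}"} for i in range(5000)]

profiler = cProfile.Profile()
profiler.enable()
for i in range(0, 5000, 7):
    slow_lookup(tasks, i)
profiler.disable()

# The stats confirm (or refute) where the time actually goes.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)

# Only after the profile confirms the hotspot do you optimize,
# e.g. by replacing the scan with a dict index:
by_id = {t["id"]: t for t in tasks}
assert by_id[42] == slow_lookup(tasks, 42)
```

If the profile had shown the time going to serialization or the database instead, the dict index would have been wasted effort — which is the whole argument for measuring first.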
Most engineers skip step two and jump straight to step three. They end up with fast code that no one can read or maintain. Resist the urge. Readable code that performs adequately will always beat clever code that no one can debug at two in the morning.
Implementation is where engineering discipline matters most. Set up your project cleanly. Build the domain layer first. Keep your controllers thin. Use version control like a professional. Write code that your future self will thank you for. Follow the mantra: make it work, make it right, make it fast. In that order, always.
March 19, 2020

In the previous post, we turned our task management app idea into wireframes. Now it is time to turn those wireframes into something real. This is where you define how the system works, not just how it looks.
Wireframes show structure — where elements go on the page. Design answers everything else: visual identity, interaction patterns, error states, and the system architecture underneath.
For our task app, the wireframes showed boards, columns, and draggable cards. Now we answer the hard questions: How does the frontend talk to the backend? What does the data look like? What happens when two users drag the same card?
Design runs on two parallel tracks: UI/UX design, which defines how the product looks and behaves, and system design, which defines how it is built.
Both must stay in sync. A beautiful drag-and-drop UI backed by an API that cannot handle reordering is a design failure.
Before writing code, define the high-level architecture. Our task app uses a classic client-server pattern with a REST API layer:
```
Frontend (React SPA) --HTTPS/JSON--> API Gateway (Nginx) --HTTP--> Backend (REST API) --SQL--> Database (PostgreSQL)
```
| Component | Responsibility | Technology |
|---|---|---|
| Frontend | UI rendering, user interactions, local state | React, TypeScript |
| API Gateway | Routing, rate limiting, SSL, authentication | Nginx or AWS ALB |
| Backend | Business logic, validation, authorization | Spring Boot or Flask |
| Database | Persistent storage, data integrity | PostgreSQL |
The API contract is the agreement between frontend and backend. Design it before implementation so both teams can work in parallel.
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/v1/boards | List all boards for the authenticated user |
| POST | /api/v1/boards | Create a new board |
| GET | /api/v1/boards/{id}/tasks | List all tasks on a board |
| POST | /api/v1/tasks | Create a new task |
| PUT | /api/v1/tasks/{id} | Update a task |
| DELETE | /api/v1/tasks/{id} | Delete a task |
| GET | /api/v1/users/me | Get current user profile |
Here is the Spring Boot controller contract defining these endpoints:
```java
// TaskController.java — API contract for task management
@RestController
@RequestMapping("/api/v1")
public class TaskController {

    private final TaskService taskService;

    public TaskController(TaskService taskService) {
        this.taskService = taskService;
    }

    public record TaskRequest(
            String title, String description,
            String status, Long columnId, Long assigneeId
    ) {}

    public record TaskResponse(
            Long id, String title, String description,
            String status, String columnName, String assigneeName,
            LocalDateTime createdAt, LocalDateTime updatedAt
    ) {}

    @GetMapping("/boards/{boardId}/tasks")
    public ResponseEntity<List<TaskResponse>> getTasksByBoard(
            @PathVariable Long boardId) {
        return ResponseEntity.ok(taskService.findByBoard(boardId));
    }

    @PostMapping("/tasks")
    public ResponseEntity<TaskResponse> createTask(
            @RequestBody @Valid TaskRequest request) {
        return ResponseEntity.status(HttpStatus.CREATED)
                .body(taskService.create(request));
    }

    @PutMapping("/tasks/{id}")
    public ResponseEntity<TaskResponse> updateTask(
            @PathVariable Long id,
            @RequestBody @Valid TaskRequest request) {
        return ResponseEntity.ok(taskService.update(id, request));
    }

    @DeleteMapping("/tasks/{id}")
    public ResponseEntity<Void> deleteTask(@PathVariable Long id) {
        taskService.delete(id);
        return ResponseEntity.noContent().build();
    }
}
```
The same contract in Flask with Python dataclasses:
```python
# task_controller.py — API contract for task management
from dataclasses import dataclass, asdict
from datetime import datetime

from flask import Flask, request, jsonify

app = Flask(__name__)
# task_service is wired up elsewhere; this module defines the HTTP contract.

@dataclass
class TaskRequest:
    title: str
    description: str
    status: str
    column_id: int
    assignee_id: int

@dataclass
class TaskResponse:
    id: int
    title: str
    description: str
    status: str
    column_name: str
    assignee_name: str
    created_at: datetime
    updated_at: datetime

@app.route("/api/v1/boards/<int:board_id>/tasks", methods=["GET"])
def get_tasks_by_board(board_id):
    tasks = task_service.find_by_board(board_id)
    return jsonify([asdict(t) for t in tasks]), 200

@app.route("/api/v1/tasks", methods=["POST"])
def create_task():
    data = TaskRequest(**request.get_json())
    return jsonify(asdict(task_service.create(data))), 201

@app.route("/api/v1/tasks/<int:task_id>", methods=["PUT"])
def update_task(task_id):
    data = TaskRequest(**request.get_json())
    return jsonify(asdict(task_service.update(task_id, data))), 200

@app.route("/api/v1/tasks/<int:task_id>", methods=["DELETE"])
def delete_task(task_id):
    task_service.delete(task_id)
    return "", 204
```
Both implementations define the same contract. The DTOs enforce data shape. The HTTP methods and status codes follow REST conventions. This is the specification both teams work against.
The database schema is the foundation everything rests on. Here is the schema for our task app:
| Table | Column | Type | Constraint |
|---|---|---|---|
| users | id | BIGINT | PK |
| users | email | VARCHAR | UNIQUE NOT NULL |
| users | display_name | VARCHAR | NOT NULL |
| users | created_at | TIMESTAMP | DEFAULT NOW() |
| boards | id | BIGINT | PK |
| boards | name | VARCHAR | NOT NULL |
| boards | owner_id | BIGINT | FK -> users(id) |
| boards | created_at | TIMESTAMP | DEFAULT NOW() |
| columns | id | BIGINT | PK |
| columns | name | VARCHAR | NOT NULL |
| columns | board_id | BIGINT | FK -> boards(id) |
| columns | position | INTEGER | NOT NULL |
| tasks | id | BIGINT | PK |
| tasks | title | VARCHAR | NOT NULL |
| tasks | description | TEXT | NULLABLE |
| tasks | column_id | BIGINT | FK -> columns(id) |
| tasks | assignee_id | BIGINT | FK -> users(id) |
| tasks | position | INTEGER | NOT NULL |
| tasks | created_at | TIMESTAMP | DEFAULT NOW() |
| tasks | updated_at | TIMESTAMP | DEFAULT NOW() |
Key relationships: A user owns many boards. A board has many columns. A column has many tasks. A task belongs to one column and one assignee. The position field enables drag-and-drop reordering — move a card and you update the position integers.
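The reordering logic behind drag-and-drop can be sketched in a few lines. This in-memory list stands in for the rows of the `tasks` table ordered by `position`; the task IDs are illustrative:

```python
# Sketch of drag-and-drop reordering: move a task to a new index in a column
# and rewrite the position fields so they stay dense and ordered.

def move_task(column, task_id, new_index):
    """column: list of task dicts, ordered by position."""
    task = next(t for t in column if t["id"] == task_id)
    column.remove(task)
    column.insert(new_index, task)
    for pos, t in enumerate(column):  # renumber positions 0..n-1
        t["position"] = pos
    return column

col = [{"id": i, "position": p} for p, i in enumerate([101, 102, 103])]
move_task(col, 103, 0)
print([t["id"] for t in col])  # [103, 101, 102]
```

In the real database this renumbering would happen in one transaction, so two users dragging cards at the same time cannot leave a column with duplicate positions.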
The UI/UX design phase turns wireframes into pixel-perfect mockups. Build a design system — a shared vocabulary of reusable components: a color palette, a typography scale, buttons, form inputs, cards, and modals.
Tools like Figma and Adobe XD let designers and developers collaborate on the same file, inspect spacing, and export assets.
Critical principle: design for states, not just screens. Every component needs empty, loading, error, and populated state designs.
A prototype is a clickable, interactive mockup that stakeholders can experience without production code. Click “Create Task” and a modal opens. Drag a card between columns. Navigate between boards.
Prototyping serves two purposes: it validates flows with stakeholders and users before any production code is written, and it gives designers and developers a shared, concrete reference for how the product should behave.
Figma’s prototyping mode lets you define click targets, transitions, and flows directly on design frames. Remember: a prototype is not production code. It is a communication tool.
“Design is not how it looks, it’s how it works.” — Steve Jobs
Every hour spent on design saves five hours of rework during implementation. Focus on user experience over aesthetics. A plain-looking app with clear navigation and predictable behavior beats a stunning app that confuses users. Get the API contracts right. Get the data model right. The colors are the easy part.
In the next post, we start building. That is where implementation begins.