Service Design & Modularity: Keep the Workflow Simple
The Problem: Tangled Workflows
In many codebases, a single request triggers a maze of conditional logic. The workflow constantly branches based on who the request is for (e.g., Provider A vs. B), what is being done (Task X vs. Y), and how it’s executed (Sync vs. Async).
When a workflow’s structure changes at every branch, it creates friction:
- Where was this payload mutated?
- Why did this request bypass validation?
- Where do I inject logic for a new provider?
The more conditionals you weave into the main execution path, the harder the system is to debug and maintain.
The Solution: A Linear, Standardized Pipeline
Stop building bespoke paths for different requests. Instead, build one main pipeline with a fixed number of hops.
- Entry: The protocol layer (HTTP, gRPC, queue consumer) receives the request and normalizes the payload.
- Validate: Assert required fields and types. Fail fast on bad input.
- Route: Determine which handler will process the request via a lookup or factory—not a giant switch statement.
- Execute: Pass the payload to the chosen handler.
- Respond: Format and return/publish the result consistently.
The Golden Rule: The sequence of steps never changes. Only the handler chosen in step 3 changes. Debugging becomes “Which hop am I in?” instead of “Which nested conditional am I in?”
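The five hops can be sketched as a single function with no branching in the main path. This is a minimal illustration, not a prescribed implementation; the field names (provider_id, task_type) and the HANDLERS registry are assumptions for the sketch.

```python
# Minimal sketch of the fixed pipeline: every request takes these same five hops.
from typing import Any, Callable

# Registry filled in elsewhere by handler modules (assumed shape for this sketch).
HANDLERS: dict[str, Callable[[dict], Any]] = {}

def normalize(raw: dict) -> dict:
    # 1. Entry: the protocol layer normalizes the payload (here: lowercase keys).
    return {k.lower(): v for k, v in raw.items()}

def validate(payload: dict) -> None:
    # 2. Validate: assert required fields, fail fast on bad input.
    for field in ("provider_id", "task_type"):
        if field not in payload:
            raise ValueError(f"missing required field: {field}")

def handle_request(raw: dict) -> dict:
    payload = normalize(raw)
    validate(payload)
    handler = HANDLERS[payload["provider_id"]]   # 3. Route: lookup, not if/else
    result = handler(payload)                    # 4. Execute: delegate to handler
    return {"status": "ok", "result": result}    # 5. Respond: uniform shape
```

Note that adding a provider only adds an entry to HANDLERS; handle_request itself never grows a new branch.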
Isolate Variation (The Strategy Pattern)
Different integrations require different request schemas, API endpoints, and response formatting. Do not weave these variations into your pipeline. Instead, keep the pipeline rigid and delegate the variation to individual handlers.
- Routing: Look up the handler dynamically (e.g., by provider_id + task_type).
- One Contract: Every handler must implement the exact same interface (e.g., Handler.execute(payload)).
- Polymorphism over Conditionals: Whether a task is synchronous or asynchronous should be reflected in the handler’s return type (e.g., returning a Result vs. returning an Accepted token), not by branching the pipeline itself.
To add a new provider, you simply write a new handler that satisfies the contract. The core pipeline code remains untouched.
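One way this can look in practice is a registry keyed by (provider_id, task_type), with every handler implementing the same execute contract. The class and registry names below are hypothetical, and the sync/async distinction is expressed purely through the returned value.

```python
# Hypothetical strategy-pattern sketch: one contract, many registered handlers.
from abc import ABC, abstractmethod

class Handler(ABC):
    @abstractmethod
    def execute(self, payload: dict) -> dict: ...

REGISTRY: dict[tuple[str, str], Handler] = {}

def register(provider_id: str, task_type: str):
    """Decorator that wires a handler class into the routing table."""
    def deco(cls):
        REGISTRY[(provider_id, task_type)] = cls()
        return cls
    return deco

@register("provider_a", "task_x")
class ProviderATaskX(Handler):
    def execute(self, payload: dict) -> dict:
        # Synchronous: returns a concrete result.
        return {"kind": "result", "value": payload["data"]}

@register("provider_b", "task_x")
class ProviderBTaskX(Handler):
    def execute(self, payload: dict) -> dict:
        # Asynchronous: returns an "accepted" token; the real work finishes later.
        return {"kind": "accepted", "token": "job-123"}
```

Adding a new provider is one new class with a @register decorator; the pipeline and the existing handlers are never edited.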
Prefer Config Over Code
If the only difference between two flows is the API endpoint, a timeout value, or a specific payload shape, extract that into configuration files.
- Code should dictate flow and structure.
- Config should dictate data and environment specifics.
Your handlers should use shared helper functions to build requests based on configuration keys, preventing duplicated code for slight payload variations.
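As a sketch of what that looks like, the per-provider differences below (endpoint, timeout, payload key names) live in a config mapping, and a single shared helper builds the request from it. The config values are invented for illustration.

```python
# Assumed config shape: everything that varies per provider is data, not code.
PROVIDERS = {
    "provider_a": {
        "endpoint": "https://api.a.example/v1/tasks",
        "timeout": 5,
        "payload_keys": {"text": "input_text"},
    },
    "provider_b": {
        "endpoint": "https://b.example/jobs",
        "timeout": 30,
        "payload_keys": {"text": "body"},
    },
}

def build_request(provider_id: str, data: dict) -> dict:
    """Shared helper: one code path builds every provider's request from config."""
    cfg = PROVIDERS[provider_id]
    body = {cfg["payload_keys"][k]: v for k, v in data.items()}
    return {"url": cfg["endpoint"], "timeout": cfg["timeout"], "json": body}
```

Supporting a provider whose API merely renames a field or changes a timeout then requires a config entry, not new code.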
Strict Separation of Concerns
Assign strict boundaries to prevent logic bleed:
- The Pipeline (Orchestrator): Validates, routes, and catches errors. Knows nothing about provider-specific business logic.
- The Handler (Business Logic): Owns the full workflow for a specific task. Builds requests, formats responses, and handles async callbacks. Knows nothing about routing.
- The Client (Transport): Sends data over the wire (HTTP/AMQP) and parses raw network responses. Knows nothing about the business context.
When a malformed body is sent to an external API, you know exactly where to look: the specific handler or its config, not the pipeline or the HTTP client.
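These boundaries can be made concrete with dependency injection: the handler owns the provider-specific body, while the client only moves data over the wire. The names here are hypothetical, and a recording stub stands in for a real network client.

```python
# Illustrative layering sketch: handler builds the body, client only transports it.
class RecordingClient:
    """Transport stand-in: a real client would send HTTP here and parse the response."""
    def __init__(self):
        self.sent = []

    def post(self, url: str, json: dict) -> dict:
        self.sent.append((url, json))   # knows nothing about business context
        return {"status": 200}

class ProviderAHandler:
    """Business logic: owns the provider-specific request shape, knows nothing about routing."""
    def __init__(self, client, endpoint: str):
        self.client = client
        self.endpoint = endpoint

    def execute(self, payload: dict) -> dict:
        body = {"input_text": payload["text"]}   # the provider-specific shape lives here
        return self.client.post(self.endpoint, json=body)

client = RecordingClient()
handler = ProviderAHandler(client, "https://api.a.example/v1/tasks")
handler.execute({"text": "hello"})
```

If the external API rejects the body, the only place that shapes it is the handler, which is exactly where you would look first.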
TL;DR
| Do | Avoid |
|---|---|
| Linear Pipelines: One fixed path with clear steps. | Branching: Nested if/else logic in the main execution flow. |
| Strategy Pattern: Same steps, different injected handlers. | Bespoke Workflows: Different structural flow per provider/task. |
| Uniform Contracts: E.g., all handlers use execute(params). | Fractured Entrypoints: Different methods for different task “types.” |
| Config-Driven Data: Endpoints and shapes in config. | Hardcoded Differences: Inline conditionals for payload building. |
| Strict Boundaries: Pipeline = Route; Handler = Logic; Client = Transport. | God Classes: Mixing routing, logic, and networking in one layer. |