Custom ESLint Rules, Architectural Decision Records, and Module Boundaries
Writing code that works today is one challenge; writing code that remains maintainable for years is another challenge entirely. This becomes especially important when working in teams where developers with different backgrounds, preferences, and experience levels collaborate on the same codebase. In this post, I’ll share battle-tested strategies for maintaining Node.js codebases that can scale with your team.
The True Cost of Unmaintainable Code
Before diving into solutions, let’s understand what’s at stake. Unmaintainable code leads to:
- Increased onboarding time: New developers spend weeks instead of days understanding the project
- Development paralysis: Changes that should take hours take days (or weeks)
- Knowledge silos: Only certain developers can work on certain parts of the codebase
- Bug breeding grounds: Inconsistent patterns create perfect environments for bugs
- Technical debt accumulation: debt that compounds with each new feature
I once joined a project with exactly these problems: a simple feature addition took three weeks instead of two days because the codebase was an intricate maze of inconsistent patterns, implicit dependencies, and undocumented decisions.
Strategy 1: Custom ESLint Rules – Automated Code Standards
ESLint is a powerful tool that goes far beyond catching syntax errors. By creating custom rules tailored to your project, you can enforce architectural constraints and team-specific best practices.
Example: Enforcing Layer Separation
Let’s say your application follows a layered architecture with controllers, services, and repositories. You want to ensure controllers never import repositories directly:
// eslint-plugin-custom-rules/rules/no-repository-in-controller.js
module.exports = {
  meta: {
    type: "suggestion",
    docs: {
      description: "Disallow direct repository usage in controllers",
      category: "Architecture",
      recommended: true
    },
    fixable: null
  },
  create(context) {
    return {
      ImportDeclaration(node) {
        const filename = context.getFilename();
        // Check if we're in a controller file
        if (filename.includes('/controllers/')) {
          const importPath = node.source.value;
          // Check if importing from repositories
          if (importPath.includes('/repositories/')) {
            context.report({
              node,
              message: "Controllers should not import repositories directly. Use services instead."
            });
          }
        }
      }
    };
  }
};
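For instance, with this rule enabled, a controller that reaches into the repository layer is flagged immediately. The file and class names below are purely illustrative:

// src/controllers/UserController.js (illustrative)
// Flagged: controllers must not reach into the repository layer
import UserRepository from '../repositories/UserRepository';

// Fine: controllers talk to services, which own the repositories
import UserService from '../services/UserService';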
Example: Enforcing Module Boundaries
For larger applications with distinct modules or bounded contexts, ensure modules only communicate through well-defined interfaces:
// eslint-plugin-custom-rules/rules/module-boundaries.js
module.exports = {
  meta: {
    type: "suggestion",
    docs: {
      description: "Enforce module boundaries",
      category: "Architecture",
      recommended: true
    }
  },
  create(context) {
    // Which modules each module is allowed to import from
    const modulesDependencyMap = {
      'user': ['common'],
      'payment': ['user', 'common'],
      'notification': ['user', 'common'],
      'common': []
    };
    return {
      ImportDeclaration(node) {
        const filename = context.getFilename();
        const importPath = node.source.value;
        // Determine current module
        const currentModule = Object.keys(modulesDependencyMap).find(
          module => filename.includes(`/modules/${module}/`)
        );
        if (currentModule) {
          // Check if importing from another module
          const importedModule = Object.keys(modulesDependencyMap).find(
            module => importPath.includes(`/modules/${module}/`)
          );
          if (importedModule &&
              importedModule !== currentModule &&
              !modulesDependencyMap[currentModule].includes(importedModule)) {
            context.report({
              node,
              message: `Module '${currentModule}' cannot import from '${importedModule}'. Allowed dependencies are: ${modulesDependencyMap[currentModule].join(', ')}`
            });
          }
        }
      }
    };
  }
};
Setting Up Custom Rules
To use these custom rules, add them to your ESLint configuration:
// .eslintrc.js
module.exports = {
  // ... other ESLint config
  plugins: [
    // ... other plugins
    "custom-rules"
  ],
  rules: {
    // ... other rules
    "custom-rules/no-repository-in-controller": "error",
    "custom-rules/module-boundaries": "error"
  }
};
And create a plugin index file:
// eslint-plugin-custom-rules/index.js
module.exports = {
  rules: {
    "no-repository-in-controller": require("./rules/no-repository-in-controller"),
    "module-boundaries": require("./rules/module-boundaries")
  }
};
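One detail worth noting: for ESLint to resolve the "custom-rules" plugin name, the plugin has to be installed as a package called eslint-plugin-custom-rules. If the plugin lives inside the repo as shown above, one way to wire it up is a local file: dependency in package.json:

{
  "devDependencies": {
    "eslint-plugin-custom-rules": "file:./eslint-plugin-custom-rules"
  }
}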
The beauty of this approach is that architectural violations are caught at development time, not during code review or, worse, in production.
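Custom rules are also straightforward to unit test with ESLint's built-in RuleTester. Here's a minimal sketch for the controller rule, assuming an eslintrc-style setup (ESLint 8 or earlier) and the plugin layout above:

// eslint-plugin-custom-rules/tests/no-repository-in-controller.test.js
const { RuleTester } = require('eslint');
const rule = require('../rules/no-repository-in-controller');

const ruleTester = new RuleTester({
  parserOptions: { ecmaVersion: 2022, sourceType: 'module' }
});

ruleTester.run('no-repository-in-controller', rule, {
  valid: [
    {
      // Controllers may import services
      code: "import UserService from '../services/UserService';",
      filename: '/project/src/controllers/UserController.js'
    }
  ],
  invalid: [
    {
      // Controllers must not import repositories
      code: "import UserRepository from '../repositories/UserRepository';",
      filename: '/project/src/controllers/UserController.js',
      errors: 1
    }
  ]
});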
Strategy 2: Architectural Decision Records (ADRs)
Have you ever wondered why a certain pattern was used in a codebase, only to find no one on the team remembers the reasoning? Architectural Decision Records solve this problem by documenting important decisions in a structured format.
What is an ADR?
An ADR is a document that captures an important architectural decision, along with its context and consequences. It answers questions like:
- What was the decision?
- Why was it made?
- What alternatives were considered?
- What were the trade-offs?
Example ADR: Adopting the Repository Pattern
# ADR-0003: Adopt Repository Pattern for Data Access
## Status
Accepted
## Context
Our application needs to interact with multiple data sources (MongoDB, Redis, external APIs).
Currently, data access logic is mixed with business logic, making it difficult to:
- Write tests without hitting external systems
- Switch data sources when needed
- Maintain consistent error handling across data sources
## Decision
We will adopt the Repository pattern for all data access:
- Create repository interfaces defining data access methods
- Implement concrete repositories for each data source
- Repositories will handle data source-specific logic (queries, caching, etc.)
- Business logic will depend on repository interfaces, not implementations
## Alternatives Considered
1. **Direct data access in services**: Simpler implementation but harder to test and maintain.
2. **ORM-based approach**: Would work for databases but not for external APIs.
3. **Data Access Objects (DAOs)**: Similar to repositories but typically more focused on CRUD operations.
## Consequences
### Positive
- Improved testability through dependency injection
- Clear separation of concerns
- Consistent data access patterns across the application
- Easier to switch data sources when needed
### Negative
- Additional abstraction layer increases code complexity
- Might be overengineering for very simple applications
- Requires discipline from team members to maintain pattern consistency
## Implementation Notes
- Repository interfaces will be placed in `src/domain/repositories`
- Implementations will be in `src/infrastructure/repositories`
- We'll use dependency injection to provide concrete repositories to services
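To make the decision concrete, here is a rough sketch of the pattern in practice. The class names, methods, and MongoDB usage below are illustrative, not part of the ADR itself:

// src/infrastructure/repositories/MongoUserRepository.js (illustrative)
class MongoUserRepository {
  constructor(db) {
    this.collection = db.collection('users');
  }

  async findById(id) {
    return this.collection.findOne({ _id: id });
  }

  async save(user) {
    await this.collection.updateOne({ _id: user.id }, { $set: user }, { upsert: true });
    return user;
  }
}

// src/modules/user/services/UserService.js (illustrative)
// The service depends on whatever repository it is given, not on MongoDB
class UserService {
  constructor(userRepository) {
    this.userRepository = userRepository;
  }

  async getUser(id) {
    return this.userRepository.findById(id);
  }
}

// Composition root: inject the concrete repository
// const userService = new UserService(new MongoUserRepository(db));

In tests, the same UserService can be constructed with an in-memory fake instead of MongoUserRepository, which is exactly the testability benefit listed in the consequences above.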
Managing ADRs
Store ADRs in your repository under a dedicated directory such as `docs/adr/`. Number them sequentially and link them from a table-of-contents file. Use Markdown format for easy viewing on GitHub/GitLab.
Some teams prefer to use tools like adr-tools to manage ADRs, which can automate creation, numbering, and linking.
When to Write an ADR
Write an ADR when making decisions that:
- Have significant impact on the codebase architecture
- Require team consensus
- Will be difficult to change later
- Introduce new patterns or technologies
- Deviate from established patterns
Strategy 3: Enforcing Module Boundaries
As Node.js applications grow, maintaining a clean architecture becomes increasingly challenging. Enforcing clear module boundaries helps prevent your codebase from turning into spaghetti.
Defining Module Boundaries
A module boundary defines:
- What a module exposes to the outside world
- What other modules it can depend on
- How other modules can interact with it
Example: Using Barrel Files to Control Exports
// modules/user/index.js - This is the public API of the user module
// Only export what should be accessible to other modules
// Public services
export { default as UserService } from './services/UserService';
export { default as AuthService } from './services/AuthService';
// Public models/types
export { User, UserRole } from './models/User';
// Public constants/enums
export { USER_EVENTS } from './constants';
// Do NOT export repositories, utilities, or internal services
// They are implementation details of this module
This approach makes it explicit which parts of a module are public API and which are internal implementation details.
Example: Enforcing Module Boundaries with TypeScript
For TypeScript projects, you can use path aliases and the `baseUrl` configuration to make module boundaries even clearer:
// tsconfig.json
{
  "compilerOptions": {
    "baseUrl": "src",
    "paths": {
      // Barrel imports (e.g. '@app/user' resolves to modules/user/index.ts)
      "@app/user": ["modules/user"],
      "@app/payment": ["modules/payment"],
      "@app/notification": ["modules/notification"],
      "@app/common": ["modules/common"],
      // Deep imports still resolve for the compiler; restrict them with linting
      "@app/user/*": ["modules/user/*"],
      "@app/payment/*": ["modules/payment/*"],
      "@app/notification/*": ["modules/notification/*"],
      "@app/common/*": ["modules/common/*"]
    }
  }
}
Then, in your code:
// Proper import using the public API
import { UserService, User } from '@app/user';
// Incorrect import that bypasses the public API
// This should be caught by our custom ESLint rule
import { UserRepository } from '@app/user/repositories/UserRepository';
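One caveat: the module-boundaries rule shown earlier matches physical paths like /modules/user/, so an alias import such as @app/user/repositories/UserRepository would actually slip past it as written. A minimal sketch of how the module lookup could be made alias-aware, assuming the @app/<module> convention from the tsconfig above:

// Sketch: resolve a module name from either a physical path or an @app alias
function getModuleName(path, knownModules) {
  return knownModules.find(
    module =>
      path.includes(`/modules/${module}/`) ||   // physical path, e.g. src/modules/user/...
      path === `@app/${module}` ||              // barrel import via alias
      path.startsWith(`@app/${module}/`)        // deep import via alias
  );
}

// Inside the rule, use it for both sides of the check:
// const currentModule  = getModuleName(context.getFilename(), Object.keys(modulesDependencyMap));
// const importedModule = getModuleName(node.source.value, Object.keys(modulesDependencyMap));

Alternatively, ESLint's built-in no-restricted-imports rule (via its patterns option) can ban deep imports into other modules outright, which is often enough if all you need is to protect the barrel files.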
Physical Organization: Domain-Driven Structure
Consider organizing your code physically around business domains rather than technical layers:
src/
├── modules/
│   ├── user/
│   │   ├── controllers/
│   │   ├── services/
│   │   ├── repositories/
│   │   ├── models/
│   │   ├── utils/
│   │   └── index.js        # Barrel file controlling exports
│   ├── payment/
│   │   ├── controllers/
│   │   ├── services/
│   │   └── ...
│   └── notification/
│       └── ...
├── shared/                 # Truly cross-cutting concerns
│   ├── logging/
│   ├── errors/
│   └── validation/
└── app.js
This approach helps developers understand which code belongs to which business domain, making it easier to locate files and understand relationships.
Strategy 4: Interface Contracts and Documentation
Clear interfaces between modules are essential for maintainability. Document your interfaces thoroughly to make them easier to understand and use correctly.
Example: JSDoc for Interface Documentation
/**
 * User Service - Manages user operations and authentication
 * @interface UserService
 */

/**
 * Creates a new user in the system
 *
 * @async
 * @function createUser
 * @param {Object} userData - User creation data
 * @param {string} userData.email - User's email address
 * @param {string} userData.password - User's password (will be hashed)
 * @param {string} [userData.name] - User's full name (optional)
 * @param {string} [userData.role='user'] - User's role (defaults to 'user')
 *
 * @throws {ValidationError} If user data is invalid
 * @throws {DuplicateError} If email already exists
 *
 * @returns {Promise<User>} The created user object
 *
 * @example
 * try {
 *   const user = await userService.createUser({
 *     email: 'user@example.com',
 *     password: 'securepassword',
 *     name: 'John Doe'
 *   });
 *   console.log(`User created with ID: ${user.id}`);
 * } catch (err) {
 *   if (err instanceof DuplicateError) {
 *     console.error('Email already in use');
 *   }
 * }
 */
async function createUser(userData) {
  // Implementation
}
This detailed documentation tells other developers:
- What the function does
- What parameters it accepts and their types
- What errors it might throw
- What it returns
- How to use it correctly (with an example)
Example: TypeScript Interfaces
For TypeScript projects, use interfaces to define clear contracts:
export interface UserService {
  /**
   * Creates a new user in the system
   *
   * @throws {ValidationError} If user data is invalid
   * @throws {DuplicateError} If email already exists
   */
  createUser(userData: UserCreationDto): Promise<User>;

  /**
   * Authenticates a user with email and password
   *
   * @throws {AuthenticationError} If credentials are invalid
   */
  authenticate(email: string, password: string): Promise<AuthToken>;

  /**
   * Retrieves a user by ID
   *
   * @throws {NotFoundError} If user doesn't exist
   */
  getUserById(id: string): Promise<User>;

  // Other methods...
}

export interface UserCreationDto {
  email: string;
  password: string;
  name?: string;
  role?: UserRole;
}

export interface User {
  id: string;
  email: string;
  name: string | null;
  role: UserRole;
  createdAt: Date;
  updatedAt: Date;
}

export enum UserRole {
  ADMIN = 'admin',
  USER = 'user',
  GUEST = 'guest'
}
TypeScript interfaces provide compile-time checks that ensure your code adheres to these contracts.
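To see what that buys you, here is a minimal sketch of a concrete implementation. The ../contracts path and the repository shape are assumptions made for the example, not a prescribed layout:

// modules/user/services/DefaultUserService.ts (illustrative)
import { UserService, UserCreationDto, User, AuthToken } from '../contracts'; // hypothetical file holding the interfaces above

// Hypothetical repository contract the service depends on
interface UserRepositoryLike {
  insert(data: UserCreationDto): Promise<User>;
  findById(id: string): Promise<User | null>;
}

export class DefaultUserService implements UserService {
  constructor(private readonly users: UserRepositoryLike) {}

  async createUser(userData: UserCreationDto): Promise<User> {
    // Validation and duplicate-email checks would go here
    return this.users.insert(userData);
  }

  async authenticate(email: string, password: string): Promise<AuthToken> {
    throw new Error('Not implemented in this sketch');
  }

  async getUserById(id: string): Promise<User> {
    const user = await this.users.findById(id);
    if (!user) {
      throw new Error(`User ${id} not found`); // a real NotFoundError in practice
    }
    return user;
  }
}

// Removing getUserById() above, or changing its return type,
// would now be a compile-time error rather than a runtime surprise.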
Strategy 5: Integration Tests as Living Documentation
Well-written integration tests serve as executable documentation that demonstrates how different parts of your system should work together.
Example: Testing Module Integration
// tests/integration/payment-processing.test.js
describe('Payment Processing Flow', () => {
  it('should process a valid payment and send notification', async () => {
    // Given
    const user = await createTestUser();
    const paymentDetails = {
      amount: 100,
      currency: 'USD',
      paymentMethod: 'card',
      cardToken: 'valid_test_token'
    };

    // When
    const payment = await paymentService.processPayment(user.id, paymentDetails);

    // Then
    expect(payment.status).toBe('completed');

    // Verify notification was sent
    const notifications = await notificationRepository.findByUserId(user.id);
    expect(notifications).toHaveLength(1);
    expect(notifications[0].type).toBe('payment_confirmation');
    expect(notifications[0].data.paymentId).toBe(payment.id);
  });

  it('should handle insufficient funds correctly', async () => {
    // Given
    const user = await createTestUser();
    const paymentDetails = {
      amount: 100,
      currency: 'USD',
      paymentMethod: 'card',
      cardToken: 'insufficient_funds_token'
    };

    // When
    const payment = await paymentService.processPayment(user.id, paymentDetails);

    // Then
    expect(payment.status).toBe('failed');
    expect(payment.failureReason).toBe('insufficient_funds');

    // Verify notification was sent
    const notifications = await notificationRepository.findByUserId(user.id);
    expect(notifications).toHaveLength(1);
    expect(notifications[0].type).toBe('payment_failure');
  });
});
These tests document the expected flow of data through your system and the interactions between modules.
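Helpers like createTestUser (used above but not shown) are part of that documentation too, so keep them small and explicit. A possible sketch, assuming some test wiring module exposes the real userService:

// tests/helpers/createTestUser.js (illustrative)
const crypto = require('crypto');
const { userService } = require('./testContainer'); // hypothetical test wiring for the service layer

// Creates a throwaway user through the real service layer so that
// integration tests exercise the same code paths production uses.
async function createTestUser(overrides = {}) {
  return userService.createUser({
    email: `test-${crypto.randomUUID()}@example.com`,
    password: 'test-password-123',
    name: 'Test User',
    ...overrides
  });
}

module.exports = { createTestUser };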
Putting It All Together
Let’s look at how these strategies work together to create a maintainable codebase:
- Custom ESLint Rules enforce architectural patterns automatically during development
- Architectural Decision Records document why certain approaches were chosen
- Module Boundaries keep your codebase organized and prevent unwanted dependencies
- Interface Contracts define how modules interact with each other
- Integration Tests verify and document that everything works together correctly
Real-World Impact
On a recent project, we implemented these strategies for a team of 15 developers working on a complex Node.js application. The results were impressive:
- Onboarding time for new developers decreased from 3 weeks to 1 week
- Bug rate in new features decreased by approximately 30%
- Velocity increased as developers spent less time figuring out how things work
- Bus factor improved—no more knowledge silos with single owners
Conclusion
Writing maintainable Node.js code isn’t just about clean functions or following a style guide. It’s about creating a codebase that’s resilient to change, easy to understand, and accommodates multiple developers working together effectively.
The strategies outlined here—custom ESLint rules, Architectural Decision Records, proper module boundaries, clear interface contracts, and comprehensive tests—create a foundation for maintainable code that can evolve with your business needs.
Remember that maintainability is not a destination but a continuous practice. Regularly revisit your architectural decisions, refine your module boundaries, and update your documentation as your application evolves.
What strategies have helped you maintain large Node.js codebases? Share your experiences in the comments!