
Litestar DTOs: The 'Advanced Feature' I Didn't Need (And When You Might)

I thought I was doing it wrong by not using DTOs. Turns out, sometimes the simple approach is exactly right—here's how to know which to choose

The Moment of Self-Doubt

I was knee-deep in my FastAPI to Litestar migration, feeling pretty good about myself. I’d just wrapped my head around msgspec and was happily writing explicit schema conversions. Everything was working beautifully.

Then I made the mistake of looking at Litestar’s full-stack example repository.

class UserDTO(SQLAlchemyDTO[User]):
    config = DTOConfig(
        exclude={"password_hash", "sessions", "oauth_accounts"},
        rename_strategy="camel",
        max_nested_depth=2,
    )

@get("/users/{user_id}", return_dto=UserDTO)
async def get_user(self, user_id: UUID) -> User:
    return await user_service.get(user_id)

Wait. What?

The controller just returns the raw SQLAlchemy model? No manual conversion? No explicit schema class? Just… return the model and some magic DTO thing handles everything?

And everyone in the examples was using it. Every. Single. Endpoint.

That familiar developer anxiety crept in: Am I doing this wrong?

My “Primitive” Approach

Here’s what I’d been doing, blissfully unaware I might be committing some architectural sin:

# Define explicit response schemas
class UserResponse(CamelizedBaseStruct):
    id: UUID
    email: str
    full_name: str | None = None
    is_admin: bool = False

# Manual conversion in controllers
@get("/profile")
async def profile(self, current_user: AppUser) -> UserResponse:
    return UserResponse(
        id=current_user.id,
        email=current_user.email,
        full_name=current_user.full_name,
        is_admin=current_user.is_admin,
    )

It worked. It was clear. I could see exactly what data was being exposed.

But now I was looking at this DTO thing thinking: “Should I be using that instead? Is my code… amateur hour?”

The Investigation Begins

I did what any self-respecting developer does when feeling inadequate: I asked ChatGPT.

“Should I be using Litestar’s DTO system instead of explicit msgspec schemas?”

The conversation that followed was enlightening. It wasn’t a simple “yes” or “no”—it was a “depends on what you’re building.”

That’s when I realized I needed to actually understand what DTOs are and what problem they solve. Because apparently, not every “advanced feature” is automatically better.

What Even IS a DTO?

DTO stands for Data Transfer Object. In Litestar’s context, it’s a transformation layer between your internal data models (SQLAlchemy, Pydantic, msgspec) and what you send/receive over the wire.

Think of it as a smart template system. You define transformation rules once, and Litestar applies them automatically:

  • “Exclude these sensitive fields”
  • “Rename snake_case to camelCase”
  • “Only include these specific fields”
  • “Serialize nested relationships up to depth 2”

Instead of writing manual conversion code in every endpoint, you configure the DTO once and attach it to your routes.

The key insight: DTOs are platform-agnostic. They work with Pydantic models, msgspec structs, dataclasses, and SQLAlchemy models through different backends (PydanticDTO, MsgspecDTO, SQLAlchemyDTO). The concept remains the same regardless of which you use.
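
To make that concrete, here is a minimal sketch of the same idea applied to two different backends (the model and struct names are illustrative, and exact import paths vary a little between Litestar 2.x releases):

from litestar.dto import DTOConfig, MsgspecDTO
from litestar.plugins.sqlalchemy import SQLAlchemyDTO

# Same concept, different backend: hide sensitive fields, camelize the rest
class UserModelDTO(SQLAlchemyDTO[User]):  # wraps a SQLAlchemy model
    config = DTOConfig(exclude={"password_hash"}, rename_strategy="camel")

class UserStructDTO(MsgspecDTO[UserStruct]):  # wraps a msgspec struct
    config = DTOConfig(rename_strategy="camel")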

But here’s what I was really wondering: Do I actually need this?

The Side-by-Side Reality Check

Let me show you what I was comparing. Here’s a real endpoint from my auth system:

My Current Approach (Explicit Schemas)

# schemas.py - Define exactly what to expose
class LoginRequest(CamelizedBaseStruct):
    email: str
    password: str

class UserResponse(CamelizedBaseStruct):
    id: UUID
    email: str
    full_name: str | None = None
    is_admin: bool = False

class LoginResponse(CamelizedBaseStruct):
    access_token: str
    token_type: str = "Bearer"
    expires_in: int
    user: UserResponse

# controller.py
@post("/login")
async def login(
    self, 
    data: LoginRequest,
    user_service: UserService,
) -> LoginResponse:
    user = await user_service.authenticate(data.email, data.password)
    token = create_access_token(user.id)
    
    return LoginResponse(
        access_token=token,
        expires_in=3600,
        user=UserResponse(
            id=user.id,
            email=user.email,
            full_name=user.full_name,
            is_admin=user.is_admin,
        ),
    )

Line count: about 35 total. Clear, explicit, every exposed field visible at a glance.

The DTO Approach

# schemas.py - Configure transformation rules
class UserResponseDTO(SQLAlchemyDTO[AppUser]):
    config = DTOConfig(
        exclude={"password_hash", "sessions", "oauth_accounts", "credit_balance"},
        rename_strategy="camel",
    )

class LoginRequestDTO(MsgspecDTO[LoginRequest]):
    config = DTOConfig(rename_strategy="camel")

# controller.py
@post("/login", data=LoginRequestDTO, return_dto=UserResponseDTO)
async def login(
    self,
    data: DTOData[LoginRequest],
    user_service: UserService,
) -> AppUser:
    request = data.create_instance()
    user = await user_service.authenticate(request.email, request.password)
    # ... token creation
    return user  # DTO handles conversion

Looking at these side-by-side, the DTO version seemed… more complex? Not simpler?

For my use case:

  • 2 fields in the request
  • 4 fields in the user response
  • Different fields in each response type (login vs profile vs admin)

The DTO configuration wasn’t saving me any code. If anything, it was adding abstraction for no clear benefit.

The Lightbulb Moment

Then ChatGPT gave me an example that made everything click.

“Imagine you have a User model with 30 fields, and you have 10 different endpoints that return users with only slight variations.”

Oh.

Without DTOs:

class UserListResponse(CamelizedBaseStruct):
    id: UUID
    email: str
    username: str
    full_name: str
    # ... 24 more fields
    created_at: datetime
    updated_at: datetime

class UserDetailResponse(CamelizedBaseStruct):
    id: UUID
    email: str
    username: str
    full_name: str
    # ... 24 more fields (SAME AS ABOVE)
    created_at: datetime
    updated_at: datetime
    last_login_at: datetime  # ONE extra field
    login_count: int          # ONE extra field

class UserAdminResponse(CamelizedBaseStruct):
    id: UUID
    email: str
    username: str
    full_name: str
    # ... 24 more fields (SAME AS ABOVE AGAIN)
    created_at: datetime
    updated_at: datetime
    last_login_at: datetime
    login_count: int
    password_hash: str  # Admin can see this

You’d be copying and pasting 30 fields across three schemas. If you need to add a new field or rename one, you’d have to update it in three places. Error-prone. Painful during refactoring.

With DTOs:

# Define base fields once via the model
# Then configure variations

class UserListDTO(SQLAlchemyDTO[User]):
    config = DTOConfig(
        exclude={"password_hash", "last_login_at", "login_count"},
        rename_strategy="camel",
    )

class UserDetailDTO(SQLAlchemyDTO[User]):
    config = DTOConfig(
        exclude={"password_hash"},
        rename_strategy="camel",
    )

class UserAdminDTO(SQLAlchemyDTO[User]):
    config = DTOConfig(
        rename_strategy="camel",  # Include everything
    )

Ah. Now I see it.

DTOs shine when you have large schemas with small variations. Instead of copying 30 fields and playing “spot the difference” during code reviews, you define transformations: “include everything except X” or “exclude only Y.”

But my auth endpoints? 4-8 fields, completely different shapes. No duplication to eliminate. No variations to configure.

When DTOs Actually Make Sense

Once I understood the problem DTOs solve, I could see exactly when they’d be valuable:

1. Large Schemas with Minor Variations

Perfect for:

  • User models with 20+ fields
  • Product catalogs with extensive metadata
  • Admin panels with many similar CRUD endpoints
# One model, many views
class ProductListDTO(SQLAlchemyDTO[Product]):
    config = DTOConfig(
        exclude={"internal_cost", "supplier_details", "inventory_history"},
    )

class ProductDetailDTO(SQLAlchemyDTO[Product]):
    config = DTOConfig(
        exclude={"internal_cost", "supplier_details"},  # Show more
    )

class ProductAdminDTO(SQLAlchemyDTO[Product]):
    config = DTOConfig()  # Show everything

Instead of defining 30 fields three times, you configure what to exclude.

2. Complex Nested Relationships

When you have models that reference other models:

# Model structure
class Case(Base):
    id: Mapped[UUID]
    name: Mapped[str]
    user: Mapped[User]  # Relationship
    documents: Mapped[list[Document]]  # Relationship
    workflow_task: Mapped[WorkflowTask]  # Relationship

# Without DTO - manual nesting
class DocumentResponse(CamelizedBaseStruct):
    id: UUID
    filename: str

class WorkflowTaskResponse(CamelizedBaseStruct):
    id: UUID
    stage: str

class UserResponse(CamelizedBaseStruct):
    id: UUID
    email: str

class CaseResponse(CamelizedBaseStruct):
    id: UUID
    name: str
    user: UserResponse
    documents: list[DocumentResponse]
    workflow_task: WorkflowTaskResponse

# Manual conversion - tedious and error-prone
@get("/cases/{case_id}")
async def get_case(self, case_id: UUID) -> CaseResponse:
    case = await case_service.get(case_id)
    return CaseResponse(
        id=case.id,
        name=case.name,
        user=UserResponse(id=case.user.id, email=case.user.email),
        documents=[
            DocumentResponse(id=doc.id, filename=doc.filename)
            for doc in case.documents
        ],
        workflow_task=WorkflowTaskResponse(
            id=case.workflow_task.id,
            stage=case.workflow_task.stage,
        ),
    )

That’s a lot of manual mapping!

With DTOs:

class CaseDTO(SQLAlchemyDTO[Case]):
    config = DTOConfig(
        max_nested_depth=2,  # Auto-serialize relationships
        rename_strategy="camel",
    )

@get("/cases/{case_id}", return_dto=CaseDTO)
async def get_case(self, case_id: UUID) -> Case:
    return await case_service.get(case_id)  # DTO handles nesting

The DTO automatically serializes nested relationships. The resulting JSON preserves the structure:

{
  "id": "123...",
  "name": "Smith v. Jones",
  "user": {
    "id": "456...",
    "email": "[email protected]",
    "fullName": "John Lawyer"
  },
  "documents": [
    {"id": "789...", "filename": "contract.pdf"}
  ],
  "workflowTask": {
    "id": "012...",
    "stage": "completed"
  }
}

For deeply nested structures, DTOs eliminate a LOT of boilerplate.

3. Consistent Transformations Across Many Endpoints

If you have 20 endpoints that all return the same data with the same transformations:

# Without DTOs - every schema repeats its own rename behavior
class UserResponse(CamelizedBaseStruct):
    id: UUID
    email: str
    full_name: str  # serialized as "fullName" by the base struct
    is_admin: bool  # serialized as "isAdmin" by the base struct

# With DTO - configure once
class UserDTO(SQLAlchemyDTO[User]):
    config = DTOConfig(
        rename_strategy="camel",  # Applies to all fields automatically
    )

If you later decide to use PascalCase instead of camelCase, you change one config line instead of 20 schema classes.
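
You can take this one step further by sharing a single config object, so the change really is one line (a sketch; "camel" and "pascal" are both built-in rename strategies):

# Defined once, reused by every DTO that should follow the house style
SHARED_CONFIG = DTOConfig(rename_strategy="camel")  # flip to "pascal" here

class UserDTO(SQLAlchemyDTO[User]):
    config = SHARED_CONFIG

class ProductDTO(SQLAlchemyDTO[Product]):
    config = SHARED_CONFIG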

4. Bidirectional Use (Request + Response)

DTOs can handle both input validation and output serialization:

class UserCreateDTO(SQLAlchemyDTO[User]):
    config = DTOConfig(
        include={"email", "password", "full_name"},  # Only these for creation
        rename_strategy="camel",
    )

class UserResponseDTO(SQLAlchemyDTO[User]):
    config = DTOConfig(
        exclude={"password_hash"},  # Don't expose sensitive fields
        rename_strategy="camel",
    )

@post("/users", data=UserCreateDTO, return_dto=UserResponseDTO)
async def create_user(self, data: DTOData[User]) -> User:
    user_data = data.as_builtins()  # Validated dict
    user = await user_service.create(user_data)
    return user  # Auto-converted to response

Same model, different “views” for input vs output.
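
One wrinkle in that example: the client sends a plain password, but the model stores a hash. Here’s a sketch of how I’d bridge that, dropping down to the validated dict and constructing the model explicitly (hash_password is an assumed helper, not a Litestar API):

@post("/users", dto=UserCreateDTO, return_dto=UserResponseDTO)
async def create_user(self, data: DTOData[User]) -> User:
    raw = data.as_builtins()  # validated dict, keyed by model field names
    user = User(
        email=raw["email"],
        full_name=raw.get("full_name"),
        password_hash=hash_password(raw["password"]),
    )
    return await user_service.create(user)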

The Feature Buffet: What DTOs Can Do

Now that I understood when to use DTOs, I explored what they could actually do. Here are the key capabilities (without getting into syntax details):

Field Control:

  • Exclude fields: Hide sensitive data like password_hash, internal_notes
  • Include only specific fields: Whitelist approach instead of blacklist
  • Partial models: Make all fields optional (useful for PATCH endpoints)

Transformations:

  • Rename strategy: Convert between snake_case, camelCase, PascalCase
  • Rename individual fields: Custom mappings for specific fields
  • Computed fields: Add calculated values not in the model

Relationship Handling:

  • Max nested depth: Control how deep relationship serialization goes
  • Circular reference handling: Prevent infinite loops in self-referential models

Validation:

  • Type safety: DTOData provides validated conversion helpers
  • Integration with model validators: Works with Pydantic/msgspec validation

The point isn’t to memorize all these options—it’s to recognize that DTOs are a configuration system for data transformation. When you have complex transformation needs across many endpoints, that configuration approach starts to pay off.
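
As a taste of how these options combine, here is a minimal sketch of a PATCH endpoint using a partial DTO plus a one-off field rename (the endpoint and field names are illustrative, not from my real app):

class UserPatchDTO(SQLAlchemyDTO[User]):
    config = DTOConfig(
        partial=True,                     # every field becomes optional
        exclude={"id", "password_hash"},  # never client-writable
        rename_fields={"full_name": "displayName"},  # one-off mapping
    )

@patch("/users/{user_id}", dto=UserPatchDTO, return_dto=UserResponseDTO)
async def update_user(self, user_id: UUID, data: DTOData[User]) -> User:
    user = await user_service.get(user_id)
    # update_instance applies only the fields the client actually sent
    return await user_service.update(data.update_instance(user))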

The Honest Trade-offs

After all this investigation, I could finally see both sides clearly.

When DTOs Win

Scenario: E-commerce admin panel with 50+ endpoints, User/Product/Order models with 25+ fields each

  • ✅ Less code duplication (define fields once)
  • ✅ Consistent transformations (one rename_strategy for all)
  • ✅ Safer refactoring (change model, DTOs adapt)
  • ✅ Automatic nested serialization (complex object graphs)
  • ✅ Centralized field exclusion (security by default)

Pain points: More abstraction, harder debugging when things go wrong, learning curve for DTOConfig

When Explicit Schemas Win

Scenario: Auth system with 8 endpoints, small distinct request/response shapes

  • ✅ Crystal clear what data is exposed (security-critical)
  • ✅ Simple debugging (no transformation layer)
  • ✅ Easy to understand (explicit is better than implicit)
  • ✅ Perfect for small, distinct schemas
  • ✅ Full control over every field

Pain points: Manual conversion code, potential field drift if you’re not careful, more code for large schemas

My Decision: Explicit Schemas (For Now)

For my auth system, the choice became obvious:

My schemas are:

  • Small (4-8 fields)
  • Distinct (login, register, profile all have different shapes)
  • Security-critical (I want to be VERY explicit about what’s exposed)

My team values:

  • Explicitness over magic
  • Simple debugging
  • Clear code over clever code

DTOs would add complexity without solving any problem I actually have.

But here’s the key realization: This isn’t a forever decision.

If I later build:

  • An admin dashboard with 30 similar CRUD endpoints
  • A reporting system with complex nested data
  • A public API with many variations of User/Product responses

Then DTOs would make perfect sense. I’ll know when I need them—when I find myself copy-pasting large schemas and playing “spot the difference” during code reviews.

The Real Lesson: Question the “Best Practices”

This whole journey taught me something more valuable than just when to use DTOs.

I started feeling inadequate because I wasn’t using an “advanced feature” everyone else seemed to be using. The example code used it, so clearly I was doing something wrong, right?

Wrong.

“Advanced features” aren’t better—they’re just tools for specific problems.

The Litestar examples use DTOs because they’re showing off the framework’s capabilities. They’re demonstrating what’s possible, not what’s mandatory. A full-stack example repository naturally has complex nested data and many similar endpoints—the perfect use case for DTOs.

But your codebase might be different. And that’s okay.

The best code isn’t the one that uses the most advanced features. It’s the one that:

  • Solves your actual problems
  • Your team can understand and maintain
  • Fits your specific context

Sometimes that means reaching for powerful abstraction layers like DTOs. Sometimes it means writing simple, explicit conversion code.

The skill isn’t in knowing all the tools—it’s in knowing which tool fits which problem.

How to Choose for Your Project

Here’s the decision framework I landed on:

Use Explicit Schemas When

  • Schemas are small (< 15 fields)
  • Each endpoint has distinct response shapes
  • Security is critical (auth, payments, PII)
  • Team is small or values explicitness
  • You’re building a simple CRUD API

Use DTOs When

  • Schemas are large (> 20 fields)
  • Many endpoints return similar data with slight variations
  • Complex nested relationships need serialization
  • You have 20+ endpoints with consistent transformations
  • You’re building an admin panel or complex dashboard
  • Field exclusion/security is error-prone without automation

Don’t Decide Yet If

You’re just starting the project. Build a few endpoints with explicit schemas first. If you find yourself copy-pasting large schemas and thinking “there has to be a better way,” that’s when you look into DTOs.

Don’t add abstraction until you feel the pain it’s meant to solve.

Where I Am Now

I’m still using explicit msgspec schemas for my auth system. No DTOs. No automatic transformations. Just clear, simple conversion code.

And I’m completely confident in that decision.

Not because DTOs are bad—they’re actually quite elegant for the right use case. But because I understand why they exist and when they help.

That moment of self-doubt when I saw the example code using DTOs? It turned into a learning opportunity. I don’t feel inadequate anymore. I feel informed.

When I eventually build that admin dashboard with 50 endpoints and complex nested data, I’ll reach for DTOs with confidence. I’ll know exactly why I’m using them and how to configure them effectively.

But for now? Explicit is exactly right.

The Takeaway

Next time you see an “advanced feature” in example code and wonder if you’re doing it wrong by not using it, remember:

  1. Understand the problem it solves - What pain point does this feature address?
  2. Evaluate if you have that problem - Do you actually need this solution?
  3. Consider the trade-offs - What are you gaining vs. what complexity are you adding?
  4. Choose what fits your context - There’s no one-size-fits-all “best practice”

Sometimes the simple approach is the right approach. And that’s not a failure—it’s good engineering judgment.

DTOs are a powerful tool for data transformation at scale. But power you don’t need is just complexity you have to maintain.

Know your tools. Understand their trade-offs. Choose wisely.


(Written by Human, improved using AI where applicable.)