
A Practical Guide to Testing
Nov 22, 2025 • By Ege Uysal
Testing isn't glamorous. It's repetitive, it feels like extra work, and when you're trying to ship fast, it's tempting to skip it entirely. But here's the thing: once your app has multiple screens, edge cases, and real users depending on it, testing becomes non-negotiable.
I didn't start writing tests because of some catastrophic bug or dramatic failure. I started because I recognized a simple truth: different screens show different data, edge cases exist everywhere, and manually checking everything after each change doesn't scale. Testing is a tool that saves time in the long run, especially when combined with pre-commit hooks that catch issues before they ever reach production.
This guide covers everything you need to know about testing in Go and TypeScript: mocking dependencies, achieving meaningful coverage, and building a testing workflow that doesn't slow you down.
The Foundation: Why Test?
Testing serves three purposes: confidence, documentation, and safety. You know your code works as intended, your tests show how your code should behave, and you can refactor without breaking things.
The key is testing smart, not testing everything. As a solo developer building production apps, you need to be strategic about where you invest your testing effort.
Testing in Go: The Stack
For Go, I use a combination of three tools: the standard library testing package as the foundation, Testify for cleaner assertions and test suites, and GoMock for generating mocks when needed.
This combination gives you everything you need without over-complicating things.
Table-Driven Tests: The Go Way
Go developers love table-driven tests, and for good reason. They're clean, scalable, and easy to extend. You define a slice of test cases, each with a name, inputs, and expected outputs, then loop through them. This approach makes it trivial to add new test cases without duplicating code.
With Testify, you can make assertions even cleaner, replacing verbose if statements with simple assertion methods.
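Here's a minimal sketch of the pattern. The function under test, `Slugify`, is a hypothetical example; in a real `_test.go` file the loop body would call `t.Run(tc.name, ...)` and use Testify's `assert.Equal` instead of a manual comparison, but the shape of the table is the same.

```go
package main

import (
	"fmt"
	"strings"
)

// Slugify is a hypothetical function under test: it lowercases a title
// and joins its words with hyphens.
func Slugify(title string) string {
	return strings.ToLower(strings.Join(strings.Fields(title), "-"))
}

func main() {
	// A table of named cases: each row is one scenario. Adding a new
	// test is just adding a new row.
	cases := []struct {
		name  string
		input string
		want  string
	}{
		{"simple", "Hello World", "hello-world"},
		{"extra spaces", "  Go   Testing  ", "go-testing"},
		{"already clean", "ready", "ready"},
	}

	for _, tc := range cases {
		got := Slugify(tc.input)
		if got != tc.want {
			panic(fmt.Sprintf("%s: Slugify(%q) = %q, want %q", tc.name, tc.input, got, tc.want))
		}
	}
	fmt.Println("all cases passed")
}
```

Because each case carries its own name, a failure tells you exactly which scenario broke, which matters once the table grows past a handful of rows.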
Mocking: Testing Without Dependencies
Mocking is essential when you're testing code that depends on external services: databases, APIs, file systems. You don't want your tests to fail because your database is down or an API is rate-limiting you.
Mocking Database Calls
When you're using pgx with sqlc for database operations, you typically define repository interfaces. Your service depends on these interfaces, which makes them perfect candidates for mocking.
To test business logic without hitting a real database, you create mock implementations of your repository interfaces. You set up expectations for what methods should be called with what arguments, then verify those expectations were met.
This approach lets you test scenarios like "what happens when a user already exists" or "what happens when the database returns an error" without actually needing those conditions to exist in a real database.
When to Mock vs When to Use Real Dependencies
Mock when you're testing external APIs, database logic in unit tests, time-dependent code, or file I/O operations.
Use real dependencies when testing simple pure functions, running integration tests, or testing database queries with a dedicated test database.
Testing Concurrent Code
Go's concurrency is powerful but tricky to test. The key is using channels and timeouts to ensure your goroutines complete as expected. You can launch goroutines in your tests and use select statements with timeout channels to verify they finish within reasonable time bounds.
Frontend Testing: Vitest and Playwright
For TypeScript and Next.js applications, I use two layers of testing: unit tests with Vitest and end-to-end tests with Playwright.
Unit Tests with Vitest
Vitest is fast and has great mocking capabilities. You can mock fetch calls, API responses, and any external dependencies. The key is testing your logic in isolation: does your function handle success cases correctly? Does it handle errors? Does it transform data as expected?
E2E Tests with Playwright
Playwright tests simulate real user behavior. You navigate to pages, fill in forms, click buttons, and verify that the right things happen. These tests catch integration issues that unit tests miss, like "does the login form actually redirect to the dashboard?"
Coverage: Aiming for 80%
I aim for 80% code coverage, and here's why: anything beyond that is usually overkill. Chasing 100% coverage means testing trivial code, getters and setters, and edge cases that will never happen in production.
Setting Up Coverage with GitHub Actions
I use GitHub Actions to run tests on every push and pull request, generating coverage reports that get uploaded to Codecov. Codecov gives you visual insights into which parts of your codebase are covered and which aren't. It integrates directly into pull requests, showing coverage changes before you merge.
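A minimal workflow along these lines might look like the following; the file path, Go version, and action versions are assumptions, so adapt them to your repository.

```yaml
# .github/workflows/test.yml (illustrative sketch)
name: tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.22"
      - name: Run tests with coverage
        run: go test -coverprofile=coverage.out ./...
      - name: Upload to Codecov
        uses: codecov/codecov-action@v4
        with:
          files: coverage.out
```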
My Testing Workflow
Here's how I approach testing in practice:
First, I write code until it works the way I want it to. Once I know the code does what I need, I write tests to lock in that behavior. Testing is repetitive, so I use AI to generate test cases faster. Tests run automatically before commits through pre-commit hooks to catch issues early. And I focus on business logic, skipping trivial components.
What to Test (and What to Skip)
Always test business logic like calculations, validation, and workflows. Test API endpoints, database operations, authentication and authorization, and edge cases with error handling.
Skip or test lightly on UI components unless they contain logic, simple getters and setters, third-party library integrations that you can assume work, and configuration files.
As a solo developer, you don't need Storybook or exhaustive component testing. Test what matters: the logic that makes your app work.
Common Testing Mistakes
Mistake 1: Chasing 100% Coverage
Don't do it. I made this mistake early on. You'll waste time testing trivial code that adds no value. 80% is the sweet spot.
Mistake 2: Testing Implementation Details
Test behavior, not implementation. If you refactor your code and tests break even though the behavior didn't change, your tests are too coupled to implementation.
Instead of testing internal state or private methods, test the public API and the observable behavior of your code.
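As a concrete sketch, consider a hypothetical `RateLimiter`. A behavioral test asserts what callers can observe through `Allow`; it never peeks at the internal counter, so swapping the implementation for, say, a token bucket wouldn't break it.

```go
package main

import "fmt"

// RateLimiter is a hypothetical type: allow up to n calls, then refuse.
type RateLimiter struct {
	remaining int // internal state: tests should NOT assert on this directly
}

func NewRateLimiter(n int) *RateLimiter { return &RateLimiter{remaining: n} }

// Allow is the public API; behavior-focused tests go through it.
func (r *RateLimiter) Allow() bool {
	if r.remaining <= 0 {
		return false
	}
	r.remaining--
	return true
}

func main() {
	rl := NewRateLimiter(2)
	// Behavioral check: two calls pass, the third is refused. Any
	// refactor that preserves this contract keeps the test green.
	fmt.Println(rl.Allow(), rl.Allow(), rl.Allow())
}
```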
Mistake 3: Ignoring Integration Tests
Unit tests are great, but they don't catch issues that happen when systems interact. Have at least a few integration tests that exercise your full stack.
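In Go, the standard library's `net/http/httptest` gets you a long way here: it starts a real server and drives it with real HTTP requests. The handler below is a trivial stand-in; in an actual integration test you'd wire up your real router with its real dependencies.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// healthHandler stands in for a real route; an integration test would
// mount the application's actual handler here.
func healthHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	fmt.Fprint(w, `{"status":"ok"}`)
}

// checkHealth spins up a real HTTP server and makes a real request,
// exercising serialization, routing, and status codes together.
func checkHealth() (int, string) {
	srv := httptest.NewServer(http.HandlerFunc(healthHandler))
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/health")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body)
}

func main() {
	status, body := checkHealth()
	fmt.Println(status, body)
}
```

Unlike a unit test against a mocked handler, this catches the glue-level bugs: a route registered on the wrong path, a missing header, a response body that doesn't serialize the way you assumed.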
Testing for SaaS: What Matters Most
If you're building your first production SaaS like I am with Ryva, here's what to prioritize.
Test heavily: payment flows (you can't afford bugs there), user authentication and authorization, data integrity (especially in multi-tenant scenarios), and API endpoints that external users depend on.
Test lightly or skip initially: admin dashboards (if only you use them), internal tools, marketing pages, and simple CRUD operations.
You can always add tests later. Ship first, then add coverage where bugs actually appear.
Conclusion
Testing is a tool, not a goal. The point isn't to achieve perfect coverage or write thousands of tests. The point is to ship code with confidence, catch bugs early, and make changes without breaking things.
Start with 80% coverage, focus on business logic, and use AI to handle the repetitive parts. Set up pre-commit hooks and CI/CD so tests run automatically. And remember: you're testing to move faster, not slower.
Your tests should serve your development workflow, not the other way around.