
“We could just build this ourselves.”
If you’ve been in engineering leadership for any length of time, you’ve heard this phrase in countless discussions about tooling. It’s a natural response, especially when it comes to engineering intelligence tools. After all, you have a team of talented engineers who build software for a living — why not point those skills at your internal needs?
However, the reality is more complex than it first appears. Having spoken with hundreds of engineering leaders and helped organizations navigate this exact decision, I’ve discovered some clear patterns in the journey from “let’s just build it” to a more nuanced understanding of the tradeoffs involved.
I figured it was about time to discuss the typical journey organizations take when considering engineering intelligence tooling: where the build-it-yourself path usually leads, when a quick vibe-coded dashboard is enough, and how to decide whether building or buying is right for your business.
So here goes.
Most engineering organizations follow a similar path when thinking about engineering effectiveness tooling.
It usually starts with solving isolated corners of the problem — maybe you want to track DORA metrics, improve CI pipelines, or gather more feedback from developers.
These initial efforts mostly come from good intentions. Someone notices an issue in delivery flow, or a directive comes down from leadership to improve visibility into engineering metrics. A small group starts building a simple dashboard, perhaps using tools you’re already paying for, like a BI platform connected to GitHub or Jira data.
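To make that concrete, here’s roughly where these efforts start: a short script that pulls merged pull requests from the GitHub REST API and computes an average cycle time. It’s a minimal sketch, not a recommendation: the acme/storefront repo is made up, a GITHUB_TOKEN environment variable is assumed, and “opened to merged” is just one of several reasonable definitions of cycle time.

```python
import os
from datetime import datetime

import requests

# Hypothetical repo; the endpoint and response fields are GitHub's REST API.
url = "https://api.github.com/repos/acme/storefront/pulls"
headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
prs = requests.get(url, headers=headers, params={"state": "closed", "per_page": 100}).json()

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M:%SZ"  # GitHub's timestamp format
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

# Closed-but-unmerged PRs have merged_at = null, so skip them.
cycle_times = [hours_between(pr["created_at"], pr["merged_at"]) for pr in prs if pr["merged_at"]]

if cycle_times:
    print(f"{len(cycle_times)} merged PRs, avg cycle time {sum(cycle_times) / len(cycle_times):.1f}h")
```

One script, one number. It works right up until someone asks which PRs, which teams, or what last quarter looked like.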
As the organization grows, teams try stitching these corner-of-the-problem solutions together, creating a patchwork of tooling that requires an increasing amount of maintenance. Each piece might solve its narrow problem well, but the pieces typically don’t connect to provide a comprehensive view of engineering effectiveness.
The third phase is where reality sets in. As these projects progress, companies discover data quality and maintenance issues that weren’t apparent at the start. Teams realize that engineering data is surprisingly complex. Seemingly simple questions like “which teams do we have and who’s in them?” or “what’s the relationship between this Jira project and that GitHub repository?” become mapping nightmares that constantly need updating.
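In practice, that mapping layer tends to start as a hand-edited config along these lines (all names invented), and nothing enforces that it stays true:

```python
# Hand-maintained org mapping (invented names). It drifts silently with
# every reorg, hire, departure, and new repository.
TEAMS = {
    "payments": {"alice", "bob"},
    "platform": {"carol"},  # Dana moved to payments last month; nobody updated this
}

JIRA_PROJECT_TO_REPOS = {
    "PAY": ["payments-api", "billing-worker"],
    "PLAT": ["infra"],  # the new "deploy-tools" repo isn't mapped anywhere yet
}
```

Every metric built on top of a mapping like this quietly inherits its staleness.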
Then comes the recognition of total cost. Only after significant investment do teams fully understand the development, implementation, and ongoing maintenance costs. What started as a “simple dashboard” has become a complex system requiring dedicated engineering time that could otherwise be spent on your core product.
It’s worth noting an important distinction here: while many organizations can successfully build specific reports or isolated dashboards in-house, building a complete engineering intelligence platform is an entirely different challenge. Reports address specific questions at a point in time, but a platform needs to handle changing team structures, evolving codebases, historical data preservation, and much more, all while maintaining data quality and providing useful insights.
Engineering should always be viewed as an investment with clear ROI expectations. Building internal tools might seem like a tactical win for your platform team, but buying a solution lets you approach your engineering intelligence investment more strategically.
There’s another option that’s become popular recently: sitting down with an AI coding assistant and vibe coding your own engineering intelligence dashboard in a weekend (or so they say).
For certain situations, this works fine.
If you primarily need to report some aggregate numbers from a point in time up the chain to leadership, a vibe coded dashboard can get the job done. You want to show cycle time trends at the monthly board meeting? Pull some GitHub data, create a few charts, present them. Mission accomplished.
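Here’s that whole workflow in miniature, with made-up PR records and matplotlib for the chart. For a monthly trend slide, something like this genuinely is most of the job:

```python
from collections import defaultdict

import matplotlib.pyplot as plt

# PR-level rows, e.g. from a GitHub pull like the one sketched earlier (values invented).
prs = [
    {"merged_month": "2024-01", "cycle_hours": 30},
    {"merged_month": "2024-01", "cycle_hours": 52},
    {"merged_month": "2024-02", "cycle_hours": 41},
    {"merged_month": "2024-03", "cycle_hours": 65},
]

by_month = defaultdict(list)
for pr in prs:
    by_month[pr["merged_month"]].append(pr["cycle_hours"])

months = sorted(by_month)
averages = [sum(by_month[m]) / len(by_month[m]) for m in months]

plt.plot(months, averages, marker="o")
plt.ylabel("avg PR cycle time (hours)")
plt.title("Cycle time, monthly trend")
plt.savefig("cycle_time.png")  # drop it into the board deck, done
```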
This approach makes sense when your main goal is reporting rather than improvement, you have someone whose job includes compiling metrics, and leadership is satisfied with high-level trends without needing to drill into details.
The limitations show up fast when you move from just reporting to actually trying to make engineering more effective.
When someone asks “Why did cycle time spike in Q2?” you need to see individual PRs, specific bottlenecks, the outliers that skewed everything. Your standard vibe-coded dashboard typically shows aggregates without the data structure to explore what’s behind them. You end up manually digging through GitHub, which defeats the entire purpose.
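Here’s the difference in miniature, with invented numbers. If all you stored was the quarterly average, there’s nothing left to investigate; with PR-level rows, the spike explains itself:

```python
from statistics import mean, median

# If all you kept was {"2024-Q2": 218}, the "why" question is unanswerable.
# With PR-level rows (invented data), the outlier surfaces immediately:
q2_prs = [
    {"pr": 101, "cycle_hours": 18},
    {"pr": 102, "cycle_hours": 25},
    {"pr": 103, "cycle_hours": 612},  # one PR stuck in review for weeks
]

hours = [p["cycle_hours"] for p in q2_prs]
print(f"mean {mean(hours):.0f}h vs median {median(hours):.0f}h")  # 218h vs 25h

# The aggregate hid the outlier; the raw rows reveal it in one line.
print([p for p in q2_prs if p["cycle_hours"] > 10 * median(hours)])
```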
Data quality also becomes a problem as your organization evolves. That mapping between teams and repositories works great until someone creates a new repo, a team splits, or engineers move around. Suddenly your metrics are off and someone needs to manually update the configuration. Then another change happens. And another.
The solution still requires consistent maintenance. APIs change, new requirements emerge, your org structure shifts. And unlike a proper internal build with deliberate architecture and documentation, an AI-generated solution often becomes a black box that only its creator understands.
Most importantly: reporting isn’t the same as improving. Seeing that your cycle time is high doesn’t tell you where the bottlenecks are or what to do about them. Basic dashboards can highlight these problems, but can’t help you solve them.
If you’re investing the ongoing time to maintain an AI-generated solution and keep it useful for your organization, you’ve essentially committed to building internal tooling — just with a less robust foundation.
At that point, you’re not simply maintaining some code. You’re fielding questions about metrics that seem off, explaining methodology to stakeholders who don’t trust the numbers, updating configurations every time your org changes, adding features as requirements evolve, and troubleshooting data pipeline issues.
If you’re making that investment anyway, those hours might be better spent on a purpose-built solution that can actually help you improve, rather than just report.
Despite the challenges, there are a few legitimate reasons to build in-house rather than buy engineering effectiveness tooling, typically involving unusual requirements or constraints that off-the-shelf tools can’t meet.
If you can’t clearly articulate a reason like that, it might make sense to question whether building is truly the right approach for your business.
So let’s assume you do have a solid reason to build, and you’re off on your build journey. Good stuff. There are, however, a few challenges that are likely to pop up unexpectedly (and despite your best efforts to avoid them).
Internal engineering intelligence tools are often painfully average in terms of usefulness.
If it were simple, easy, and universally beneficial for organizations to build engineering intelligence tools internally, companies like Swarmia wouldn’t exist. But we do, and the engineering intelligence market is growing rapidly, which should indicate something about the complexity of the problem.
Low adoption rates tend to happen because the platform teams building engineering intelligence tools aren’t the end users, and they may lack the design expertise needed for a good user experience. There’s also the internal cost of onboarding, which, if fumbled, can mean an otherwise useful tool or dashboard gathers dust on the shelf.
This can end with metrics and dashboards that don’t lead to meaningful change, wasted engineering effort, and a return to gut-based decision making.
Data quality is the iceberg that sinks many DIY engineering intelligence efforts — tactical solutions miss underlying themes and connections, leading to disjointed metrics. For example, a ticket-completion dashboard disconnected from code review metrics might show impressive velocity while hiding critical bottlenecks in the review process.
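The same failure mode in miniature, with invented weekly numbers: each dashboard on its own tells a tidy story, and only joining them exposes the tradeoff:

```python
# Two dashboards that never meet (invented weekly numbers).
tickets_completed = {"w1": 24, "w2": 27, "w3": 31}  # velocity: up and to the right
review_wait_hours = {"w1": 6, "w2": 18, "w3": 40}   # median review pickup: degrading fast

# Side by side, the "impressive velocity" is clearly borrowed from review health.
for week in tickets_completed:
    print(f"{week}: {tickets_completed[week]} tickets, {review_wait_hours[week]}h review wait")
```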
Data mapping can be a challenge for internal tools too. Team structures change, Jira projects come and go, and repositories get created and archived. Without a dedicated system for maintaining these relationships, your metrics become unreliable at best, and at worst, so obviously wrong that people actively distrust and dismiss them.
Historical data tracking is another challenge. Questions like “who was in this team six months ago?” become nearly impossible to answer reliably without purpose-built systems, creating blind spots when you try to understand how team performance has evolved over time.
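Answering that kind of question reliably means putting validity windows on every membership record, which amounts to a small slowly changing dimension. A sketch of the idea, with invented people and dates:

```python
from datetime import date

# Each membership row carries a validity window; "to" of None means current.
memberships = [
    {"person": "alice", "team": "payments", "from": date(2023, 1, 9), "to": date(2024, 3, 1)},
    {"person": "alice", "team": "platform", "from": date(2024, 3, 1), "to": None},
]

def team_on(person: str, when: date) -> str | None:
    """Return the team this person belonged to on a given date."""
    for m in memberships:
        if m["person"] == person and m["from"] <= when and (m["to"] is None or when < m["to"]):
            return m["team"]
    return None

print(team_on("alice", date(2023, 6, 1)))  # payments
print(team_on("alice", date(2024, 6, 1)))  # platform
```

Few teams build this on day one, which is exactly why the six-months-ago question is so hard to answer later.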
When building and maintaining internal tooling, much of your time ends up being spent explaining and defending metrics rather than improving them. When someone questions why a number looks off, you need to be able to prove the data is correct. Without that ability, trust erodes quickly.
Supporting API changes from GitHub, HR systems, Jira, and other tools requires constant attention. Every time one of these services updates their API, someone needs to update your internal tools as well. This maintenance is rarely accounted for in the initial build assessment.
There’s also the risk of knowledge loss when key team members leave. I’ve seen multiple organizations where just a single engineer understood how their metrics systems worked. When that person left, the entire system became a black box that no one was game enough to touch.
Finally, there’s the opportunity cost of dedicating engineering resources to non-core business functions. Every hour spent maintaining internal metrics tooling is an hour not spent building your actual product.
The most important step now is to evaluate your specific situation. Does your company fit into one of the scenarios where building makes sense? Or would a purpose-built solution provide faster, more reliable results with less risk?
To guide your decision and start building your business case for leadership, gather your team to answer a few key questions about the total cost of building, who will own the system long-term, how you’ll keep the data trustworthy, and whether people will actually use it.
Honest answers to those questions will point you in the right direction, and also help you start your ROI calculations — but that’s a whole other article.
If you’ve made it this far, you likely still have some practical concerns about buying a tool. For the purposes of this article, I’ll speak to those from a Swarmia point of view.
This all might seem simple on the surface — just pull some data from GitHub and Jira, display it on a dashboard, and you’re on your way. Right?
As I hope we’ve covered here: not really.
Data quality, user adoption, platform maintenance, and careful metric selection add up to a challenge far bigger than most companies have the resources to take on.
When you buy a specialized solution instead of building one, you’re not only saving development time but also drawing on the collective learning of hundreds of other organizations facing similar challenges. You’re benefiting from ongoing research into which metrics actually drive improvement. And you’re freeing your team to focus on what makes your business unique.
I will say this: buying a tool like Swarmia won’t magically solve all your engineering effectiveness challenges.
What it does give you is the visibility, the confidence in your data, and the insights you need to make engineering intelligence an active improvement process, not just a metrics tracking and reporting exercise.
In the end, the build vs. buy question isn’t whether you could build engineering intelligence tooling — but whether your engineering resources are better spent delivering customer value rather than reinventing wheels that already exist and roll quite well.
And I think the answer, for the most part, is clear.