How We Built a Production-Ready Forge App in 2 Weeks Using Rovo Dev
(and lived to tell the tale)

You hear the noise everywhere. AI is writing code. AI is replacing developers. AI is the new junior engineer.
It’s easy to get cynical about it. Most of us have tried to get an LLM to write a complex function, only to spend the next hour debugging the "solution" it confidently handed over. You end up pushing the ball uphill, wondering if it would have been faster to just write the code yourself.
But recently, we decided to stop wondering and actually put it to the test.
We built Sync for Confluence, a full-featured app that syncs Confluence pages to Zendesk Knowledge Base articles. We didn't just use AI to write a few helper functions. We used Rovo Dev (Atlassian’s AI coding partner) to build the thing from the ground up.
The result? We went from a blank page to a working Proof of Concept (POC) in one day.
Then, we did something even crazier. We turned that rough POC into an enterprise-grade product with 15,000 lines of code and 100% test coverage on critical modules.
And we did it all in two weeks.
Here is the story of how we compressed six months of development into a fortnight, where we stumbled, and why we’d do it again.
The One-Day POC
We started with a goal that sounds simple on paper: Make Confluence talk to Zendesk.
Usually, when starting a project like this, you spend days on technical specifications. You argue about database schemas. You draw boxes and arrows on a whiteboard.
We didn't do that. We didn't have time.
We started with a high-level Product Requirement Document (PRD). It was intentionally vague. It outlined the functionality we wanted—"pick a space, sync it to Zendesk"—but it lacked the "how." We left the implementation details out on purpose because we wanted to see if Rovo could figure them out. We wanted to learn as we went.
We fed the requirements to Rovo Dev.
Within a single day, we had a working app. It wasn't pretty, and it certainly wasn't ready for customers, but it worked. You could type something in Confluence, hit a button, and see it appear in Zendesk.
That creates a specific kind of momentum. When you can see the end result on Day 1, the gravity shifts. You aren't pushing uphill anymore; you're guiding the ball downhill. You know it’s possible, now you just have to make it good.
The Sprint to Production (13 Days Left)
A POC is not a product. A POC is a prototype held together by hope and hardcoding. To make this "production ready," we had to get serious. We had less than two weeks to turn a toy into a tool.
The TypeScript Correction
This is where we hit our first major lesson.
We started the POC in JavaScript. It seemed faster for a one-day build. Rovo writes JS quickly, and it felt like the path of least resistance.
Do not do this.
Even in a two-week sprint, we regretted it. If we could go back to Day 1, we would have forced Rovo to use TypeScript immediately. We should have included a comprehensive Agent.md file (context for the AI) that explicitly stated: Strict typing only. No implicit any.
Mid-week, we realized we needed the safety of types to move at this speed without breaking things. We had to migrate from JS to TS. Thanks to modern tooling, the migration was straightforward, but it was hours we couldn't afford to lose.
The lesson: Start with TypeScript. Give the AI a strict set of rules before it writes a single line of code. It saves you from debugging type errors when you're moving at 100mph.
"Trust but Verify" Testing
One of the most tedious parts of software development is writing tests. In a two-week crunch, tests are usually the first thing to get cut.
But we couldn't afford to cut them. We needed to know the app wouldn't break when we refactored.
Rovo completely changed this dynamic. We asked Rovo to generate tests for our utility modules. It spit them out incredibly fast. We’re talking about a test suite that would have taken a human two days to write, generated in minutes.
But here is the catch: You have to read them.
Rovo sometimes created "fake tests." It would write a test that looked green but didn't actually test the logic, or it would hallucinate a scenario that wasn't possible in our code.
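Here is an illustration of what we mean. The convertToZendeskBody utility below is a hypothetical name and the syntax is Jest-style; the point is the difference between a test that goes green without ever calling the real code and one that actually exercises it.

```typescript
import { convertToZendeskBody } from '../src/utils/convert'; // hypothetical utility

// A "fake" test: it builds the expected value inline and never calls the
// converter, so it stays green even if the real logic is broken.
test('converts Confluence markup to Zendesk HTML (fake)', () => {
  const fakeResult = '<p>Hello</p>';
  expect(fakeResult).toBe('<p>Hello</p>');
});

// A real test: it runs the actual code path and asserts on its output.
test('converts Confluence markup to Zendesk HTML (real)', () => {
  const result = convertToZendeskBody('<p>Hello</p>');
  expect(result).toContain('<p>Hello</p>');
});
```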
We had to review the tests with a fine-tooth comb, and we had to get very specific in our prompts to make sure they exercised our actual code paths. But even with the review time included, it was dramatically faster than writing them from scratch.
We ended up with over 145 automated tests and 100% coverage on our utility modules. That is a level of rigor that is usually impossible in a two-week timeline. With Rovo, we got it for free.
The Iteration Velocity
Once the core sync was working and the types were locked in, we started layering on complexity at a pace that felt slightly uncomfortable. This wasn't a slow rollout; it was a blitz.
We weren't just moving text anymore. In a matter of days, we added:
Image Uploads: Getting binary data out of Confluence and into Zendesk isn't trivial.
Scale: We needed to sync 1,000+ pages without timing out or hitting API limits (there's a rough batching sketch after this list).
Performance: Loading had to feel instant.
Polish: Adding animations, UI feedback, and error states.
Subtree Syncing: Allowing users to sync just a branch of the page tree.
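On the scale point, the shape of the fix is not glamorous: process pages in small batches and pause between batches so you stay under rate limits and function timeouts. A rough sketch, where the batch size, delay, and syncPage signature are illustrative rather than our exact implementation:

```typescript
// Hypothetical helper that pushes a single Confluence page to Zendesk.
type SyncPage = (pageId: string) => Promise<void>;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Sync pages in small batches with a pause between batches so a
// 1,000+ page space doesn't blow through API rate limits.
export async function syncInBatches(
  pageIds: string[],
  syncPage: SyncPage,
  batchSize = 10,
  delayMs = 1000,
): Promise<void> {
  for (let i = 0; i < pageIds.length; i += batchSize) {
    const batch = pageIds.slice(i, i + batchSize);
    await Promise.all(batch.map((id) => syncPage(id)));
    if (i + batchSize < pageIds.length) {
      await sleep(delayMs);
    }
  }
}
```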
The Storage Headache
One specific challenge was storage. We started using standard Forge storage. Later, we realized we needed secure storage for the Zendesk API credentials.
Migrating from legacy storage to secure storage (@forge/kvs) is usually a delicate operation. We worked with Rovo to quickly write a centralized authentication utility: logic that checked secure storage first, fell back to legacy storage if needed, migrated the data, and then wiped the insecure copy.
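In outline, that logic looks something like the sketch below. The key name is illustrative, and it assumes the secret methods on @forge/kvs and the legacy storage API from @forge/api behave as their docs describe.

```typescript
import { kvs } from '@forge/kvs';       // secure key-value store
import { storage } from '@forge/api';   // legacy storage, where the old copy lived

// Illustrative key name, not our real one.
const CREDS_KEY = 'zendesk-credentials';

interface ZendeskCredentials {
  subdomain: string;
  email: string;
  apiToken: string;
}

// Read credentials from secure storage, falling back to the legacy copy.
// If a legacy copy exists, migrate it to secure storage and wipe the original.
export async function getZendeskCredentials(): Promise<ZendeskCredentials | undefined> {
  const secure = (await kvs.getSecret(CREDS_KEY)) as ZendeskCredentials | undefined;
  if (secure) {
    return secure;
  }

  const legacy = (await storage.get(CREDS_KEY)) as ZendeskCredentials | undefined;
  if (legacy) {
    await kvs.setSecret(CREDS_KEY, legacy); // migrate to secure storage
    await storage.delete(CREDS_KEY);        // wipe the insecure copy
    return legacy;
  }

  return undefined;
}
```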
We implemented, tested, and shipped this migration logic in an afternoon.
The Licensing Trap
Another hurdle was licensing. We wanted a tiered model (Free, Standard, Advanced).
The problem? In the development environment, the Atlassian license context is always undefined. You can't test your licensing logic if the platform keeps telling you "no license found."
We ended up implementing a LICENSE_OVERRIDE environment variable. It allowed us to force the app into the "Advanced" tier locally while respecting the real license in production.
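The gist, as a sketch: LICENSE_OVERRIDE is a Forge environment variable (set with forge variables set), and the tier mapping shown here is illustrative rather than our exact logic.

```typescript
type Tier = 'free' | 'standard' | 'advanced';

// The only part of the Forge license context we care about here.
// In the development environment this context is typically undefined.
interface LicenseContext {
  isActive?: boolean;
}

// Resolve the effective tier. LICENSE_OVERRIDE lets us force a paid tier
// in development, where no real license context exists.
export function resolveTier(license: LicenseContext | undefined): Tier {
  const override = process.env.LICENSE_OVERRIDE as Tier | undefined;
  if (override) {
    return override;
  }
  if (!license?.isActive) {
    return 'free';
  }
  // Illustrative: in production, map the real license data onto your tier model here.
  return 'standard';
}
```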
It’s a small thing, but without it, we would have been stuck.
Why Forge + Rovo is the Sweet Spot
Looking back at the architecture, the best decision we made was sticking with Atlassian Forge.
If we had tried to build this on AWS or Heroku, we would have spent the first week just setting up infrastructure. We would have been configuring databases, managing API keys, and setting up Node servers.
With Forge, we didn't do any of that.
No Servers: It runs on Atlassian’s infrastructure.
No Database Config: We used the Forge Storage API.
Security: Authentication is handled by the platform.
Rovo understands Forge intimately. It knows the APIs. It knows how the UI Kit works. It knows how to structure the manifest.yml.
When you combine a platform that handles the infrastructure with an AI that handles the boilerplate, you get to focus purely on the value. You spend your time figuring out how to make the sync logic smarter, not figuring out why your EC2 instance is unreachable.
By The Numbers
We didn't just hack this together. We built a machine. And we did it in a timeframe that shouldn't be possible for this scope.
Timeline: 2 weeks total (Start to Production Readiness).
Codebase: ~15,000 lines of TypeScript and React.
Reliability: 145+ automated tests with 100% utility coverage.
Scale: Successfully syncs 1,000+ page spaces.
Features: 12 major feature areas including enterprise licensing and encryption.
Final Advice
We learned a lot in two weeks. But we also looked at how other teams are succeeding with AI coding agents, and the patterns are strikingly similar. If you’re about to start your own sprint with Rovo Dev, here is the playbook.
1. The Agent.md is Your New Spec Sheet
The biggest mistake people make is treating the chat window like a search bar. It’s not. It’s a conversation with an engineer who has no memory of your business logic.
We learned (the hard way) that you need a "context file"—we call ours Agent.md—sitting in the root of your repository. This file shouldn't contain code; it should contain rules.
"Always use TypeScript."
"Use
constoverlet.""All Zod schemas go in
src/types.""Never use
any."Only write tests that actually test production code. (I know, crazy we have to define this)
Rovo reads this before it writes. It’s the difference between getting spaghetti code and getting architecture.
2. Prompt Like a Tech Lead, Not a PM
When we gave Rovo vague product requirements ("Make it sync images"), we got messy code. When we gave it engineering constraints ("Create an async function that fetches the image buffer, converts it to base64, and uploads it to Zendesk using their attachment API"), we got production-ready code.
The Golden Rule: If you can’t describe the implementation path, the AI will guess. And it will usually guess the easiest, least scalable way.
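For illustration, here's the shape of the function that second prompt asks for. Both helpers are hypothetical stand-ins (the real Confluence download and Zendesk attachment calls depend on your auth setup), but the structure is the point:

```typescript
// Hypothetical stand-ins for the two external calls.
declare function downloadConfluenceAttachment(attachmentId: string): Promise<ArrayBuffer>;
declare function uploadZendeskAttachment(
  articleId: string,
  fileName: string,
  base64Data: string,
): Promise<string>; // returns the URL Zendesk assigns to the attachment

// Fetch the image buffer, convert it to base64, upload it to Zendesk.
export async function syncImage(
  attachmentId: string,
  articleId: string,
  fileName: string,
): Promise<string> {
  const buffer = await downloadConfluenceAttachment(attachmentId);
  if (buffer.byteLength === 0) {
    throw new Error(`Attachment ${attachmentId} is empty or could not be downloaded`);
  }
  const base64 = Buffer.from(buffer).toString('base64');
  return uploadZendeskAttachment(articleId, fileName, base64);
}
```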
3. Bridge the "Knowledge Cutoff" Gap
Forge evolves fast. Rovo is trained on a massive dataset, but it might not know about an API change released two weeks ago.
We found that for newer features, we had to "inject" documentation. We literally copied the raw text from the newest Atlassian developer docs and pasted it into the chat, saying: "Read this documentation on the new Storage API. Now, write a function using this specific pattern."
Don't assume the AI knows the latest docs. Feed it the manual.
4. Treat AI Like an Eager Junior Developer
This is a sentiment shared by almost everyone effectively using AI tools. Rovo is incredibly fast and enthusiastic, but it lacks wisdom.
It will duplicate code rather than abstracting it.
It will skip error handling if you don't ask for it.
It will write tests that pass but test nothing.
You cannot fall asleep at the wheel. You must review the code. You are not replaced; you are promoted to Senior Architect. Your job is to spot the logic gaps that the AI glossed over.
5. Start with Strict Types (We Can't Stress This Enough)
If we could go back to Day 1, we would have forced Rovo to use TypeScript immediately.
In a two-week sprint, you don't have time to chase down "undefined is not a function" errors. We wasted precious hours migrating from JS to TS mid-sprint. Define your types upfront. When the AI knows the shape of your data, it hallucinates less and delivers more working code.
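For example, a handful of shared types defined on Day 1 gives the AI the shape of your data before it writes any logic. The names below are illustrative:

```typescript
export type SyncStatus = 'pending' | 'syncing' | 'synced' | 'failed';

// What the admin configures for a space-to-Zendesk sync.
export interface SyncSettings {
  confluenceSpaceKey: string;
  zendeskSubdomain: string;
  zendeskSectionId: string;
  syncImages: boolean;
}

// What we record for each page after a sync attempt.
export interface PageSyncResult {
  pageId: string;
  articleId?: string; // set once the Zendesk article exists
  status: SyncStatus;
  error?: string;     // populated when status is 'failed'
}
```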
6. Small Batches Win
Don't paste a 500-line file and say "Refactor this." The context window gets muddy, and the AI loses the plot.
Break it down.
"Write the interface for the settings config."
"Now, write the validation function for that interface."
"Now, write the save handler."
Small, discrete tasks keep the AI focused and the quality high.
We built Sync for Confluence in two weeks. It wasn't magic (my engineering background came in handy), but it felt a lot less like grinding and a lot more like guiding.
And best of all, I had a lot of fun building it.


