
Supercharging Jira Forge Development with Rovo Dev CLI: A Complete Guide

Discover how to leverage Atlassian's Rovo Dev CLI to transform your Jira Forge app development workflow. From spec-driven development to automated testing with Chrome DevTools, learn the advanced techniques that can 10x your productivity.


AOBRAIN Team

Engineering

In this guide, we walk through how to leverage Rovo Dev CLI for developing a Jira Forge app, from adopting a spec-driven development approach to integrating advanced tools and running the CLI in pipelines. Each section below addresses one of the key tasks:

1. Spec-Driven Development for a Jira Forge App Project

Write a Specification First: Start by writing a clear specification for your Forge app before coding. This spec (e.g. a Markdown file describing requirements, user stories, API contracts, etc.) will serve as the source of truth for both you and the AI agent. Tools like GitHub's Spec Kit or Amazon's Kiro can help structure this spec, but you can also write it manually in natural language focusing on the app's behavior and features. The spec might include the app's purpose, Forge module types, UI design (if using UI Kit or Custom UI), and acceptance criteria.

Add Spec to Rovo's Context: Include the spec in Rovo's context so it can guide code generation. For example, you can place key details from the spec into Rovo Dev's memory or instructions for the project. Rovo supports a project-specific memory file (.agent.local.md in your repo) where you can add notes, guidelines, or architecture overviews. Summarize important spec points in this file – for instance, define the app's high-level design or coding standards there – so the AI will always consider them during generation. You can create or edit the memory file by running the CLI command /memory init or /memory in the project directory.
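As a sketch, a minimal `.agent.local.md` might distill the spec like this (the app purpose, module key, and conventions below are purely illustrative — adapt them to your own spec):

```markdown
# Project Memory

## Purpose
Jira Forge app that adds a project page summarizing open issues by assignee.

## Architecture
- UI Kit project page module (`jira:projectPage`)
- Backend resolver calling the Jira REST API via `@forge/api`

## Conventions
- TypeScript, ESLint with the recommended config
- Keep changes small; one feature per commit
- Every new function gets a unit test
```

Because Rovo reads this file on every run, anything you put here acts as a standing instruction — keep it short and high-signal.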

Plan and Break Down Tasks: Next, have Rovo Dev CLI help turn the spec into an implementation plan. Open Rovo Dev in interactive mode (e.g. run acli rovodev run in your app's directory) and share the spec context. You might instruct: "Analyze the specifications in SPEC.md and list the development tasks needed." Rovo can then generate a breakdown of steps or a TODO list. It's best to divide the work into small, manageable tasks, because Rovo works most effectively on changes ~10–20 lines at a time. For example, if the spec describes multiple features, tackle them one by one (backend API, UI module, etc.) rather than all at once. Rovo's tips emphasize starting small and breaking bigger changes into smaller steps.

Implement Iteratively with Rovo: Use an iterative loop to implement each task:

  • Generate Code: Ask Rovo to write code for a specific task from the spec. For instance: "Create a Forge Jira module that adds a project page per the spec" or "Implement the backend resolver for feature X as described in the spec." Rovo Dev will draft the code (files, functions, etc.) accordingly. It aims to produce consistent, structured code to match your requirements.
  • Review and Refine: Treat Rovo as a teammate and review its output. After code generation, you can say "Review this code for compliance with the spec and improve where needed". If the code misses details or fails a spec acceptance criterion, give feedback. For example: "The spec says to include an admin setting – please add that." Rovo will refine the code in response. This iterative review cycle is recommended – Rovo Dev expects you to give feedback and corrections, much like a pair-programming partner.
  • Proceed to Next Task: Once one piece is satisfactory, move on to the next item in the plan. This spec-driven, step-by-step approach keeps the AI focused and aligned with the spec, and it prevents getting off-track.

Throughout development, continuously refer back to the spec. You can even copy-paste relevant spec excerpts into the chat when focusing on a particular feature, to anchor Rovo's responses on the requirements. The spec effectively acts as a compass for the AI – by maintaining it as the single source of truth, you ensure that code changes remain aligned with intended behavior. Also remember to save and commit frequently during this process. Use source control to your advantage: commit each small change that Rovo makes. This not only saves progress but also lets Rovo compare against the repository history if needed (it can use Git diff info to understand context or verify changes). Rovo Dev even suggests committing regularly to avoid losing work.
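The commit-each-change habit can be scripted so it never gets skipped. This small helper is a sketch (the function name and example message are illustrative, not part of Rovo Dev CLI):

```shell
# Commit each small, reviewed change Rovo makes as its own step.
# One task per commit keeps the history easy for both you and Rovo to diff.
commit_step() {
  git add -A
  git commit -m "$1"
}

# Example: commit_step "feat: add project page module per SPEC.md"
```

Run it after each review cycle so every Rovo-generated change lands as one reviewable commit.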

Finally, once all spec items are implemented, you can have Rovo generate documentation and tests to complete the spec-driven cycle. For example, ask it to "Write a README.md for the project" or "Create unit tests for all new functions". Rovo can generate JSDoc comments, README sections, and test cases based on the code and spec. By following a spec-first approach, you've effectively used Rovo Dev CLI to go from specification to working Forge app code, while ensuring the AI never strays far from the defined requirements.

2. Building a Knowledge Base from Forge Documentation (for Accurate Code Generation)

Having up-to-date reference knowledge is crucial for accurate code generation – especially for a platform like Atlassian Forge, which has specific APIs and best practices. To help Rovo Dev produce correct code (and avoid hallucinations about Forge APIs), you should create a knowledge base of Forge documentation that the AI can draw upon.

Gather Latest Forge Docs: Start by collecting the official Jira Forge documentation. You can obtain these from Atlassian's developer site (e.g. "Build a Hello World app," "Forge UI kit reference," etc.). Copy the relevant documentation pages (API references, tutorials, etc.) into Markdown (.md) files. Organize these docs in a folder (for example, docs/forge/). Ensure the content is in a readable format – Markdown is great since it's text-based. For instance, you might save pages like "Forge UI Kit Field components", "Forge CLI commands", and any Forge modules or Jira API docs that your app will use. The goal is to have a local cache of authoritative information that Rovo can reference.

Use Structured Content: If possible, structure the knowledge base for easy retrieval. Break large documents into logically separated files (e.g. separate files for Forge UI components, Forge manifest syntax, Jira REST API, etc.). This makes it easier for a retrieval system to find the right info. Each file can have a clear title and maybe an index at top. For example, Forge-UI-Components.md, Forge-Auth.md, Jira-REST-API.md, etc. Having a set of Markdown files as your knowledge base will serve as the corpus for the AI's retrieval augmented generation, meaning the AI can pull in precise snippets from these files when writing code or answering questions.
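Concretely, the local docs cache might be laid out like this (the file names are illustrative; use whichever pages your app actually relies on):

```shell
# Create a local cache of Forge documentation for retrieval.
# Each page copied from developer.atlassian.com becomes its own Markdown file.
mkdir -p docs/forge
touch docs/forge/Forge-UI-Components.md \
      docs/forge/Forge-Manifest.md \
      docs/forge/Jira-REST-API.md
```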

Keep the Knowledge Base Current: Forge is evolving, so use the latest documentation. The idea is to give Rovo the latest Forge APIs and guidelines so it uses proper functions and avoids deprecated ones. If you maintain this knowledge base over time (updating .md files when Atlassian updates their docs), Rovo's suggestions will remain accurate and up-to-date. In fact, Atlassian itself identified staying up-to-date as a major pain point and introduced ways to auto-access the newest Forge docs, which we'll leverage in the next step.

(With your spec and knowledge base ready, the next step is to connect this documentation to Rovo Dev CLI so it can actually use it during code generation.)

3. Connecting a RAG Source to Rovo CLI via MCP

To use the documentation knowledge base in practice, you'll integrate it via Retrieval-Augmented Generation (RAG). Rovo Dev CLI supports external data sources through the Model Context Protocol (MCP) – a protocol that lets the AI fetch context from third-party sources when responding. In our case, the "third-party" source will be the Forge documentation (either via Atlassian's official MCP or your custom files). Here's how to set that up:

Configure an MCP Server for Documentation: Rovo Dev CLI uses a config file ~/.rovodev/mcp.json to know about available MCP servers. You need to add an entry here for a documentation retrieval service. Atlassian provides an official Forge Knowledge MCP server that is highly convenient: it's essentially a hosted service that Rovo can query for Forge dev docs. To use it, add a new server definition in mcp.json. For example, include the following JSON snippet (under "mcpServers"):

```json
{
  "mcpServers": {
    "forge-knowledge": {
      "url": "https://mcp.atlassian.com/v1/forge/mcp",
      "transport": "http"
    }
  }
}
```

This is the Forge MCP server provided by Atlassian. Once added, restart or run Rovo Dev CLI and it will detect this new MCP. The first time you use it, Rovo will prompt you to approve using the Forge knowledge server (since it's an external source) – go ahead and approve it. After that, Rovo can seamlessly query the latest Forge docs whenever the prompt calls for it. The benefit of using Atlassian's Forge MCP is that it's always up-to-date with the newest Forge developer documentation, automatically using the latest APIs and avoiding deprecated ones. This dramatically improves the accuracy of code suggestions related to Forge.

(Alternative: Custom Local Knowledge MCP) – If you prefer to use your own set of markdown files (the ones you gathered in step 2) instead of Atlassian's service, you can run a local MCP server that indexes those files. For example, there are open-source MCP servers like MCP-Markdown-RAG that use a vector database to serve relevant markdown content. You would set up such a server (often a small Python or Node app) to read your .md docs and answer queries. Then add it to ~/.rovodev/mcp.json similarly (with "command" and "args" pointing to the server binary). One common setup is the Fetch MCP server, which can retrieve content from URLs or local files and return it as context. For instance, an MCP config could use mcp-server-fetch to fetch documentation pages by URL on the fly. However, for a curated set of local files, a dedicated markdown RAG server (using something like Chroma or Milvus for embeddings) might be ideal. This is an advanced route – if you're not comfortable setting up a vector database, the official Forge MCP is the easier choice.
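For example, registering the Fetch server locally could look like the following (the server name `local-forge-docs` is illustrative; `mcp-server-fetch` is the reference Fetch server, commonly launched via `uvx`):

```json
{
  "mcpServers": {
    "local-forge-docs": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```

A dedicated markdown RAG server would be registered the same way, just with its own `command` and `args`.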

Using the RAG Source in Prompts: Once the MCP server is configured and running, Rovo Dev will use it automatically when needed. There is no need to manually query it; the AI model decides when to call the MCP "tools" to fetch info. For example, if you ask "How do I call the Jira REST API in Forge?", the agent will recognize this might need documentation lookup, and it will invoke the Forge knowledge MCP to pull the relevant snippet (like the Forge Bridge API usage). You'll see in the CLI output that it uses a tool (MCP) and then incorporates the documentation content in its answer. This enriched context ensures the code it generates or suggestions it gives are grounded in real docs rather than guesses.

Approval and Security: Note that every third-party MCP, when accessed, may require your approval due to security controls. Rovo Dev CLI by default only auto-connects to Atlassian's own context (your code, Jira issues, etc.). When using external sources (Forge docs service, Chrome, etc.), it will ask you to confirm. This is by design, to ensure you trust the source and comply with any org policies. After you approve an MCP the first time, subsequent uses in the same session proceed without further prompts. Keep this in mind if you run Rovo in a non-interactive environment (we'll address that in the pipeline section) – you might need to pre-approve or run once in interactive mode to accept the MCP usage.

In summary, by connecting a RAG source via MCP, you empower Rovo Dev CLI to fetch just-in-time knowledge from your documentation. This means when you prompt it to generate code or explain something specific (say, "Use the Forge UI Kit to create a select dropdown"), it can pull the actual code examples and parameter details from the Forge docs and incorporate them into the answer. The result is much more accurate and contextually correct code generation for your Jira Forge app.

4. Integrating Chrome DevTools via MCP for Testing the Forge App

Once you have your Forge app code in development, a powerful capability is to let Rovo Dev CLI test and debug the running app using Chrome. This is made possible by the Chrome DevTools MCP server – an official MCP integration that gives AI agents control over a Chrome browser. By connecting this to Rovo, you can ask the AI to open your app in a browser, simulate user actions, inspect the DOM, check console logs, run performance profiles, and more, all automatically. This effectively gives Rovo "eyes" on your web application, solving the problem that coding agents traditionally can't see the app they build.

Set Up the Chrome DevTools MCP: In your ~/.rovodev/mcp.json, add a new entry for the Chrome DevTools server similar to below:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```

This configuration (using the chrome-devtools-mcp package via npx) will allow Rovo to spawn a Chrome DevTools MCP instance when needed. Save the config and run acli rovodev run again. The first invocation will prompt you to approve the Chrome DevTools connection (since it's an external tool); approve it to continue. Make sure you have Google Chrome installed on the machine, as the MCP server will launch a Chrome instance in the background.

Deploy and Run Your Forge App: In parallel, have your Forge app running so that Rovo can test it. For a Forge app, "running" means it's deployed to the Atlassian cloud and installed on your development site (which might be a Jira Cloud instance). Typically you'd use forge deploy and forge install to get the latest version up, and perhaps forge tunnel if you want to see logs. Once the app is available on your site, you can open the Jira page or project where the app is present. Now Rovo (through Chrome MCP) can do the same. It helps to know the URL or context where the app runs – for example, if it's a Jira issue panel, the URL of a sample issue, etc.

Use Rovo CLI to Test in Browser: With Chrome MCP active, you can issue natural-language commands to Rovo Dev CLI to verify and debug the app in the browser. Some example interactions:

  • "Open the Jira page with our Forge app and verify that the new Hello World module is visible." – Rovo will launch Chrome to the specified URL, using the DevTools protocol to load the page. It can then check the DOM for your app's content and respond with what it finds (or if something is missing). If the Jira site requires login, ensure a session is active or provide credentials – the Chrome MCP can use a fresh Chrome profile, so you might need to log in manually the first time it opens (it will open a real browser window you can see and control if running locally).
  • "Simulate a user clicking the Submit button in the app and report any console errors." – Rovo can use the DevTools MCP's tools to simulate DOM events (clicks, input typing) and read the console log or network calls. For instance, Chrome MCP provides a tool to fetch console logs or network traces; Rovo will call those and interpret the results. This helps catch issues like runtime errors or failed XHR requests in your Forge app.
  • "Why is nothing happening when I select an option from the dropdown?" – Rovo can inspect the page's state. It might check if an onChange handler is wired, or if there are errors in console when that action occurs. Essentially, it's performing a debug session: looking at DOM elements, JavaScript errors, network responses, etc., via DevTools, and then using its reasoning to explain the problem.

The Chrome DevTools MCP unlocks many such debugging tasks. According to the Chrome team, an AI agent can now verify that code changes actually work by opening the site and observing it in real time. For example, after Rovo generates a code fix, you can say: "Verify in the browser that your change works as expected." Rovo will spin up Chrome, load the page, and confirm if the issue is resolved. If there's still a problem, it can gather details to explain why. You can also directly ask it to diagnose issues: for instance, "A few images on our app's page aren't loading – what's happening?" Rovo (through Chrome) might discover 404 network errors or CORS issues and report that back.

Another powerful use-case is performance testing. You could prompt: "Profile the app's loading performance" and Rovo would use a tool like performance_start_trace from the DevTools MCP to record a performance trace. It can then analyze metrics (like LCP, TTI) and suggest optimizations. For example: "The page is loading slowly; find out why and suggest improvements." The AI might run a trace and then identify, say, a slow network request or inefficient DOM updates, and propose a fix.

Troubleshooting & Headless Use: When using Chrome MCP, keep in mind that it's effectively driving a browser. If running on a server or headless CI environment, ensure Chrome can run (you might need xvfb for headless display). The Chrome MCP can operate headlessly by default, so it doesn't necessarily open a visible window in such environments. Also, each time it runs, a new temporary Chrome profile is used (for isolation), so you may need to handle login if your app is behind auth. One approach is to allow the agent to navigate the login (noting that doing so requires it to handle credentials – not usually configured by default). For development convenience, you might disable auth or use test mode if possible when letting the AI agent browse.

By integrating the Chrome DevTools MCP, you effectively extend Rovo Dev CLI's capabilities from coding into testing. It can continually test the Forge app as it's being built – opening pages, clicking buttons, checking logs – and feed that feedback into the development cycle. This means faster debugging and a higher confidence that the code actually works in the real Jira environment, not just in theory.

5. Running Multiple Rovo Dev CLI Instances in Parallel (Without Blocking)

If you have a scenario that would benefit from running multiple Rovo Dev CLI sessions concurrently (for example, two different tasks or two projects at the same time), you need to be aware of Rovo Dev's usage model. By default, a single user/session may be limited in how many simultaneous operations it can perform, both due to resource limits and credit/quota constraints. Typically, one Rovo Dev CLI session runs on your Atlassian site's allocated AI budget. In fact, each session can run for up to ~4–5 hours, after which it ends and you must wait ~8 hours for a new session to start on that same site. This essentially prevents continuously running back-to-back without pause, and it also implies you cannot normally run two sessions at the exact same time on one site (because one would consume the available runtime or credits and block the other).

However, there are a few strategies to run multiple instances without them blocking each other:

Use Separate Atlassian Sites or Accounts: The easiest isolation is at the account or site level. If you have access to more than one Atlassian site (each with Rovo Dev enabled) or multiple user accounts with Rovo privileges, you can run one CLI instance per site/account. Each will draw from a separate credit allocation. In Rovo Dev, you can actually switch which site's credits you're using by the /usage site command if your account has multiple sites – but for truly simultaneous usage, launching distinct instances logged into different sites is more straightforward. For example, you might configure one instance to use "Site A" and another to use "Site B". That way, each instance has its own execution quota and won't interfere or cause the other to hit a limit.

Use Separate Configurations: Rovo Dev CLI allows running with an alternate config directory, which is useful for isolating sessions. You can do this via the --config-file flag. For instance, open two terminals and run:

```shell
acli rovodev run --config-file ~/.rovodev_config2
```

This will create/use a separate config in the specified directory for the second instance. The second instance will treat itself as a fresh installation – you'll need to auth login again (here you could log in with a different account or the same account on a different site). By using two config locations, you effectively run two independent CLI agents. They won't share memory files, settings, or sessions, and importantly they won't lock each other in terms of local resources. Do ensure that each config is logged in to a distinct site (or at least a distinct user) if the goal is to avoid the usage quota collision mentioned earlier.
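To keep several parallel sessions tidy, you can derive one config location per task. This helper is a sketch (the function name and directory layout are illustrative, and the `acli` invocations are shown only as comments):

```shell
# Derive an isolated Rovo Dev config directory per task so that
# parallel sessions never share memory files, settings, or sessions.
rovodev_cfg_dir() {
  printf '%s/.rovodev_%s\n' "$HOME" "$1"
}

# Terminal 1: acli rovodev run --config-file "$(rovodev_cfg_dir featureA)"
# Terminal 2: acli rovodev run --config-file "$(rovodev_cfg_dir featureB)"
```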

Avoid Shared Resource Conflicts: Running multiple instances on the same repository is generally fine – each process operates in the directory where it was launched. Just be careful if both might try to modify the same files simultaneously. It's usually better for them to work on separate tasks or separate areas of the code to avoid git merge conflicts. Also, if both instances use tools with side effects (like running npm install or starting a dev server), monitor them so they don't step on each other. In practice, though, simultaneous writes to the same files are rare, since you'll be prompting each instance with a distinct task.

Subagents vs. Separate Instances: Rovo Dev CLI also has a feature called subagents (specialized AI agents spawned within a session). However, subagents do not run truly in parallel; they are invoked by the main agent as needed (and their context is limited). They are more for delegating subtasks (like "write a regex" to a regex-specialized subagent) within one session, rather than for running two full sessions concurrently. So for independent parallel work, separate CLI processes as described above are the way to go.

Manage Execution Time: Even with multiple instances, remember the underlying limits: each session will eventually hit its max duration. If you try to brute-force two long-running sessions with one account on one site, you might simply deplete the allowance faster or get one of them refused. Using distinct accounts/sites is the safe way to truly parallelize. Also consider staggering their start times if they share resources.

In summary, to run multiple Rovo Dev CLI sessions without blocking: launch each with an isolated configuration and (ideally) separate Atlassian credentials. This ensures that one won't pause or consume the quota needed by the other. As a quick reference, Atlassian documents that you can create a new config on the fly with acli rovodev run --config-file [directory]. After setting up multiple configs (and logging each into its respective site), you can work with two terminals/instances simultaneously. Each will maintain its own memory and sessions. This way, you could, for example, have one Rovo agent building feature A while another Rovo agent experiments on feature B in a different branch – doubling your productivity.

(Be mindful of the credit usage on each – keep an eye on /status or /usage in each session to not exceed limits. If one instance hits a limit and pauses, it shouldn't directly stop the other, but it's a sign that that site/account is tapped out.)

6. Running Rovo Dev CLI as a Daemon in CI Pipelines (Automating via Prompt Files and PR Creation)

Automating Rovo Dev CLI in a continuous integration/continuous deployment (CI/CD) pipeline can unlock hands-free development – for example, automatically generating code or fixes and creating pull requests. The idea is to run Rovo as a headless daemon that takes input (a prompt from a file) and produces output (committed code, a PR, or a report) without human intervention. Here's how you can set that up:

Set Up Authentication for CI: First, ensure the pipeline environment can authenticate Rovo Dev CLI. Typically, Rovo uses an OAuth login via acli rovodev auth login (which is interactive). In CI, you won't have a browser, so you'll need to use a token-based auth if possible. Atlassian might allow using an API token or app password – check Rovo CLI docs for headless auth options. One approach is to perform an auth login on a dev machine and copy the credentials (stored in ~/.rovodev/config.yml or similar) into your pipeline's secure storage. Another is to use Atlassian Forge app credentials if running under the same site. Regardless, the goal is that when the pipeline starts Rovo CLI, it is already logged in (or can log in via provided credentials) so it doesn't hang waiting for login.

Non-Interactive Execution Mode: By default, acli rovodev run opens an interactive REPL session. In CI, we want it to execute a given prompt and then exit. There are a couple of ways to achieve this:

  • Using Server Mode API: Rovo Dev CLI has a server mode where it listens on a port for commands. You can start the CLI in the background in your pipeline (e.g. acli rovodev serve 8123 --yolo) and then script interactions via HTTP. For example, you can POST a JSON payload with your prompt to http://localhost:8123/v3/set_chat_message and then read the response from the /v3/stream_chat endpoint. This is effectively using Rovo as a service. The benefit is you can programmatically send the prompt (which could be read from a file or environment variable) and capture the output. You'll likely want --yolo mode on (or pause_on_call_tools=false) so it doesn't wait for tool permission prompts in the middle of the run. For instance, to have Rovo propose a change, you might send a prompt like "message": "Implement function X according to the specs (provided above)" via the API, then monitor the streamed response. Once done, you could have Rovo's answer and any code it produced.
  • Using Standard Input (Stdin) Redirection: A simpler, if somewhat hacky, method is to pipe a prompt into the CLI. For example, store your prompt in a text file (let's call it prompt.txt). This file could contain a directive like: Implement the Foo feature as described in SPEC.md, and commit the changes. Then in the pipeline, run:
    acli rovodev run --yolo < prompt.txt
    This will feed the content of prompt.txt to Rovo as if it was typed by the user. With --yolo enabled, Rovo will not pause for any confirmations (it will auto-approve tool usage, etc.). The CLI will execute that single instruction. The tricky part is getting it to terminate after completion: since it's an interactive tool, after executing the input from the file (which ends with an EOF), it should ideally exit. In practice, Rovo CLI might stay open waiting for more input. If that happens, you can try to append an /exit command after your prompt in the file to force it to quit once done. For example, prompt.txt could be:
    /instructions run-my-task
    /exit
    (Here "run-my-task" would be a saved instruction; see below.)
  • Using Saved Instructions: If you have a complex multi-step prompt or specific sequence you want to run, consider defining it as an instruction macro. Edit the file ~/.rovodev/instructions.yml to add a custom instruction. For example, you could add:
    - name: run-my-task
      description: Run automated task from pipeline
      content_file: pipeline_task.md
    and then put your actual prompt (which might be very long, including system messages or multi-turn conversation) in pipeline_task.md. In the pipeline, you'd run acli rovodev run --yolo and pipe in /instructions run-my-task as the input. This will make Rovo execute the predefined instruction content. The advantage is you can tweak the prompt easily without changing the pipeline code, just by editing the instruction file in source control.
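The stdin approach can be wrapped in a tiny helper that appends the `/exit` command for you (the helper name is illustrative; whether your CLI version needs the explicit `/exit` varies, as discussed above):

```shell
# Build a one-shot stdin payload: the prompt file followed by /exit,
# so the interactive CLI quits once the task is done.
build_prompt() {
  cat "$1"
  printf '\n/exit\n'
}

# Usage (illustrative): build_prompt prompt.txt | acli rovodev run --yolo
```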

Automate Pull Request Creation: Your prompt should instruct Rovo on what outcome is expected. If the goal is to have a PR at the end, explicitly ask Rovo to create one. Rovo Dev CLI is integrated with Bitbucket (and possibly GitHub) to the extent that it can commit and push changes on your behalf. For example, if your repository is a Bitbucket repo and you have configured credentials (SSH keys or app passwords) such that git push works non-interactively, Rovo can directly push a new branch. You might prompt: "Implement Feature X and create a pull request with the changes." Rovo will then go through the motions: generate the code changes, stage and commit them (it usually crafts a commit message from context or your instruction), then push to a new branch, and finally use Bitbucket's API to open a pull request from that branch to the main branch. In fact, Rovo Dev can submit draft pull requests as part of its workflow to, for example, run CI checks. The documentation notes that you can have it open a draft PR to see CI/CD pipeline results – in automation, you might skip the "draft" status and just open a normal PR if that suits your process.

To ensure this works:

  • Set up Git credentials in the pipeline environment (e.g., add an SSH deploy key to the repo and load it in the pipeline, or use a bot account with auth).
  • Make sure Rovo knows about the repository remote. If you ran Rovo locally, it likely detected the git remote. In CI, if the repo is checked out, Rovo should detect it too. If not, you may need to configure git in the pipeline (like git config user.name and user.email for the commit).
  • You might need to configure Bitbucket integration in Rovo (possibly the Atlassian MCP already covers Bitbucket, since it's Atlassian's own context). There is likely an Atlassian internal MCP or tool for Bitbucket that Rovo uses by default (the status output from server mode shows bitbucket: running as an MCP service, indicating Rovo can already interact with Bitbucket). Ensure your pipeline login (or the site admin) has granted Rovo Dev permission to access Bitbucket if required.

Handling Success/Failure in Pipeline: Design the pipeline step to detect whether Rovo achieved the goal. For example:

  • After Rovo runs, check if a new commit was added to the repository (you could compare git commit hashes before and after). If yes, consider that a success (the changes were made).
  • If using server mode, you can parse the API response to see if Rovo completed the task or encountered errors.
  • If Rovo times out or fails (no changes, or it outputs an error), you should fail the pipeline. Capturing the CLI's exit code is one way – Rovo might return a non-zero exit code on error. You can also scan its output logs for certain failure messages.
  • Put a reasonable timeout on the Rovo execution in CI. Since Rovo can technically run for hours, you likely don't want your pipeline hanging that long. If the prompt task is complex, maybe set a timeout (most CI systems allow job timeout). You can also break the task into smaller ones if needed.

Example Pipeline Workflow: Suppose you want to use Rovo to automatically fix lint errors and open a PR. You could have a pipeline step like:

```yaml
- step:
    name: "Auto-fix lint and PR"
    script:
      # Runs the saved instruction 'autofix-lint', which, say,
      # fixes ESLint issues and commits the code.
      - echo "/instructions autofix-lint" | acli rovodev run --yolo
      - ./verify_pr_created.sh
```

Here verify_pr_created.sh could be a script that checks if a PR exists (perhaps using Bitbucket API or checking git log for new commits) and exits 0 if yes or 1 if no, thereby passing or failing the step accordingly.
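A minimal `verify_pr_created.sh` along those lines might just check that Rovo added a commit on top of a base revision recorded before it ran. This is a sketch under stated assumptions (a real version would also query the Bitbucket API for the PR itself; `BASE_SHA` is assumed to be captured earlier in the pipeline):

```shell
#!/bin/sh
# Sketch of verify_pr_created.sh: succeed (exit 0) only if Rovo produced
# at least one new commit on top of the recorded base revision.
rovo_made_changes() {
  base_sha="$1"
  head_sha="$(git rev-parse HEAD)"
  [ "$base_sha" != "$head_sha" ]
}

# Usage (illustrative): rovo_made_changes "$BASE_SHA" || exit 1
```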

Finally, note that Atlassian is actively integrating Rovo Dev into CI – e.g., Rovo can parse build failures and suggest fixes, or summarize changes in pipelines. In Bitbucket Pipelines beta, Rovo can automatically analyze failed steps and even generate summaries of deployments. But those are built-in behaviors. Our approach here is a custom one where we drive Rovo via CLI in our pipeline. Until Atlassian provides a more direct "Rovo Dev Pipeline Bot", this method gives you a lot of flexibility. You essentially script the AI to do whatever you want in the CI context.

By running Rovo Dev CLI as a daemon or automated agent in your pipeline, you achieve a form of AI-driven continuous development. For example, you could schedule a nightly pipeline that runs Rovo with a prompt to update dependencies or fix known bugs, and it will open a PR with those changes ready for review. Just remember to use --yolo (or otherwise handle tool approvals) so it doesn't stall waiting for input.
