Submitter metrics live inside the Team Collaboration Network tab of the Performance Delivery dashboard. They focus on what PR submitters do to move a pull request toward merge, so you can see where author-side actions are helping (or holding up) the review cycle.
The four sub-metrics
Iterated PRs — the percentage of merged pull requests that received at least one additional commit after the PR was opened. A high number means PRs are routinely revised after first submission, which is normal in code review; a very high number can mean PRs are being opened before they're ready.
PR Iteration Time — the average time between the first non-submitter comment on a PR and the submitter's final commit. This isolates the "respond and revise" loop from total PR age.
Responsiveness — the average number of hours a submitter takes to respond after a reviewer action, rounded to the nearest tenth of an hour. Captures how quickly authors come back when feedback lands.
Time to Merge — the average time in hours between PR creation and merge into the target branch, rounded to the nearest tenth. The end-to-end submitter outcome; see the sketch after this list for one way these sub-metrics can be computed.
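To make the definitions concrete, the sketch below shows one way the four sub-metrics could be computed from raw pull-request data. This is a minimal, illustrative example: the PullRequest record, its field names, and the positional pairing of reviewer actions with submitter responses are assumptions for this sketch, not the dashboard's actual schema or calculation.

```python
# Illustrative sketch only: the PullRequest shape and field names are
# assumptions for this example, not the dashboard's real data model.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class PullRequest:
    created_at: datetime
    merged_at: Optional[datetime]
    commit_times: List[datetime] = field(default_factory=list)             # submitter commits
    reviewer_action_times: List[datetime] = field(default_factory=list)    # non-submitter comments/reviews
    submitter_response_times: List[datetime] = field(default_factory=list) # paired with reviewer actions

def _hours(delta) -> float:
    return delta.total_seconds() / 3600

def _avg(values: List[float]) -> float:
    # Round to the nearest tenth of an hour, matching the metric definitions.
    return round(sum(values) / len(values), 1) if values else 0.0

def submitter_metrics(prs: List[PullRequest]) -> dict:
    merged = [pr for pr in prs if pr.merged_at is not None]

    # Iterated PRs: merged PRs with at least one commit pushed after the PR was opened.
    iterated = [pr for pr in merged
                if any(t > pr.created_at for t in pr.commit_times)]

    # PR Iteration Time: first non-submitter comment -> submitter's final commit.
    iteration_spans = [
        _hours(max(pr.commit_times) - min(pr.reviewer_action_times))
        for pr in merged
        if pr.reviewer_action_times and pr.commit_times
        and max(pr.commit_times) > min(pr.reviewer_action_times)
    ]

    # Responsiveness: hours from each reviewer action to the submitter's next response.
    response_gaps = [
        _hours(response - action)
        for pr in merged
        for action, response in zip(pr.reviewer_action_times, pr.submitter_response_times)
    ]

    # Time to Merge: PR creation -> merge into the target branch.
    merge_spans = [_hours(pr.merged_at - pr.created_at) for pr in merged]

    return {
        "iterated_prs_pct": round(100 * len(iterated) / len(merged), 1) if merged else 0.0,
        "pr_iteration_time_h": _avg(iteration_spans),
        "responsiveness_h": _avg(response_gaps),
        "time_to_merge_h": _avg(merge_spans),
    }
```

The averages are rounded to one decimal place to match the tenth-of-an-hour precision in the definitions above; how multiple reviewer actions map to a single submitter response is a product-specific detail this sketch glosses over.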
How to interpret them
A healthy team has moderate Iterated PRs, short PR Iteration Time, and responsive authors. The combination of all three is what produces a low Time to Merge.
If Time to Merge is high but Responsiveness is good, the slowdown is likely on the reviewer side. Pair this view with Reviewer metrics.
If Iterated PRs is very high and PR Iteration Time is long, PRs are likely too big — large diffs invite more rounds of revision.
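The two checks above can be folded into a small diagnostic helper. The thresholds below are arbitrary, illustrative values rather than anything the dashboard itself applies, and the function assumes the hypothetical submitter_metrics() sketch from the previous section.

```python
# Illustrative heuristics only: the thresholds are arbitrary examples,
# not values used by the Performance Delivery dashboard.
def diagnose(metrics: dict) -> list:
    findings = []

    # High Time to Merge despite responsive authors points at the reviewer side.
    if metrics["time_to_merge_h"] > 48 and metrics["responsiveness_h"] < 4:
        findings.append("Merge is slow but authors respond quickly: "
                        "pair with Reviewer metrics to find the bottleneck.")

    # A very high Iterated PRs share plus long iteration time suggests oversized PRs.
    if metrics["iterated_prs_pct"] > 90 and metrics["pr_iteration_time_h"] > 24:
        findings.append("Most PRs go through long revision cycles: "
                        "diffs are probably too large; trim PR size.")

    return findings

# Example usage: diagnose(submitter_metrics(prs))
```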
What to do about it
Trim PR size. All four sub-metrics tend to improve when PRs are smaller and easier to revise in one pass.
If specific authors are slow to respond (high Responsiveness hours), check their workload before coaching; it's usually load, not engagement.
Always pair Submitter and Reviewer metrics. The two together explain most of the review-cycle delay you'll see in Lead Time for Changes.
Related
Reviewer metrics
Team Collaboration Network — overview
Lead Time for Changes
