"You’ve got thousands of automated tests running, and multiple test reports, coverage reports and logs – but you can’t see the forest for the trees. The problem is, you don’t know: is it safe to release? With refined, specific metrics you can define reports (or dashboards) that tell you the real quality of the product, and then decide what to do about it. This case study outlines how we built a quality dashboard with metrics and reports that matter for an application with hundreds of APIs and multiple frontends at a big financial institution. Some features were better covered than others, but what that coverage meant was vague. The dashboard collected information from multiple sources – test reports and coverage reports from Jenkins, custom logs that were mined for information, SonarQube and more. We then added some “brains” to present the analyzed metrics in terms of covered and uncovered test cases, test quality and more, and calculated a confidence level from those metrics. Developers, quality advisors, DevOps engineers and others all contributed. The dashboard helps managers see which features are ready and where the gaps are, and gives developers feedback on how well their tests are working. This session will inspire you to build quality reports that tell you how well your team is doing."
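
The “confidence level calculated from those metrics” could, for instance, be a weighted blend of per-feature signals. The sketch below is purely illustrative – the metric names, weights, and formula are assumptions for this example, not the dashboard’s actual logic:

```python
# Illustrative sketch: blending per-feature metrics into a confidence level.
# All field names and weights here are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class FeatureMetrics:
    pass_rate: float     # fraction of tests passing (0.0-1.0), e.g. from Jenkins
    coverage: float      # code coverage fraction, e.g. from a coverage report
    cases_covered: int   # test cases backed by at least one automated test
    cases_total: int     # all identified test cases for the feature

def confidence(m: FeatureMetrics) -> float:
    """Weighted blend of metrics into a single 0-100 confidence score."""
    case_ratio = m.cases_covered / m.cases_total if m.cases_total else 0.0
    score = 0.5 * m.pass_rate + 0.3 * m.coverage + 0.2 * case_ratio
    return round(100 * score, 1)

# Example: a feature with strong tests but partial coverage.
payments = FeatureMetrics(pass_rate=0.98, coverage=0.75,
                          cases_covered=40, cases_total=50)
print(confidence(payments))  # → 87.5
```

Any such formula is a policy decision: the weights encode what the team considers “safe to release,” and tuning them is part of building a dashboard that people trust.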