In Sprint 19, we caught up on our technical debt, squashed a few bugs, and positioned the team to start building priority features from the Bloomfilter backlog. We also ramped up our focus on design, creating the concepts that will help form the new home page for signed-in users. Exciting updates are coming your way—stay tuned!
Sprint goal
Based on feedback from early adopters regarding our project statistical charts, we set a goal to quickly update the design and data visualization of historical views of time-based charts. Now, you can get a 16-sprint view of important process measures like lead time, reaction time, throughput, velocity, and cycle time.
Updates
Customer facing
We’ve improved the detail-level view of our project statistical charts. Now, you can track historical process results as far back as 16 sprints! Plus, we offer trailing averages over three-month, six-month, and 12-month periods to help you grasp how your time-based statistical results are evolving over these key product development timeframes.
An updated view of project statistical charts, going back as many as 16 sprints with trailing trends.
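The trailing averages described above can be sketched simply: average the most recent window of per-sprint results. This is an illustrative sketch only, not Bloomfilter's actual implementation; the window sizes assume two-week sprints, so the three-, six-, and 12-month periods map to roughly 6, 13, and 26 sprints.

```python
# Hypothetical sketch: trailing averages of a per-sprint metric.
# Window sizes assume two-week sprints (3 months ~ 6 sprints, etc.);
# names and data are illustrative, not Bloomfilter's API.

def trailing_average(values, window):
    """Average of the last `window` values (or all of them, if fewer)."""
    tail = values[-window:]
    return sum(tail) / len(tail)

# Example: cycle time in days for the last 16 sprints (made-up data).
cycle_times = [5.2, 4.8, 6.1, 5.5, 5.0, 4.6, 4.9, 5.3,
               4.7, 4.4, 4.8, 4.2, 4.5, 4.1, 4.3, 4.0]

three_month = trailing_average(cycle_times, 6)   # roughly 3 months of sprints
six_month = trailing_average(cycle_times, 13)    # roughly 6 months
```

Comparing the shorter and longer windows side by side is what surfaces a trend: a three-month average below the 12-month average suggests the metric is improving.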
You can now supply Bloomfilter with complexity estimates using either the “Story Points” or the “Story Point Estimates” fields in Jira.
We can now interpret either the “Story Points” or “Story Point Estimates” field from Jira.

We removed the “green band” from statistical charts on the Projects page to eliminate confusion caused by wide band ranges, resulting in cleaner, easier-to-read bar charts.
Clean, easy-to-read statistical charts on the Projects page.
Non-customer facing
We’re making Bloomfilter’s data picture more dynamic. We successfully migrated Cycle Time, Lead Time, and Reaction Time to a task graph, which reflects real-time activity rather than relying on static date stamps in Jira and GitHub.
What we fixed
This sprint, we tackled a number of minor fixes: we improved the consistency and performance of our project board dropdown selector, corrected our project statistical chart labels, fixed the historical calculation of cycle time, and tightened the wording in our welcome email.
We’ve previously talked about our Burn Confidence band, which measures team performance as compared to expectations in a static representation. However, the development lifecycle is constantly changing and requires teams to respond dynamically. We are excited to announce that our latest update to the Burn Confidence Band transforms the band into a dynamic forecasting tool based on a rolling sequence of burn data from past sprints.
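A rolling forecast band of this kind can be sketched as follows: for each day of the sprint, pool the burn observed at that day across the last several sprints, then draw a band around the mean. This is a minimal illustration under assumed inputs, not Bloomfilter's actual Burn Confidence Band calculation.

```python
# Hypothetical sketch of a rolling burn-confidence band.
# `history` holds, for each recent sprint, the cumulative burn fraction
# (0.0 to 1.0) observed at each day of that sprint. Illustrative only.
from statistics import mean, stdev

def burn_band(history, day):
    """Return a (low, high) band for expected burn on a given sprint day,
    built from the mean +/- one standard deviation across past sprints."""
    samples = [sprint[day] for sprint in history]
    mu = mean(samples)
    sigma = stdev(samples) if len(samples) > 1 else 0.0
    return (max(0.0, mu - sigma), min(1.0, mu + sigma))

# Example: two past sprints' burn fractions at days 0, 1, 2.
past = [[0.1, 0.5, 1.0],
        [0.3, 0.5, 1.0]]
low, high = burn_band(past, 0)
```

Because the band is recomputed from a rolling window of recent sprints, it shifts as the team's actual burn behavior changes, which is what makes it a forecasting tool rather than a static benchmark.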
In Sprint 22, two key features are in User Acceptance Testing (UAT): a new Summary page as the app home screen and an update to our frequently used feature, Sprint Performance.