The DORA Community provides opportunities to learn, discuss, and collaborate on software delivery and operational performance, enabling a culture of continuous improvement.

DORA Community Blog


March 2024 Community News

March 18, 2024 by Amanda

Hello! This month, we will be discussing the distribution of work. I am looking forward to the community sharing their experiences and perspectives on how work distribution impacts individuals and teams, and their approaches to optimizing it. Several great discussions are happening in the mailing list; links to active discussions are below.

DORA.community Upcoming Events:

Community Lean Coffee Discussions (1 hour):

The discussion starts off with a brief presentation, then we use the lean coffee format for the remainder of the discussion.

Work Distribution - March 21st: 16:30 UTC

We will kick-off this community discussion with an overview of the DORA research findings on work distribution, and how work distribution impacts software delivery performance, well-being, and organizational performance.

Speaker: Michelle Irvine leads DORA’s research into documentation and is a technical writer with Google Cloud. Before Google, Michelle produced physics textbooks for Ontario high schools and documentation for physics simulation software.

Metric Monday (1 hour): The discussion starts off with a brief presentation on metrics, then we use the lean coffee format for the remainder of the discussion. March 25th: 16:00 UTC

Upcoming Conferences & Events from the community:

DevOpsDays


In case you missed it:

Asynchronous Discussions:

On the playlist: Catch up on the latest community discussions on the DORA Community playlist:

DORA around the world:

Content & Events

Thank you for your continued collaboration; we truly appreciate it!

Smiles,

Amanda

Testing All the Way Around the DevOps Loop

February 22, 2024 by Amanda

Discussion Topic: Testing

The DORA Community Discussions start with a brief introduction talk, followed by a discussion with the attendees using the Lean Coffee format.

Topics discussed:
How can the DORA metrics contribute to quality?
Recap from the community discussion:
  • DORA metrics focus on process. Lead time, deployment frequency, etc. primarily reveal process efficiency, not product quality.
  • DORA metrics can indirectly influence quality. A smooth process facilitates the focus needed to improve product quality.
  • Look for product-oriented metrics. Consider deployment failure rate to give a high-level picture of quality or supplement DORA with quality-specific metrics.
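To make the process focus of these metrics concrete, here is a minimal sketch, using entirely hypothetical deployment records (the tuple shape and the data are illustrative, not any particular tool's format), of how deployment frequency, lead time for changes, and change failure rate can be computed:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit_time, deploy_time, succeeded)
deployments = [
    (datetime(2024, 3, 1, 9), datetime(2024, 3, 1, 15), True),
    (datetime(2024, 3, 4, 10), datetime(2024, 3, 5, 11), True),
    (datetime(2024, 3, 6, 8), datetime(2024, 3, 6, 20), False),
]

# Deployment frequency: deploys per day over the observed window
days = (deployments[-1][1] - deployments[0][1]).days or 1
frequency = len(deployments) / days

# Lead time for changes: median commit-to-deploy duration
lead_times = sorted(deploy - commit for commit, deploy, _ in deployments)
median_lead = lead_times[len(lead_times) // 2]

# Change failure rate: share of deployments that did not succeed
failure_rate = sum(1 for *_, ok in deployments if not ok) / len(deployments)
```

Note that all three numbers describe the delivery process; none of them inspects the product itself, which is exactly the gap the quality-specific metrics above are meant to fill.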
What about teams that do not have any testing specialists?
Recap from the community discussion:
  • Focus on the outcomes that the team is trying to achieve, rather than just the tasks that need to be completed.
  • Continuous learning is key. Teams should invest time in understanding different testing techniques to compensate for a lack of specialists.
When testing is everywhere, what is the role of specialized testers?
Recap from the community discussion:
  • Testers become quality coaches. They focus on teaching, mentoring, and improving the team's overall quality mindset.
  • Specialists bring new perspective. Their focus on strategy and the "big picture" of testing elevates the whole team.
  • Testers can unblock teams. They help with tasks outside of direct testing, bringing different strengths to the team.
How can we improve quality through observability?
Recap from the community discussion:
  • Observability tools can assist in identifying and diagnosing production issues, enabling teams to learn from and prevent future problems.
  • The use of metrics and tracking to identify areas for improvement was suggested.
How does non-functional testing fit in (scale, security, DR)? Does it get done with every iteration?
Recap from the community discussion:
  • Quality attributes mindset. Avoid the vague term "non-functional," focus on specific attributes like reliability and scalability.
  • Use visual models for planning, like the Holistic Testing Model, to ensure you don't miss important areas.
  • Make time for it. Consciously plan for testing non-functional aspects as early as possible.
  • Leverage new tools. CI pipelines and production testing tools make this easier.
  • Treat operational requirements like functional features. Prioritize them on the backlog, with dedicated time and resources for testing.
There's an adage along the lines of - when everyone is responsible for something, no one is. How does that translate to everyone in a cross-functional team being responsible for quality?
Recap from the community discussion:
  • "Everyone's responsible" can fail. Without clear ownership, testing may get deprioritized.
  • Visual models help. Tools like the Agile Testing Quadrants provide a framework to discuss and plan testing coverage.
  • Assign clear responsibility. Make testing an explicit task within a development process, like code review. Someone needs to be accountable for quality.
  • Consider an enabling team. Some organizations have quality-focused teams that support and set standards for other teams (rather than individual testers within teams).

Open Discussion on Code Review Use Cases

February 8, 2024 by Amanda

Discussion Topic: Code Reviews

The DORA Community Discussions start with a brief introduction talk, followed by a discussion with the attendees using the Lean Coffee format.

Topics discussed:
Suggestions around code review metrics: code review velocity, PR cycle time, etc.
Recap from the community discussion:
  • The Need for Clear Terminology: The terms "velocity", "code review", and "inspection" are often used inconsistently. Participants agreed on the importance of using specific terms like cycle-time, response-time, or review-rate to improve clarity.
  • Metrics: The group emphasized defining metrics based on clear business goals and aligning them with company culture; the metrics recommended by CHAOSS were suggested as a reference.
  • Potential good metrics: time to merge, number of iterations, time waiting for a submitter action, time waiting for a reviewer action, and how reviews are distributed across the team.
  • LinkedIn DPH Framework
  • Context matters - weekends, holidays, vacations, etc. can affect metrics.
  • https://github.com/lowlighter/metrics
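Metrics like these are straightforward to prototype. The sketch below uses hypothetical PR timeline events (not any particular platform's API) to compute time to merge and, by attributing each interval to whoever the PR was waiting on, time waiting for a reviewer action versus a submitter action:

```python
from datetime import datetime, timedelta

# Hypothetical PR timeline: (timestamp, who the PR waits on next), ending in a merge.
events = [
    (datetime(2024, 2, 1, 9, 0), "reviewer"),   # PR opened; waiting on review
    (datetime(2024, 2, 1, 14, 0), "submitter"), # review posted; waiting on fixes
    (datetime(2024, 2, 2, 10, 0), "reviewer"),  # fixes pushed; waiting on re-review
    (datetime(2024, 2, 2, 11, 0), "merged"),
]

# Time to merge: first event to merge event
time_to_merge = events[-1][0] - events[0][0]

# Attribute each interval to whoever the PR was waiting on during it
waiting = {"reviewer": timedelta(), "submitter": timedelta()}
for (start, actor), (end, _) in zip(events, events[1:]):
    waiting[actor] += end - start
```

As the discussion noted, raw numbers like these need context: weekends, holidays, and vacations inflate waiting times unless the calculation accounts for them.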
Where is the value of code review on a high trust team?
Recap from the community discussion:
  • Knowledge sharing across the organization, learning how others work, and making code more homogeneous
  • Code review is more valuable when learning is the result
Code Review vs. Automated Testing
Recap from the community discussion:
  • Automating Reviews and Testing: using automated testing to streamline review processes, along with reviewing the tests themselves to ensure overall code quality.
  • Linters and Auto-Fixing: The value of linters for enforcing style and catching errors was emphasized, along with using auto-fixes within CI pipelines.
  • Testing can help save time and make conversations more valuable: first check the tests, then ask the more complicated questions
  • Adding automation & static analysis into the CI pipeline is often called "Continuous Review" (Duvall, 2007). Adding AI into that was dubbed "Continuous Intelligence" ca. 2018 or so (along with "Continuous Compliance")
  • Context matters -- age of codebase, team culture, team expertise, industry
  • Paul Hammant's blog - The Unit of Work Should Be a Single Commit
  • Trust and Test-Driven Development for People
Can AI replace a human in code review? When, where, and how?
Recap from the community discussion:
  • AI vs. Traditional Code Analysis: The potential of AI-powered code reviewing tools was discussed and compared with existing code analysis approaches. It was noted that AI reviewers could go beyond traditional tools, but it's important to see how they function in practice.
  • Code Written by AI vs. Code Reviewed by AI: The conversation highlighted the distinction between these two scenarios, with implications for code quality and the role of human developers.
  • Be aware of the AI training set
  • Human judgement is important

February 2024 Community News

February 1, 2024 by Amanda

Hello! Thank you for all of the wonderful discussions in January! It was exciting to meet several members for the first time. We discussed what teams are focused on in 2024, Metrics, and Applying DORA in your Context. If you missed any of the discussions, you can watch intro presentations, and read event recaps on the DORA.community blog.

We will continue to explore how teams can improve their code reviews, with a presentation by Daniel Izquierdo Cortázar and a community discussion on Thursday, February 8th. On February 22nd, we will have a community discussion on testing; the discussion will start with a presentation by Lisa Crispin. If you are looking to connect with fellow DORA community members, this month’s community connection is on February 14th. We will end the month with Metric Monday discussions on February 26th. More information on the events is below.

Are you located in or near Boulder, CO? The DORA Advocacy team will be in Boulder on February 20th. Let us know if you are interested in a meet up by filling out this form.

A special thank you to everyone who has participated in the DORA research, and to everyone who voted for DORA in the 2023 DevOps Dozen Community Awards. The DORA report won!! 🎉 🎉 🎉

What’s New?:

DORA.community Upcoming Events:

Community Connections (45 minutes)

We break into small groups for networking, then spend the last 10 minutes sharing as a larger group.

February 14th: 12:30 UTC & 21:00 UTC

Community Lean Coffee Discussions (1 hour):

The discussion starts off with a brief presentation, then we use the lean coffee format for the remainder of the discussion.

Open Discussion on Code Review Use Cases - February 8th: 15:00 UTC

Code review is essential in software development nowadays. It brings certain advantages to the organization producing code (whether the code is developed in the open or internally as proprietary software), such as a better knowledge-sharing process, faster onboarding of newcomers, greater organizational resilience, and faster bug hunting in the software production chain.

This talk will share some public use cases and the rationale for doing code review, with real-life examples for illustration, and then open the discussion for the rest of the hour.

Speaker: Daniel Izquierdo Cortázar, CEO @ Bitergia, President @ InnerSource Commons Foundation, CHAOSS Governing Board. Daniel Izquierdo Cortázar is a researcher and one of the founders of Bitergia, a company that provides software analytics for open and InnerSource ecosystems. Currently holding the position of Chief Executive Officer, he is focused on the quality of the data, research on new metrics, and analyses and studies of interest for Bitergia customers via data mining and processing. Izquierdo Cortázar earned a PhD in free software engineering from the Universidad Rey Juan Carlos in Madrid in 2012, focused on the analysis of buggy developers' activity patterns in the Mozilla community. He is an active contributor and board member of CHAOSS (Community Health Analytics for Open Source Software).

Testing All the Way Around the DevOps Loop - February 22nd: 20:00 UTC We hear a lot of talk about “shift left” in testing. And, if you do an internet search for “DevOps loop”, you’re likely to see images that feature a “test” phase – sometimes on the left side, sometimes on the right. But software development is not linear, and testing is not a phase. Testing is an integral part of software development. Testing activities happen all the way around the infinite DevOps loop. Let’s look at examples of where testing happens, especially in the right-hand side of that DevOps loop. And, talk about who can and should engage in those activities.

Speaker: Lisa Crispin is an independent consultant, author and speaker based in Vermont, USA. Together with Janet Gregory, she co-authored Holistic Testing: Weave Quality Into Your Product; Agile Testing Condensed: A Brief Introduction; More Agile Testing: Learning Journeys for the Whole Team; and Agile Testing: A Practical Guide for Testers and Agile Teams; and the LiveLessons “Agile Testing Essentials” video course. She and Janet co-founded a training company offering two live courses world-wide: “Holistic Testing: Strategies for agile teams” and “Holistic Testing for Continuous Delivery”.

Lisa uses her long experience working as a tester on high-performing agile teams to help organizations assess and improve their quality and testing practices, and succeed with continuous delivery. She’s active in the DORA community of practice. Please visit lisacrispin.com, agiletester.ca, agiletestingfellow.com, and linkedin.com/in/lisacrispin for details and contact information.

Metric Monday (1 hour): The discussion starts off with a brief presentation on metrics, then we use the lean coffee format for the remainder of the discussion. February 26th: 10:00 UTC & 20:00 UTC

Upcoming Conferences & Events from the community:

DevOpsDays


In case you missed it:

Asynchronous Discussions:

On the playlist: Catch up on the latest community discussions on the DORA Community playlist:

DORA around the world:

Content & Events

Thank you for your continued collaboration; we truly appreciate it!

Smiles,

Amanda

© 2023 DORA is a program run by Google Cloud. All content on this site is licensed by Google LLC under CC BY-NC-SA 4.0, unless otherwise specified.