Measuring Software Delivery With DORA Metrics

If we want to do a better job of software development, we need some way to define what “better” means. The DORA metrics give us that. So what are the DORA metrics and how should we use them? They provide measures that evaluate the quality of our work and the efficiency with which we do work of that quality. So good scores on these metrics mean that we build better software faster.

In this episode, Dave Farley, author of "Continuous Delivery" and "Modern Software Engineering", describes how we can apply these measurements to drive software development to deliver on this state-of-the-art approach, and also explores a few of the common mistakes that can trip us up along the way. DORA stands for DevOps Research & Assessment, and is now a group at Google focused on measuring software development performance using scientifically justifiable research and analysis techniques.
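To make the measurements concrete, here is a minimal sketch of how the four DORA metrics - deployment frequency, lead time for changes, change failure rate, and time to restore service - could be computed from deployment and incident records. All names, data structures, and numbers are invented for illustration; this is not official DORA tooling.

```python
from datetime import datetime, timedelta

# Invented deployment records: (commit time, production deploy time, caused a failure?)
deployments = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 15), False),
    (datetime(2024, 1, 2, 10), datetime(2024, 1, 2, 11), True),
    (datetime(2024, 1, 3, 8), datetime(2024, 1, 3, 9), False),
]
# Invented incident records: (failure detected, service restored)
incidents = [(datetime(2024, 1, 2, 11), datetime(2024, 1, 2, 13))]

days_observed = 3

# 1. Deployment frequency: how often we release to production.
deploy_frequency = len(deployments) / days_observed

# 2. Lead time for changes: from commit to running in production.
lead_times = [deploy - commit for commit, deploy, _ in deployments]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# 3. Change failure rate: fraction of deployments that caused a failure.
change_failure_rate = sum(failed for *_, failed in deployments) / len(deployments)

# 4. Time to restore service: how quickly failures are fixed.
mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

print(f"Deploys per day:     {deploy_frequency:.1f}")
print(f"Mean lead time:      {mean_lead_time}")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Time to restore:     {mttr}")
```

Note that all four numbers fall out of data a delivery pipeline already produces; no survey is required.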

_____________________________________________________

📚 BOOKS:

📖 "Continuous Delivery Pipelines" by Dave Farley

📖 Dave’s NEW BOOK "Modern Software Engineering" is available here

NOTE: If you click on one of the Amazon Affiliate links and buy the book, Continuous Delivery Ltd. will get a small fee for the recommendation with NO increase in cost to you.

_____________________________________________________

🔗 LINKS

-------------------------------------------------------------------------------------

Also from Dave:

🎓 CD TRAINING COURSES
If you want to learn Continuous Delivery and DevOps skills, check out Dave Farley's courses

📧 Get a FREE "TDD Top Tips" guide by Dave Farley when you join our CD MAIL LIST 📧

-------------------------------------------------------------------------------------


COMMENTS:

For me, SAD AF is how I feel after a lifetime of software development without these approaches. Well, better late than never. This video series is an absolute goldmine. Thank you for making them.

robertgrant

Really liked the statements in the video.
"There is no silver bullet. Software Engineering to too complicated to do well.
DORA metrics are trailing indicators, They tell you how you did, not how are you going to do.
Approaches like Continuous Integration, TDD and Continuous Deployment predict good scores on these trailing indicators."
Vikas

softwarearchitecturematter

I've sort of realized this for a while, and this really helps to put things together. I also realize this requires self-organizing teams compensated with bonuses on a bell curve. There must be disciplined testing and a strong NCO corps to lead teams. Requirements must flow from the top through a competent officer corps operating on the Prussian model (results-directive based). Also, technical debt should apply a negative multiplier to bonuses until rectified, and this means there must be siloed bug testers and report verification. All of these structures benefit from scale, and I intend to exploit that.

screwstatists

I feel like this is very true, but within the context of software development.
In a larger context, we're trying to effect change of some sort. Building software is a means to that end, but it might very well be that writing software - however well done - is not the most effective way to bring about the change we seek. Therefore I feel it is important to attempt to measure the impact of our software on the situation, such that we can be sure that our development is bringing about the desired change. This is probably very hard and very context dependent.

RoelBaardman

I have the audio version of your Modern Software Engineering book. Great book, and it easily became one of the books I most recommend to developers.

EldonElledge

Every metric which becomes a benchmark for performance (and productivity is also a sort of performance) loses its ability to be a good metric. This is one thing I learned from the social sciences. The key argument here is: if you rank people by a metric - aka benchmark them - then they try to optimize their behavior towards the metric, but the metric is usually an indirect measure of the thing you are really interested in. So the optimization by the people may not be an improvement of that thing, but only of your metric. This is also an issue in university rankings, school grades, and people competing to have the fastest graphics card (people optimize their code so that the metric gets better values, not to be more performant overall).

reinerjung

I have 2 follow-up questions. I have been a big fan of DORA metrics for years! When I adopted them, it allowed us to move from mid-level to elite on many metrics (just high on others), and they still give us actionable information on how to work better. Before that, we were expecting velocity to tell us how well we were doing 😑

What is the relationship between the DORA State of DevOps report and the Puppet or BMC State of DevOps reports?

What are good leading metrics for team performance? Or at least decent ones?


Brilliant content as always. The scenario you describe is exactly the situation my current organisation is in. It very neatly captures my reasons for leaving the organisation.

thedazman

It's good to know you're accredited.

bryanfinster

Short and on point as always, thanks!

orange-vlcybpd

I've seen people object that these correlational studies aren't controlling for the relative difficulty of the product that different teams are working on. So it might be the case that "elite" teams - those who deploy frequently with fewer regressions - are, on average, working on very simple things, whereas "low-performing" teams are, on average, working on incredibly complex problems. Is there any effort to control for this sort of thing in the statistics?
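A minimal sketch of the kind of control being asked about: simulate a "difficulty" confounder that drives both deployment frequency and outcomes, then compare the naive regression slope with one that includes difficulty as a covariate. The data is entirely simulated for illustration, and implies nothing about how DORA's own analysis handles this.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated confounder: how hard each team's problem is.
difficulty = rng.normal(size=n)

# Harder problems reduce deployment frequency AND hurt performance,
# so difficulty confounds the raw correlation between the two.
deploy_freq = -0.8 * difficulty + rng.normal(size=n)
performance = 0.5 * deploy_freq - 0.7 * difficulty + rng.normal(size=n)

# Naive estimate: regress performance on deployment frequency alone.
X_naive = np.column_stack([np.ones(n), deploy_freq])
naive_slope = np.linalg.lstsq(X_naive, performance, rcond=None)[0][1]

# Adjusted estimate: include difficulty as a covariate.
X_adj = np.column_stack([np.ones(n), deploy_freq, difficulty])
adj_slope = np.linalg.lstsq(X_adj, performance, rcond=None)[0][1]

print(f"Naive slope:    {naive_slope:.2f}")   # inflated by the confounder
print(f"Adjusted slope: {adj_slope:.2f}")     # close to the true effect of 0.5
```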

michaelrstover

Great content as always. I have been using the DORA metrics as key indicators of teams' growth in terms of efficiency and quality of delivery for some years, and I 100% agree with your comments and the overall point of the video. I believe my experience confirms your statements.
That said, I would have liked to hear more of your opinion on the correlation between the DORA metrics and delivery performance. Why does this correlation exist?


Just when I was looking for something to listen to in the car!

jangohemmes

Great watch as always! I watched a presentation the other day about Flow metrics... is it a case of either/or with Flow/DORA, or could they be used in conjunction? I seem to remember some leading indicators as part of the Flow demo...

simonlee

Ah yes. And always a very good video. And thanks for all the source references.

reinerjung

Unfortunately, the DORA report does not tell the true story - or perhaps there is no proper guide to how it should be measured. My company recently adopted DORA reporting, and every project shares their DORA report. Unfortunately, it is evaluated by the team themselves, or by the 'DevOps' engineer role on the team. Surprisingly, most projects score above 70% or even 80%, which I highly doubt, because many of them don't write proper test coverage, only claim to be using TBD, and work in silos (they have separate FE and BE teams); most if not all of them understand DevOps as just something between Devs and Ops, and they have a dedicated DevOps engineer role.
Recently one project shared their 'DevOps' practices, tools and pipeline. After asking a few questions, I found that they have 2 branches, one DEV and another MASTER, while also claiming to use TBD... That means a feature must first merge into DEV to be manually tested before it can go to MASTER. If a hotfix is needed, it can take up to 1 week to deploy to PROD. Furthermore, their feature branches sometimes last more than 1 sprint, and they practice 1 dev per branch. I could see so many problems and disasters in working on such a project, and yet the team or their 'DevOps' engineer still manages to score the project above 70%.
It's because the questions are answered by the team, and it depends on how they interpret them.
They might commit daily to a feature branch and, on that basis, state that they are doing Continuous Integration (even though they don't really practise it - in my experience, people working on feature branches tend NOT to work in small changes and usually don't commit frequently; furthermore, when they need to resolve conflicts, they delay the commit even further).

Unfortunately, management only looks at the report and doesn't truly look into the details or their meaning, and they were actually proud to push this to every project. We also have other kinds of reports and evaluations, and they definitely don't reflect the actual state. The problem is that the teams evaluate themselves and give the score, instead of someone well versed in the topic performing the evaluation; furthermore, when they set up the group that introduced this topic, they didn't evaluate who was right - people just volunteered, mainly from extrinsic motivation, and rely only on the numbers.
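One way around the self-assessment problem described above is to derive the answers from version-control history instead of questionnaires. A minimal sketch, using invented branch records, of checking a claimed CI practice against measured feature-branch lifetimes:

```python
from datetime import datetime
from statistics import mean

# Invented branch records pulled from the VCS: (branch created, merged to trunk).
branches = [
    (datetime(2024, 3, 1), datetime(2024, 3, 15)),   # lived more than a sprint
    (datetime(2024, 3, 4), datetime(2024, 3, 22)),
    (datetime(2024, 3, 10), datetime(2024, 3, 12)),
]

lifetimes = [(merged - created).days for created, merged in branches]
print(f"Mean branch lifetime: {mean(lifetimes):.1f} days")

# Teams practising CI integrate into trunk at least daily, so a mean
# branch lifetime measured in days or weeks contradicts a self-reported
# "we do Continuous Integration" answer.
if mean(lifetimes) > 1:
    print("Measured data contradicts the claimed CI practice.")
```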

cubkgmq

Unfortunately, for any metric there is usually a lag between the undesirable behavior and its negative consequences. This allows many poor performers to be long gone before the damage becomes apparent. For code, that might mean poorly written but up-to-spec code whose problems aren't discovered until someone wants to make a simple change and the whole thing comes crashing down. The same principle applies to construction crews and heads of state.

bobthemagicmoose

How can we use these metrics when a project is brand new and, during all the initial setup, we cannot release to the public because the basic functionality takes several sprints? Even if we split up the basic functionality, it is not shippable within one single sprint.

KarolGallardo

SAD AF is your emotion when you have to debug a badly designed system.

orange-vlcybpd

At times you do want to hack something together that's really not maintainable.

For instance, in papers on ML it's quite common for the code to be thrown out after the paper is written.

Which can make for ridiculously awful code, since you are trying to deliver as quickly as possible with zero need to maintain it later.

nevokrien