Where Agile Gets It Wrong

Agile software development is widely misinterpreted. The values expressed in the Agile Manifesto are just as true and important to adhere to today as when they were written over 20 years ago, but in practice agile as an approach has made many missteps. One of these is the assumption that self-organising means that everyone gets a veto on every idea, but this is not sustainable in an engineering approach to software. The agile methodology is grounded in sensible, pragmatic software engineering, but for that to work as a development process it must be adopted by a team; it doesn’t work if every individual decides for themselves. We must find ways to evaluate ideas as a group and agree on which are good and which are bad.

In this episode, Dave Farley, author of the best-selling books “Continuous Delivery”, “Modern Software Engineering” and “CD Pipelines”, explores the idea that agile made a mistake by being too equivocal, and describes one technique, Carl Sagan’s Baloney Detector, that we can use to identify the bad ideas, and how it applies to software.

-

⭐ PATREON:

-

🖇 LINKS:

-

👕 T-SHIRTS:

A fan of the T-shirts I wear in my videos? Grab your own, at reduced prices EXCLUSIVE TO CONTINUOUS DELIVERY FOLLOWERS! Get money off the already reasonably priced t-shirts!

🚨 DON'T FORGET TO USE THIS DISCOUNT CODE: ContinuousDelivery

-

BOOKS:

and NOW as an AUDIOBOOK available on iTunes, Amazon and Audible.

📖 "Continuous Delivery Pipelines" by Dave Farley

NOTE: If you click on one of the Amazon Affiliate links and buy the book, Continuous Delivery Ltd. will get a small fee for the recommendation with NO increase in cost to you.

-

CHANNEL SPONSORS:

#softwareengineer #developer #agile
Comments

Maybe it's just me, but the title doesn't seem to match up with the content of the video.
The content is about how to identify bad ideas whereas the title implies that the content is about problems with Agile.
The description also does not match the content of the video.

Rope

I'd love for Dave to do a video where he is pair programming with someone and leading them through how to "don't branch" for the first time. All the theory is awesome, but for some people it doesn't click until they see a practical example.

AndrewEddie

The real challenge with these ideas is getting any team to agree to try them out. For some reason all the data and logic in the world often just isn't enough to pull people out of their comfort zone. Being wrong and doing things badly is easier than learning and changing.

rthomasv

Next episode for you - and I do mean this seriously: How saying "not all opinions are good" gets you branded as "not a team player" and the insecurities of leaders in our industry nowadays.

lost-prototype

The key step for achieving Continuous Delivery is the automation of the deployment pipeline beyond running unit tests. When the pipeline also automatically runs integration tests and a set of acceptance tests, and then deploys the software if all is well, that is when you start getting the benefits of this style of working.

Most teams are scared to automate the final parts of the pipeline and have some sort of manual QA step holding things up. Those teams will never see those benefits, and most will have an over-worked Architect or QA team struggling to keep up with the changes.

TheEvertw
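
To make the stages in the comment above concrete, here is a minimal Python sketch of such a pipeline driver. The test paths, pytest commands and the `deploy.sh` script are illustrative assumptions, not anything from the video.

```python
import subprocess
import sys

# Ordered pipeline stages: (name, command). All paths and commands below
# are illustrative assumptions, including the hypothetical deploy.sh.
STAGES = [
    ("unit tests", ["pytest", "tests/unit", "-q"]),
    ("integration tests", ["pytest", "tests/integration", "-q"]),
    ("acceptance tests", ["pytest", "tests/acceptance", "-q"]),
    ("deploy", ["./deploy.sh", "production"]),
]

def run_pipeline() -> int:
    for name, cmd in STAGES:
        print(f"--- running stage: {name}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Fail fast: a red stage stops the pipeline before deployment.
            print(f"pipeline failed at stage: {name}")
            return result.returncode
    print("all stages green, software deployed")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```

The point is only the shape: deployment happens as the last automated stage and never runs unless every earlier stage is green.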

Having read Dave's books and watched his videos, I can say from first-hand experience that implementing these ideas has demonstrated significant improvements for the projects I'm involved with. It takes time for team members to digest and implement them, but when you do, you achieve real process improvement.

esra_erimez

My current interpretation of why developers argue against things like TDD, pair programming or CI is that those arguments are just justifications. They aren't really arguing in order to work better; they are finding excuses so they can stick with processes and tools that let them work in isolation, without having to engage in intense collaboration with others on their teams. Many of these developers optimize their process for minimal interaction first and for efficient software development a distant second.
It is rare to find developers who are willing to sacrifice their own comfort and mental energy so they can achieve more efficient, team-focused and long-term results.

RFalhar

In recent years I see the principle of "standing on the shoulders of giants" more and more being replaced by "it's old, it's wrong, I know better", and in a rather immature way. Not only in the area of software engineering, but more as a Zeitgeist thing.

miro

I agree that for 80% of commits feature branches should be avoided. But I think that when making complex changes, a feature branch is sometimes necessary. I say this with caveats: a) merge/rebase from master into your feature branch regularly; b) if, while working on your feature branch, you have a small isolated fix/refactor, get that committed to master first, and then merge/rebase it into the feature branch.

GaryvanderMerwe
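
A sketch of the routine the caveats above describe, driven from Python purely for illustration; the branch names, the remote name and the rebase/merge choices are assumptions, not a prescription from the video.

```python
import subprocess

def git(*args: str) -> None:
    # Run a git command and raise if it fails.
    subprocess.run(["git", *args], check=True)

def sync_feature_branch(feature_branch: str = "feature/complex-change") -> None:
    # (a) Regularly bring master's latest changes into the feature branch.
    # Branch and remote names are illustrative assumptions.
    git("fetch", "origin")
    git("checkout", feature_branch)
    git("rebase", "origin/master")

def publish_isolated_fix(fix_branch: str = "fix/small-refactor") -> None:
    # (b) Land a small, isolated fix/refactor on master first...
    git("checkout", "master")
    git("pull", "origin", "master")
    git("merge", "--no-ff", fix_branch)
    git("push", "origin", "master")
    # ...then fold master (now including the fix) back into the feature branch.
    sync_feature_branch()
```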

This channel is a cure for co-worker's ignorance at my workplace. Thank you, Mr. Farley!
Sadly, there is no way to teach people who don't want to learn.

snowy

One of the best episodes, if not the best, in quite some time IMO. Not because I didn't like the others, but because it's tackling such an important and under-discussed topic: biases in our industry. In the end, it's reality that matters, and those who delude themselves are simply more likely to fail.

SalaHUN

If I am anything to go by, one reason people might shy away from CI/CD is the fear of failing quite publicly. A lot of us don't know it, but we were taught to be ashamed when we were wrong about something. While I wholeheartedly agree that failure is innovation's best friend, and will absolutely urge others around me to shake off the fear of failure, I find it quite hard to shake myself.

praecorloth

'Someone you despise...' And then shows a photo of the crappy Steve from Apple! 😂😂😂 Classic! 👌

jebotipasmater

18:15 - A successful merge is just that. It does not validate that the final result satisfies business needs. Nor does trunk-based development or pair programming. What validates a solution are the tests run against that codebase, no matter what the branching/merging strategies are. Are the tests covering all business cases? Are the tests failing? If yes, you go and have a look because you broke something. Maybe the break is intentional (requirements changed), or maybe not.

ForgottenKnight

The problem with TDD is that in order to use it, the software needs to be designed and built from the start to support using or isolating any portion of the code without starting the whole thing. If you're dealing with legacy code that doesn't allow injecting alternate dependencies, or software that relies on multiple threads, it becomes much harder to do.
Naturally you can try to refactor existing code to enable TDD, but that means changing existing code, which would require the old process of manual testing, not to mention approval from management, who may not see things the same way you do. It's a cycle you're stuck in.

emaayan
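
One common way around the constraint described above is to introduce a seam and inject the dependency, so a slice of legacy code can be tested without starting the whole system. A minimal Python sketch follows, with all names invented for illustration.

```python
# All class, function and field names here are invented for illustration.
#
# Before: the collaborator was hard-wired, so a unit test would need the
# whole system running. (Sketch of the legacy shape, commented out.)
#
#     def process_order(order):
#         price = PricingService().quote(order)   # talks to a live system
#         return price * 1.5

class PricingService:
    """Stand-in for a dependency that normally needs the whole system."""
    def quote(self, order: dict) -> float:
        raise NotImplementedError("requires the live system in production")

def process_order(order: dict, pricing=None) -> float:
    # After: the collaborator is injected, with the old behaviour as the
    # default, so existing callers keep working while a test passes in a fake.
    pricing = pricing or PricingService()
    return pricing.quote(order) * 1.5  # apply a 50% markup

class FakePricing:
    def quote(self, order: dict) -> float:
        return 100.0

def test_process_order_applies_markup():
    assert process_order({"sku": "A"}, pricing=FakePricing()) == 150.0
```

The default argument keeps the old behaviour for existing callers, so the refactoring needed to enable the first test stays small.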

I recently left a team because of the frustration of not being able to merge into `master`. Twice I had to spend two weeks merging, catching up with other people, re-merging, waiting for approval from the Architect, etc., and hoping to find a moment when nobody else was checking stuff in. At one point I got angry and demanded that my changes be accepted without review and that nobody merge anything into master until mine were in. That was a 12-man team heavily using feature branches. I did stay until I felt I had done enough to fulfill my contract, then quit.

I am all for committing directly into main (after some sanity checks & local unit tests). These complex acceptance procedures give a false sense of security, but in effect only raise frustration and stress levels.

TheEvertw

I work with a lot of science and engineering code where we are building simulations for making medicine. I have been able to get branches down to about a week on average, but I found that trying to shorten them further caused more problems. I think part of the issue is that many of the people doing development are not primarily developers: they are engineers and scientists who need to get a model built, and they need experienced support and someone who can look over their code before it gets merged in. The other part is that when building a model it is often hard to determine what correct is. The branches tend to get more group discussion on whether the approach being used to solve the problem is the right one.

Immudzen

I love this. My biggest fight is with agile teams equivocating way too much, especially with things like demanding that user stories have hour estimates, or pulling stories into a sprint before the existing stories have sufficient testing. I continually have stakeholders pushing these ideas, and when I push back they throw the ‘dogmatic’ label at me.

donparkison

I don't think a video like this really helps in convincing the people who call you dogmatic, because I feel like you're not even aware of what their complaints are. If someone calls you dogmatic, and you pull out a video where you torture the scientific method to arrive at the exact same conclusion you were already convinced was true before, you look *more* dogmatic, not less. Since you want to see the holes in your argument:

1. Some ideas might be dumb, but that doesn't mean they don't have any area of applicability: The earth might not be flat, but on a small enough scale, that's a perfectly valid assumption.

You might dislike waterfall development, but in certain environments it's pretty much *the only available option*, so trying Agile in those environments is just a distraction that wastes everyone's time.

2. Cherry-picked examples: You support your claims by pointing to teams at successful companies that have seen good results, without mentioning that other teams *inside* those same organizations have done *other* things and got good results as well. It's almost as if being a big company that can afford to pay extra for talent might have an impact on performance!

Showing teams in successful companies finding good results with these methods is just not enough. You'd need to show for example that they don't get good results with other methods, or that they at least get better results, and then *maybe* you have something. Then you just need to make sure that you can replicate that across companies, across teams, across projects, ... Not doing that is just cherry-picking.

3. Discrediting preferences: The problem with the argument being made is that on this topic *preferences are actually important*. I can *measurably* see in my work that I'm more productive when I have scented candles in my office. I am more comfortable with them and can stay longer at my desk actually working instead of being distracted. That is *my* preference, and that same candle might be an annoyance to other people, so it's perfectly reasonable to infer that preferences (even silly ones) have an actual impact on performance.

Like in point 1, you're generalizing your conclusions without real evidence that they are general principles.

4. Aim to prove/try it: If you are being skeptical about your own ideas, reports from other people saying they did not find the same results should be interesting and counted as evidence, not dismissed with "you're doing it wrong". Doing that makes your argument an unscientific one, since you're making it unfalsifiable.

theondono

Martin Fowler recently updated his article on continuous integration and stated that, based on his knowledge, it's not suitable for projects with no fixed team assigned to them, e.g. open source projects. How do you feel about that?

ClaudioBrogliato