3 Key Version Control Mistakes (HUGE STEP BACKWARDS)

Version Control is pervasive these days and is fundamental to professional software development, but where does it go next? Git, often via platforms like GitHub, GitLab or BitBucket, is by far the most popular VCS, but its value is being watered down, and the next steps in software development look set to ignore this vital tool of software engineering. Without the ability to step back safely from mistakes and re-establish "known-good" starting points from Version Control, we lose the ability to make incremental progress, and with that loss we also lose the ability to create truly complex systems.

In this episode Dave Farley, author of the best sellers "Continuous Delivery" and "Modern Software Engineering", describes the essential role of version control that we often ignore, and explores 3 ways in which we often compromise the value and utility of version control in our software projects.
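A minimal sketch of the "known-good starting point" idea in plain Git (the tag name is illustrative, not from the video):

```
# Mark a state of the code you trust
git tag known-good-2024-06

# ...experiment, make mistakes...

# Step back safely to that known-good state
git reset --hard known-good-2024-06
```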



#softwareengineer #developer #git #github #versioncontrol
Comments

One thing I've noticed throughout my career is that a lot of teams are far too quick to throw away version control history. At nearly every company I work for, I end up having to argue against a plan that involves throwing away version history. This happens when switching version control systems (even ones that have an importer) or when restructuring repos. I've done quite a bit of version control sleuthing on bug fixes where the history I'm looking for is years or even decades old. Oftentimes version control history is the only documentation that exists for why some complex logic exists, whether it is still needed, and what it should do.
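A sketch of the kind of history sleuthing described here, assuming standard Git commands (file paths and the search string are illustrative):

```
# Find commits that added or removed a particular string anywhere in history
git log -S "retry_count" --oneline -- src/billing/

# Follow a single file's history across renames
git log --follow --oneline -- src/billing/invoice.py

# See which commit last touched each line, and by whom
git blame src/billing/invoice.py
```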

username

I use version control for basically everything these days: writing letters, my study notes, and config files. Never having to worry about a mistake setting you back more than a few commits is a powerful thing indeed.
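A trivial sketch of what this looks like in practice, assuming a plain-text notes directory (the path is illustrative):

```
cd ~/notes
git init
git add .
git commit -m "Snapshot of study notes"
# From now on, any mistake is at most a 'git checkout' or 'git revert' away.
```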

MrLeeFergusson

Version control gives you "undo" in software projects. No one would use a word processor without undo.

PhilmannDark

Where I work we tend not to do rollbacks to previous versions, at least not in production. If a bug is introduced, it can swiftly be tracked down to a specific unit of work and the specific commit that was made. That commit can be reverted and a hotfix produced with the reversion. This then increments the minor version of the main branch and of the production code. Given that each release can represent 100 separate JIRA tickets across maybe 5 teams, this more surgical approach works better for us than removing all the work that worked well in a given release.
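A minimal sketch of that surgical revert-and-hotfix flow in Git (the SHA placeholder and version number are illustrative):

```
# Undo only the offending change; everything else in the release stays intact
git revert <sha-of-bad-commit>     # creates a new commit reversing it

# Cut the hotfix, bumping the version
git tag v2.7.1
git push origin main --tags
```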

Skiamakhos

A VCS should be taught as early as possible. Git has the advantage that you can use it offline. You can even use Git on your phone. You won't use your phone for a serious project, but for toy projects it's a good tool. If you use Git for toy projects, where you can experiment, a Git disaster in serious projects is less likely.

Hofer

Thanks, great video. At some point I thought you were advocating for monorepos, which to some extent you do (and that's great), but then I realized you want to address a few more general ideas.
The reality is that in S.E. most people don't understand Git or version control in general; they only learn some commands for common tasks, which is counter-productive because V.C. then appears to them as some sort of "enemy", an obstacle they have to overcome, instead of a tool that helps them maintain their code. An S.E. who doesn't understand V.C. trembles in fear of having conflicts with their coworkers, while one who understands it embraces those conflicts and, if anything, would like to see more. For example, you refactor a function in your branch along with all its uses, but on my branch I introduce a new usage of the pre-refactored function, in a new file. We both merge into the "main branch", and Git will tell us both that everything is fine... of course the code will probably fail to even compile after that (sketched below). If anything, I would want a future Git to have at least that: understanding code semantics and giving us conflicts in situations like this, instead of treating code as plain text.
In a similar way, the monorepo idea is largely dismissed in favor of "per project" repositories. At my last job we were about 15 engineers and there were 300+ repositories, because the lead engineer there didn't understand V.C. at all and was obsessed with creating new repositories, most of the time copying existing repositories into new ones to make some changes there... madness, the complete opposite of DRY. Many companies are hiring DevOps engineers and then asking them to do what System Engineers used to do, sorting out the runtime stuff, without having a say in how the SDLC should be improved.
Subscribed!
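A sketch of the semantic-conflict scenario above, assuming a shared base branch and hypothetical function/file names:

```
# Branch A renames a function and updates every existing call site.
git checkout -b rename-fn main
# ...rename calculate_total() to compute_total() across the codebase...
git commit -am "Rename calculate_total to compute_total"

# Branch B, started before the rename, adds a NEW file calling the old name.
git checkout -b add-report main
# ...report.py calls calculate_total()...
git add report.py
git commit -m "Add report built on calculate_total"

# Both merges are textually clean -- the branches touch different files...
git checkout main
git merge rename-fn
git merge add-report
# ...but the build now fails: report.py calls a function that no longer exists.
```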

kozas

It is a shame that all the knowledge that could be shared always has a negative connotation; I would love to see more positive videos on this channel.

nardove

Very good points. My personal gripe with low-code systems is that they often don't integrate well with version control. If the app's state is, for example, in a binary or similar format, VC doesn't help much.
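A small illustration of why that hurts, assuming the low-code tool exports its app state as an opaque file (the extension is hypothetical):

```
# .gitattributes -- tell Git these exported app definitions are opaque binaries
*.appexport binary
# Git will still version the files, but diffs collapse to "binary files differ",
# so the history stops being a readable record of what actually changed.
```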

villekauppila

I only work in traditional coding environments, but I have used no-code systems a few times when helping out a friend with their website, and I'm also responsible for my housing community's website. The lack of version control in those systems has made me terrified and ready to pull out my hair every time I've used them. Each time something breaks I have to mentally step back through each step to figure out what went wrong, and if I make a big change and realize it wasn't good, tough luck. At least one of these frameworks has the feature of "work-in-progress" changes that don't take effect until you press publish. But the other framework doesn't even have that. Every little change is instantly published.

ersia

I'm not sure about a git successor, but I've used a lot of different version control systems and git absolutely could be improved on. It does a lot of things right. Probably the big thing git and other DVCSs got right is allowing easy branching and merging. A lot of prior systems didn't handle this well at all; even SVN didn't have merge tracking for a long time. But there is a ton we lose with git too. The command-line commands are pretty random and arbitrary. It feels like a random pile of shell scripts (which is partially correct). There is no way to manage which branch should integrate into which. No permissions. No file locking (yeah, LFS is a hack). No bug tracking integrations built in. You have to git stash way too much to do basic things. Because a branch is a pointer, commits don't track what branch they were made on. Deleting a branch can delete history. So yes, git is an improvement over many other systems, but it is also a step back. First we should get back what we lost, and then we can talk about the future.
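One partial mitigation for the branch-deletion problem mentioned here: the commits usually survive in the reflog until garbage collection, so they can often be recovered. A minimal sketch (branch names are illustrative):

```
# A branch is just a pointer; deleting it can orphan its commits.
git branch -D experiment

# The commits typically remain reachable via the reflog for a while:
git reflog                                   # find where HEAD used to point
git branch experiment-restored <sha-from-reflog>
```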

username

This brings succinct clarity to a critical topic that we all too often take for granted.

esra_erimez

As a seasoned Endevor administrator (Endevor is the main software configuration management tool used on the IBM mainframe platform), I found this a fascinating video which I mostly agree with.

I'm from a modularized, mostly "waterfall" school of the software development life-cycle, so the idea of incremental change, being able to reliably fall back to a previous version, and testing from a known position (which may include known bugs and feature lacunæ) is a key requirement as I see it.

charlesgaskell

I absolutely agree. Thank you for sharing!

PovlKvols

Simply put but pure gold. Thank you Dave

nviorres

There are certainly scenarios where this approach can work well, but it appears more niche than presented here. Each micro-service should ideally have its own independent lifecycle. Grouping them all in one repository, presumably using submodules, for validation and deployment presents an interesting concept. However, I doubt this would scale well or offer much flexibility, especially with distributed teams or heterogeneous tech stacks. The fundamental challenge is governance over your application landscape, which should include system integration level version validation. This is only one of many valid approaches, and others might be better depending on the circumstances. For instance, using a dedicated testing solution that pulls packages from systems like NuGet and npm, validates them against a specific maturity environment, and provides clear reporting on the health of that constellation might address your concerns in a more flexible and robust manner. While you could apply the Git approach to such a system, what would it add? It would likely complicate the incorporation of software not under your direct source control, such as external packages, off-the-shelf products, or SaaS services. Overall, this solution seems too idealized for practical application in diverse environments.

TeunSegers

1. Using a Known Good Set (KGS). It's often done outside the VCS by pinning the versions of all the parts.
2. Sometimes we set up an autocommit interval for non-coders. It's not a matter of coding but of education.
3. The main word here is gathering documentation. It's not magic. Also a part of education.

So it's not really about version control, but more about how to be efficient outside version control.
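A sketch of point 1, recording a Known Good Set by pinning exact versions in a manifest that itself lives under version control (component names and versions are illustrative):

```
# Record the exact combination of parts that is known to work together
cat > known-good-set.txt <<'EOF'
service-auth      1.4.2
service-billing   2.0.7
web-frontend      3.1.0
EOF

git add known-good-set.txt
git commit -m "Record known-good set for this release"
```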

polovne

The key to defining the combination of versions is a shell repo: a repository that *only* contains subrepositories, and maybe some pipeline definitions. The concept was proposed by Mercurial, and in my experience it is the only way to do the exact linking that subrepos provide and retain your sanity in the long run.
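A sketch of the same idea expressed with Git submodules (repository names and URLs are illustrative):

```
# A "shell repo": a repository whose only contents are pinned references
# to other repositories, plus CI/pipeline config.
git init platform-shell && cd platform-shell
git submodule add https://example.com/org/service-auth.git
git submodule add https://example.com/org/service-billing.git
git commit -m "Pin a known-good combination of services"

# Each commit records the exact SHA of every subrepo, so checking out an
# old commit reproduces the whole combination:
git submodule update --init --recursive
```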

ArneBab

Building incrementally does not allow infinite scaling: the small steps we build require coordination and long-term organization, which incur a fixed cost on every later step. To sidestep that, we try to build newer languages that avoid the small missteps of the old, but they make new mistakes and require relearning how best to solve other problems. So we cannot actually avoid the cost of larger systems. All we can do is turn it from a linearly rising cost into something more like a logarithmically rising cost. And by getting used to how we do things, we can turn them from a cognitive load into a habit that has a much lower cost, but is much harder to change.

ArneBab

I agree with every point except the last part, where you talked about AI prompts not being able to generate iterative improvements. The issue in that scenario is not the tool, it's the user. An engineer needs to have prompt-engineering skills. Once you begin a conversation, you need to be able to work with the AI model to make small improvements. Using the tool in the same way that humans naturally program by themselves produces the best outcomes, whereas using the tool to create everything from scratch in one go is asking too much of it.

yrtepgold

To make things work together, I think it helps to consider what you are versioning as an ENVIRONMENT: everything you need to make the "system" work, not just code, but also your Jenkins and Ansible scripts. Use a branch to build an environment from dark, and use tags to provide version control. The trunk then becomes the complete serialized image of your production environment. "No junk on the trunk".
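A rough sketch of what such an environment repo might contain, and how tags mark the production image (layout and tag name are illustrative):

```
# Illustrative layout of an "environment" repo:
#   app/        application source
#   jenkins/    pipeline definitions
#   ansible/    provisioning playbooks
#   config/     per-environment configuration

# Tag the exact state that is running in production
git tag -a prod-2024-06-18 -m "Serialized image of the production environment"
git push origin prod-2024-06-18
```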

markrosenthal