The Only Type of Testing You Need



Hello, everybody. I'm Nick, and in this video, I will introduce you to my new favourite testing technique, snapshot testing, using the NuGet package Verify.
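As a rough illustration of the technique (the type and values here are invented, not taken from the video), a minimal snapshot test with Verify and xUnit might look like this:

```csharp
using System.Threading.Tasks;
using VerifyXunit;
using Xunit;

public class CustomerTests
{
    [Fact]
    public Task VerifyCustomerShape()
    {
        // Anything serializable can be snapshotted. On the first run Verify
        // writes a *.received.txt file; once you approve it as
        // *.verified.txt, later runs diff the output against that snapshot.
        var customer = new { Name = "Nick", Country = "GR" };
        return Verifier.Verify(customer);
    }
}
```

Note that older versions of VerifyXunit also require a [UsesVerify] attribute on the test class; newer versions wire this up automatically.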

Don't forget to comment, like and subscribe :)


#csharp #dotnet
Comments

I remember using something similar a while back when we had a massive number of APIs to migrate from a legacy version to a "modern" new version; we used this to ensure that our migration didn't break any APIs.

rimailias

I find snapshot testing to be incredible for integration tests, as you probably want more coverage of what has changed. For unit tests, I find that snapshot tests lose context compared to traditional assertions, as those assertions can provide greater meaning as to why the logic is the way it is (e.g. vs "countryCode": "y").

haydensprogramming

In my opinion this does stand in conflict with TDD when used for unit testing. In TDD the test is written before the implementation, so when using this library the text file with the correct content has to be written from scratch, which is not that convenient.

gneaknet

I was under the impression Nick already had a video about this and spent an hour looking for it like a month ago! Typically I'd maybe agree with the other comments - it's a bit of overkill. But when you face a situation where you need it - it's amazing.

ricardsiliuk

Is there no way to set the _verifySettings globally or per test project? Feels busy having to pass it on every call.

subbyraccoon
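For what it's worth, Verify does expose assembly-wide configuration through its static VerifierSettings class, typically wired up in a module initializer so nothing needs to be passed per call. A hedged sketch (the scrubbed text is just an example; check the Verify docs for the exact API in your version):

```csharp
using System.Runtime.CompilerServices;
using VerifyTests;

public static class VerifyGlobalSetup
{
    // Runs once per test assembly, before any test executes (.NET 5+).
    [ModuleInitializer]
    public static void Init()
    {
        // Applies to every Verify() call in the assembly, so the
        // settings don't have to be repeated on each test.
        VerifierSettings.ScrubLinesContaining("Timestamp");
    }
}
```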

I feel like I'm missing something about this approach. The feeling I'm getting is that what this package does, at a high level, is just move what you are asserting out of the source code and into a text file. This package feels like something I should find useful but, after watching the video a couple of times, I'm not seeing the problem that it solves. What could I be missing?

logank.

Years ago I built a "golden-ticket" concept in my testing, with a json snapshot of a successfully generated/saved record. Keeping track of the golden-tickets became an accounting headache. I love this package to handle that dirty work for me. Great demonstration, Nick! Downloading now.

michaelwinick

To me it feels like you only moved the expected result from code to a file.
Are there any other differences/advantages?

SinaSoltani-tfzo

Snapshot tests are great for APIs to ensure no breaking changes.

marikselazemaj

Super helpful for verifying images. Just need to add a threshold to pass on insignificant differences like timestamps, etc.

sergeyborisov

This package is also absolutely amazing for testing source generators, as well as EF Core generated queries. The first one is self-explanatory, I think, but the latter can also have big value - namely, you can keep updating EF Core and not be afraid of unexpected query changes that might need closer retesting. Or you can easily get a generated query for review and acceptance by a DBA.

Personally I wouldn't ditch all "traditional" unit tests in favor of snapshot testing, nor all assertions in favor of Verify, but there is a big space for that type of testing within the framework for sure.

DemoBytom
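The EF Core point above can be sketched roughly as follows; AppDbContext and Customer are invented placeholders, while ToQueryString() is the real EF Core 5+ API for rendering the SQL a query would execute:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using VerifyXunit;
using Xunit;

public class QueryTests
{
    [Fact]
    public Task GeneratedSqlIsStable()
    {
        // AppDbContext and Customer stand in for your own model.
        using var db = new AppDbContext();
        var query = db.Customers
                      .Where(c => c.IsActive)
                      .OrderBy(c => c.Name);

        // ToQueryString() returns the SQL EF Core would run; snapshotting
        // it surfaces any query change after an EF Core upgrade, and the
        // verified file doubles as an artifact a DBA can review.
        return Verifier.Verify(query.ToQueryString());
    }
}
```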

Seems like a solution to a problem that doesn't exist, and one that makes the test less readable.

greghanson

I once created a PR for TypeScript, with some signature changes to the type definitions of certain array methods, and that part relied heavily on snapshot testing. It made me experiment with snapshot testing in other situations. While I wouldn't use it for unit testing, I've tried using it for integration testing with success, as well as for regression testing database schema migrations.

I even created my own little testing tool for testing migrations for SQL Server, where it would retrieve the schema information from the test database, store the schema in the "verified" file (one for each migration step), and upon each test of an up- or down-migration, it would compare the retrieved schema with the verified one, thus ensuring no non-verified changes had snuck into the database migration scripts.

impero

Snapshot tests are great, but they have a specific use case. They are great for testing the boundary interfaces of your system. Think API responses, stdout on console apps, etc. They help both to ensure breaking changes are recorded in the commit diff, and that the system is functionally identical from the user's perspective. They should be used for integration or end-to-end tests.

DryBones

There are certainly specific good use cases for this type of test, especially for verifying message bodies, object properties, generated output, etc.
But I do not agree that this is the only type of test we need. That is just an over-exaggerated title.
Also, I'm not sure whether it is something in your test setup that makes it that slow (even beyond the first run), or the library serializing the object into the file.

nickwong

This would be very useful for writing characterization tests, since you often don't know what the output will be until after you run the test for the first time.

MajeureX

This seems useful for things like testing that your API doesn't have breaking changes. It does, however, assume you manually check the received file to confirm the data you received is expected; I can imagine developers making mistakes there, especially if the files are big and complex. The advantage of writing unit tests yourself is that you actually think about the expected result and assert your code against that assumption. In that sense it shares some similarities with using AI to write tests. Sure, it speeds up the process, but if you generate a test (or a received.txt file in the case of Verify) using wrong code as input, the test is worthless. It definitely has some utility, but I'd be careful where I'd use it.

bartvankeersop

Another very underrated way of testing is property based testing.

Basically, instead of testing that a given input has a given result, check that, regardless of input, the result conforms to general expectations.
For example, if I wrote a sorting algorithm, instead of doing this (simplified):

var input = new[] { 8, 0, -20, 400 };
var result = Sort(input);
Assert.Equal(new[] { -20, 0, 8, 400 }, result);

Instead, I could test that the sorted result has the same number of elements, or that every element in the input is in the result, or that the elements are actually in ascending order.
These are all necessary properties of a working sorting algorithm.
It's basically identifying the properties of whatever you're writing and then writing down those expectations as executable checks. Then you can generate as much random input as you want and verify that those properties hold.

The advantages are that it's much less code and it more clearly communicates intentions. It's also more resilient to implementation changes.
This approach does have some requirements, like your input needing to be pure data, so that it can be generated automatically and procedurally, and the functionality being pure.
However, I would want my code to look like that anyway, regardless of whether I used this testing approach or not.

mkwpaul
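The property-based idea described above can be hand-rolled without a library (frameworks such as FsCheck automate the input generation and shrinking); a minimal self-contained sketch, with LINQ's OrderBy standing in for the sort under test:

```csharp
using System;
using System.Linq;

public static class SortProperties
{
    static readonly Random Rng = new Random(42);

    // The implementation under test; here simply delegating to LINQ.
    static int[] Sort(int[] input) => input.OrderBy(x => x).ToArray();

    public static void Main()
    {
        for (var i = 0; i < 1000; i++)
        {
            // Generate a random-length array of random values.
            var input = Enumerable.Range(0, Rng.Next(0, 50))
                                  .Select(_ => Rng.Next(-1000, 1000))
                                  .ToArray();
            var result = Sort(input);

            // Property 1: sorting preserves length.
            if (result.Length != input.Length)
                throw new Exception("length changed");
            // Property 2: every input value appears in the result.
            if (input.Except(result).Any())
                throw new Exception("element lost");
            // Property 3: the result is in ascending order.
            for (var j = 1; j < result.Length; j++)
                if (result[j - 1] > result[j])
                    throw new Exception("not sorted");
        }
        Console.WriteLine("all properties held");
    }
}
```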

Interesting package, but I'm not sure how we would implement it. How would we go about using it with Faker or other random data generators in builders? We use them extensively to differentiate data in lists easily (without having to explicitly define everything). I don't really want to just move the assertion logic into a Verify configuration.

gabrielgm

This is the best new thing I have come across in quite some time. Thanks Nick.

tarquin