Faster database indexes (straight from the docs)

——————————————————
00:00 MySQL documentation
00:58 Addresses table
02:17 Hash columns
04:13 concat vs concat_ws
05:40 Generated columns
07:31 The UNHEX function
09:34 Searching via hash
11:22 MD5 hash collisions
12:46 Functional indexes
——————————————————
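
The chapters above walk through one technique end to end: concatenate a few columns, MD5 the result into a fixed-width hash, keep that hash in a generated column, index it, and search through the hash with a full column comparison as a collision guard. Here is a rough sketch of that flow; the table and column names are hypothetical, invented for illustration rather than taken from the video:

```sql
-- Hypothetical addresses table; column names are illustrative only.
CREATE TABLE addresses (
  id            BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  primary_line  VARCHAR(255) NOT NULL,
  city          VARCHAR(100) NOT NULL,
  state         VARCHAR(50)  NOT NULL,
  postal_code   VARCHAR(20)  NOT NULL,

  -- Generated column: MD5 of the concatenated parts, stored as 16 raw bytes via UNHEX.
  -- CONCAT_WS keeps a separator between parts so ('12', '345') and ('123', '45')
  -- don't produce the same input string.
  address_hash  BINARY(16)
    GENERATED ALWAYS AS (UNHEX(MD5(CONCAT_WS('|', primary_line, city, state, postal_code)))) STORED,

  KEY idx_address_hash (address_hash)
);

-- Searching via the hash: recompute the hash for the lookup values,
-- then re-check the real columns in case of an MD5 collision.
SELECT *
FROM addresses
WHERE address_hash = UNHEX(MD5(CONCAT_WS('|', '123 Main St', 'Springfield', 'IL', '62704')))
  AND primary_line = '123 Main St'
  AND city = 'Springfield'
  AND state = 'IL'
  AND postal_code = '62704';

-- On MySQL 8.0.13+ a functional index over the same expression is an
-- alternative to the explicit generated column:
-- CREATE INDEX idx_address_hash_fn ON addresses
--   ((UNHEX(MD5(CONCAT_WS('|', primary_line, city, state, postal_code)))));
```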

💬 Follow PlanetScale on social media
Comments

As a student who has taken a DBMS course in MySQL and hated every part of it, I can absolutely say I would have loved it had it been even a bit like this video and series in general. Thank you Aaron, this content is genuinely incredible and is making me want to actually learn the details of MySQL.

aegif

I don’t usually post on videos, but the quality and the details on these are amazing! Thank you Aaron! Keep them coming!

remedix

I appreciate you talking about collisions, because the one case where it does happen would be a nightmare if you didn't plan for it, no matter how small the chance may be.

transcendtient

I'm coming from your PlanetScale MySQL database course and just gotta say - it's really amazing. Just the perfect level of depth, engineering knowledge and ease of use. I've mostly been the ML engineer guy throughout my career; now I'm switching to a full-stack role, and your course is really amazing for delving into database-optimization shenanigans :) Thank you very much!

alexbalandi

I've been using MySQL for ages now, and yet you still come up with new things and ideas (like this one, and the Geo box combined with the haversine calculation). Keep it up!

ayeshk

"remember if you hash passwords with md5, straight to jail" the delivery got me to laugh out loud, take your like.

jhechtf

These videos are really high quality. You are doing an excellent job

jimothyus

Lossy compound indexes seem like such a good idea that they should become a standard built-in feature at some point, e.g. a particular index type. You could then have index types like Postgres' GiST that handle the deduping automatically.

simonhartley

As a developer who always uses MySQL, I can say you really did a great job explaining this topic. Thank you very much

ihzakarunia

I haven't learned such a useful trick from a youtube video in a long time, thx

gazsi

I saw this video and it literally solved a problem I had at work the next day. Stellar job Aaron!

mibennettc

As someone who likes reading the MySQL docs, printing them out and binding them is hardcore 😂 props Aaron 🎉

thereasonableprogrammer

You are awesome, man :) I am self-learning web development and you are such a nice source for wholesome and humorous learning!

anderskozuch

Excellent Aaron! This technique can also be used to create a cache key to store data in memory (or on disk), if you need to "roll-your-own" caching mechanism. Basically, sort/uppercase your arguments, then MD5 them together, and then use a hash map to map that MD5 to the response from the server. We do that with address lookups so we aren't constantly going to Google for geocode data (and eventually getting charged). Once we see an address, we cache Google's geocode response and we never have to do the lookup again, thus saving money down the road.

rickmyers
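
The caching idea in the comment above maps onto the same MySQL building blocks as the video's technique. A minimal sketch, assuming a hypothetical geocode_cache table (all names and values here are invented for illustration): normalize the arguments, MD5 them into a fixed-width key, check the cache before calling the external geocoder, and store the response on a miss.

```sql
-- Hypothetical cache table; names and values are placeholders.
CREATE TABLE geocode_cache (
  cache_key  BINARY(16) PRIMARY KEY,                        -- MD5 of the normalized arguments
  response   JSON NOT NULL,                                 -- geocoder response, stored verbatim
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Build the key the same way every time: uppercase and order the arguments
-- consistently before hashing, so 'main st' and 'MAIN ST' share one entry.
SET @cache_key = UNHEX(MD5(UPPER(CONCAT_WS('|', '123 main st', 'springfield', 'il', '62704'))));

-- Check the cache first...
SELECT response FROM geocode_cache WHERE cache_key = @cache_key;

-- ...and only on a miss call the external geocoder, then store its response
-- (the JSON below is a placeholder, not real geocode data).
INSERT INTO geocode_cache (cache_key, response)
VALUES (@cache_key, '{"lat": 0.0, "lng": 0.0}')
ON DUPLICATE KEY UPDATE response = VALUES(response);
```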

My man, this is very great, and the niche you chose to post this video in particular is very great, the quality and everything. I hope you make this a series, as it will help everyone a lot ❤

saadkhan

As someone who is just getting started with databases, this is incredible! Thank you A Aron

StingSting

There are some pitfalls when using concatenated rows as the key, especially when there is no separator, but also when columns are nullable.
What I'm thinking of is a row with column values 123 and 45 and another row with values 12 and 345: both end up with the same concatenated value (when using plain CONCAT), and therefore the same hash. You can still end up in a similar situation with CONCAT_WS when some of the columns are nullable. We saw that null values are completely omitted, so two consecutive nullable columns could produce the same hash when a value is present in either one or the other. There are even more ways that nulls in some columns can result in the same input to the hash calculation. Converting nulls to an empty string would keep a separator for every column and solve that problem; a null would then hash the same as an empty string (or whatever other string you used to represent null), but it's the best solution I can think of.
That's why I think it's really critical to do the full comparisons described at 11:50.

MonokelJohn
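
The pitfalls in the comment above are easy to reproduce directly in MySQL. A few illustrative one-liners (values made up) showing why a bare CONCAT, and CONCAT_WS over nullable columns, can feed identical strings to the hash:

```sql
-- Without a separator, different column values concatenate to the same string.
SELECT CONCAT('123', '45') = CONCAT('12', '345');                        -- 1: both are '12345'

-- With a separator, the two inputs stay distinct.
SELECT CONCAT_WS('|', '123', '45') = CONCAT_WS('|', '12', '345');        -- 0: '123|45' vs '12|345'

-- But CONCAT_WS skips NULLs entirely, so two nullable columns can still collide.
SELECT CONCAT_WS('|', 'a', NULL, 'b') = CONCAT_WS('|', 'a', 'b', NULL);  -- 1: both are 'a|b'

-- Coalescing NULL to an empty string keeps every separator in place,
-- at the cost of NULL and '' hashing identically; hence the full
-- column comparison at lookup time.
SELECT CONCAT_WS('|', 'a', COALESCE(NULL, ''), 'b');                     -- 'a||b'
```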

Super. I remember dozens of use cases where this implementation would have saved the day.

Mika-se

Thank you for talking about collisions, I was wondering about it even though I knew it wasn't really an issue!

andyvirus

I don't even use MySQL, yet I was pleasantly entertained by your video (I believe some of the knowledge is applicable to other databases as well).

Thank you, keep up the good vids 💪🏻

budidarmawan