Sorting Algorithms: Quicksort

Developed in 1960 by Tony Hoare, Quicksort is a divide-and-conquer sorting algorithm. Quicksort is shown here sorting arrays of 25, 100, & 5000 elements.

"CGI Snake" written & performed by Chris Zabriskie
Comments

Usually watching the algorithm run on a smaller set at slower speed helps me understand how it works better, but in this case the slowed-down small set confuses me and I understand it best by watching the algorithm run through a larger set of data. Interesting video as always, though.

ewhiteman

Ironically, this sort actually has a really nasty worst-case runtime: if it gets unlucky with pivot choices then it takes O(n^2) operations. Theoretically you can use a system of approximate medians or maybe a heap (taking O(n) time at each level) to choose a pivot guaranteed to be good enough to get O(n log(n)) time, but that destroys the good constant factors this algorithm is prized for.

An optimisation you are missing here is to partition by working inward from the ends rather than from the left.
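Both ideas in this comment (a cheap pivot heuristic and the inward-scanning partition) can be sketched together. The following Python sketch uses median-of-three rather than a true approximate-median scheme, so it does not guarantee O(n log n), but it avoids the sorted-input worst case; all names are illustrative:

```python
def hoare_partition(a, lo, hi):
    """Scan inward from both ends, swapping out-of-place pairs."""
    # Median-of-three pivot: a cheap stand-in for the approximate-median
    # schemes mentioned above (no guaranteed O(n log n) bound).
    pivot = sorted((a[lo], a[(lo + hi) // 2], a[hi]))[1]
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while a[i] < pivot:      # left scan moves right...
            i += 1
        j -= 1
        while a[j] > pivot:      # ...right scan moves left
            j -= 1
        if i >= j:
            return j             # scans crossed: partition point found
        a[i], a[j] = a[j], a[i]  # each element swapped at most once per call

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = hoare_partition(a, lo, hi)
        quicksort(a, lo, p)
        quicksort(a, p + 1, hi)
```

Because both scans stop on elements equal to the pivot, each misplaced pair is exchanged directly rather than shuffled forward repeatedly, which is the data-movement saving the comment is pointing at.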

SimonClarkstone

I love how the music is timed to the sort

jacobsiron

I always felt that quicksort was a bit... egotistically named. Like, yeah, it's pretty fast, but in my own experiments, mergesort tends to go a bit faster (even in the best of cases, quicksort can't consistently beat mergesort) and the variant of comb sort I went with (gaps shrink by a factor of 10/13, switch to insertion sort once gap becomes 1) typically beat out both of them. Quicksort really should've been named pivotsort, as that's the real defining concept of the algorithm.
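The comb-sort variant described here (gaps shrinking by a factor of 10/13, handing off to insertion sort once the gap reaches 1) might look roughly like this in Python; the details are a guess at the commenter's setup and the names are mine:

```python
def comb_insertion_sort(a):
    """Comb sort with gaps shrinking by 10/13, finished by insertion sort."""
    gap = len(a)
    while gap > 1:
        gap = max(1, gap * 10 // 13)  # the 10/13 shrink factor
        if gap == 1:
            break                     # hand off to insertion sort
        for i in range(len(a) - gap):
            if a[i] > a[i + gap]:
                a[i], a[i + gap] = a[i + gap], a[i]
    # Final pass: the list is nearly sorted, so insertion sort is cheap.
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
```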

Many will tell you of quicksort's terrible worst-case performance, where it starts to resemble bubble sort in how poorly it scales to large lists, but they might not tell you that this (highly unlikely if using random pivots, but still possible) scenario has another risk: crashing the program. I was using the first element as the pivot and decided to test how my various algorithms handled already-sorted lists. I gave it a list of size 1,000,000 and the program crashed. The reason was quicksort: since it picked the lowest element every time, it was creating a new call on the stack for every single element and wouldn't complete any of them until it reached the end, a stack of 1,000,000 nested method calls. Obviously, it crashed long before it got there. Previously, I had considered random pivots an unnecessary extra step, but now my personal algorithm picks a random pivot. Mergesort has no such worry despite also being recursive, since it always subdivides lists into halves and does not rely on any luck that might make one half significantly smaller than the other.
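One way to avoid both failure modes described here (quadratic time on sorted input and the blown call stack) is to pick a random pivot and then recurse only into the smaller partition, looping over the larger one. That caps the stack depth at O(log n) even with unlucky pivots. A Python sketch, with a Lomuto-style partition and illustrative names:

```python
import random

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        # Random pivot: avoids the O(n^2) behaviour on already-sorted input.
        p = random.randint(lo, hi)
        a[p], a[hi] = a[hi], a[p]
        pivot, i = a[hi], lo
        for j in range(lo, hi):          # Lomuto partition
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        # Recurse into the smaller half, loop on the larger one,
        # so the call stack stays O(log n) even in the worst case.
        if i - lo < hi - i:
            quicksort(a, lo, i - 1)
            lo = i + 1
        else:
            quicksort(a, i + 1, hi)
            hi = i - 1
```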

In other words, yes, quicksort largely lives up to its name, but it is something of a high-risk, high-reward algorithm. Other algorithms might be slower, but they might be more consistent and might have other upsides that are necessary depending on use-case - sometimes, speed isn't everything when sorting.

Oh, yeah, and my first attempt at writing quicksort went poorly. I tried to go with left-right pointers, but something went wrong and I couldn't seem to figure out why it wasn't working. When I realized that left-left pointers were a valid and simple direction to go, I shifted focus and was able to quickly put together a working algorithm. Apparently, left-right pointers are more efficient, but I'm not sure I want to jump back into that strategy again.
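For reference, the "left-left pointers" scheme (usually called the Lomuto partition: both indices move left to right) can be sketched like this in Python; the names are illustrative:

```python
def lomuto_partition(a, lo, hi):
    """'Left-left pointers': both indices scan left to right."""
    pivot = a[hi]            # last element as the pivot
    i = lo                   # boundary of the "< pivot" region
    for j in range(lo, hi):  # j is the scanning pointer
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]  # drop the pivot between the regions
    return i                   # pivot's final position
```

It is simpler than the left-right scheme but can move a large element several times before it settles, which is the efficiency gap the commenter mentions.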

SuperfieldCrUn

At 3:31, when it "cuts off the 7th rectangle", you can see what it means to get a *bad* *median*: splitting the task into two smaller tasks gives a very unequal pair.

DiamondSane

This is not the most natural way of coding Quicksort. It appears to be doing more data movement than necessary. It should scan backwards from the end of the range to find an element smaller than the pivot to exchange with the larger element that it found in the first part of the range. Instead it's scanning forwards with both scans, which means that it may end up moving some large elements more than once before they land in the proper partition. This is a bug.

You can see this effect most clearly in the 5000-item sort beginning at 3:00 in the video, where the partition slowly grows from left to right as elements are removed from the lower right and upper left and added to the upper right and lower left. The start of the lower-left region stays the same, of course, so none of the smaller elements need to be moved once partitioned. But the left boundary of the upper-right region doesn't stay the same; it moves along with the scan of lower elements, which means that elements initially partitioned into the upper region are constantly having to be shifted farther to the right as the upper region boundary moves.

dlwatib

I realize this is a rather old video, but since I care for the channel (parts of it, rather), here are my two pence: I would have wished for some more in-depth explanation after each run-through. A quick highlighting of something that is easy to spot at each sample size, as in this case, for example, the characteristic "spikes" in which the elements are sorted backwards.

klobiforpresident

Russian Sort Halves Visual Danilin

AndreyDanilin

Can you show me where the 'divide' happens? Is this a matter of taking an array of n elements and splitting it into m segments, then sorting each segment individually? I'm not seeing that happening in the video and would love some insight.
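For what it's worth, the "divide" in quicksort is the partition step itself: each call splits the current range into the elements below and above a pivot, and recursion handles the rest; the array is never pre-split into fixed-size segments. A minimal Python sketch (illustrative, not the video's exact code):

```python
def quicksort(a):
    """Return a sorted copy of a."""
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]
    smaller = [x for x in a if x < pivot]   # the "divide": one subproblem...
    equal   = [x for x in a if x == pivot]
    larger  = [x for x in a if x > pivot]   # ...and the other
    # "Conquer": sort each side recursively, then concatenate.
    return quicksort(smaller) + equal + quicksort(larger)
```

In the in-place version shown in the video the split is implicit: the partition rearranges one range around the pivot, and the algorithm then recurses into the two halves of that same array.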

baconology