Intro to Algorithms: Crash Course Computer Science #13

Algorithms are the sets of steps necessary to complete a computation, and they are at the heart of what our devices actually do. This isn't a new concept: since the development of math itself, algorithms have been needed to help us complete tasks more efficiently. Today we're going to take a look at a couple of modern computing problems, like sorting and graph search, and show how we've made them more efficient so you can more easily find cheap airfare or map directions to Winterfell... or, like, a restaurant or something.

CORRECTION:
In the pseudocode for selection sort at 3:09, this line:
swap array items at index and smallest
should be:
swap array items at i and smallest
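
For anyone who wants to play with it, here is a rough Python rendering of the corrected pseudocode. The variable names mirror the on-screen code (i for the outer pass, index for the inner scan), but this is a sketch of the idea rather than the exact pseudocode from the video:

def selection_sort(array):
    # Sort in place by repeatedly moving the smallest remaining item forward.
    n = len(array)
    for i in range(n - 1):
        smallest = i  # assume position i already holds the smallest item
        for index in range(i + 1, n):
            if array[index] < array[smallest]:
                smallest = index
        # Corrected line: swap array items at i and smallest.
        array[i], array[smallest] = array[smallest], array[i]
    return array

print(selection_sort([5, 3, 1, 4, 2]))  # [1, 2, 3, 4, 5]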

Want to know more about Carrie Anne?

Want to find Crash Course elsewhere on the internet?

Comments

3:53 "N Squared is not particularly efficient."

There's no need to get personal...

CaptNSquared

I can't believe this is free to watch and learn from online. You guys are doing great things. I definitely want to be a computer scientist now

wheezyair

this stuff is so damn interesting to me...
but it makes me want to smash my face through a brick wall.

xMaverickFPS

In school I had to write a bubble sort in 16-bit Intel ASM. It worked, was efficient, and was well documented. It got me an A+. Even like 15 years later, I am so damn proud of that piece of code. :-D

nohero

This series is a great addition to learning computer science, i.e. learning syntax, logic, and algorithms.
It gives context to everything, and besides, it's fascinating.

SilverMiraii

The good thing about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do.
~Ted Nelson

anandananda

Odds of a developer being asked to write an algorithm as part of a coding interview: 75%. Odds of that same developer ever writing an algorithm in their job, as opposed to reusing a system library: 10%.

cholten

"I laugh at your puny algorithms!" says Littlefinger as he transports instantly from Highgarden to Winterfell.

unvergebeneid

This video was incredibly well done! This lady is really great at teaching material in a clear, easy-to-follow manner.

pearlsswine

Small bug in the pseudocode for selection sort at 3:11: the third line from the bottom should say "swap array items at i and smallest".

(Currently it says to swap items at index and smallest. Since index is at the end of the array whenever that line is executed, because it comes after the inner for-loop, this would swap the last and smallest elements rather than putting the smallest element in its correct position.)
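
To see why that matters, here is a tiny Python sketch of the buggy version (my own illustration, not the on-screen pseudocode); with the swap done at index instead of i, even a small example never gets sorted:

def buggy_selection_sort(array):
    n = len(array)
    for i in range(n - 1):
        smallest = i
        for index in range(i + 1, n):
            if array[index] < array[smallest]:
                smallest = index
        # Bug: after the inner loop, index is stuck at the last position,
        # so this swaps the last and smallest items instead of fixing slot i.
        array[index], array[smallest] = array[smallest], array[index]
    return array

print(buggy_selection_sort([5, 3, 1, 4, 2]))  # [5, 3, 2, 4, 1], not sorted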

paruby

I learn much more in 7 minutes here than in 3 hours at university.

charntechakraisri

I'm amazed at how much ground you're able to cover in less than 12 minutes.
To be fair, if you haven't heard most of this before, you would probably need the video to be twice as long to get anything from it, but even then I would be impressed with how much was stuffed in there.
Keep up the good work!

danielgronbjerg

Just a few things to note about the "big-oh" notation discussed in this episode:

In industry, "big-oh" notation alone is what is seen when discussing algorithms. However, in academia, it is a bit more in depth. O(n) is instead used to represent an upperbound (in the WORST CASE, what is this problem or algorithm). Ω(n) (pronounced "big-omega") is used to represent the lowerbound (in the BEST CASE, what is this problem or algorithm). Θ(n), prounounced "theta" (without the "big") is used to represent the tightbound (used when the big-oh is equal to the big-omega, which is the exact running time). There is also small omega and small o, but those are rarely used.

Also, we only care about the biggest polynomial when we use this notation. So if a problem takes 5n^4 + 3n^2 + 1 to do, we just say it is Θ(n^4). We drop the coefficient and smaller terms because we only care about what happens when n is really really big.

You may notice that earlier I mentioned algorithm or problem when describing the notation. This is because the it is often used in academia to denote all of the algorithms that exist for a problem. For example, for matrix multiplication of an n x n matrix, we have Ω(n^2), since we know that we at least need to read in n^2 values. This problem currently has O(n^2.3728639) which is the running time of an algorithm created by Francois Le Gall in 2014.
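
A quick way to see the "drop the coefficient and smaller terms" point is to evaluate that example cost function for growing n. This is just my own illustration in Python, not anything from the video:

def cost(n):
    # The example cost function from above: 5n^4 + 3n^2 + 1.
    return 5 * n**4 + 3 * n**2 + 1

for n in (10, 100, 1000, 10000):
    # The ratio settles toward the constant 5 as n grows, so the n^2 and
    # constant terms stop mattering and the whole thing is Theta(n^4).
    print(n, cost(n) / n**4)
# Output is roughly 5.03, 5.0003, 5.000003, 5.00000003, approaching 5.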

MFMegaZeroX

"I'll CPU Later"

Now I think science _has_ gone too far.

asp-uwu

Did you just say "I'll CPU later"? GET OUT.

Just kidding, terrible jokes are the mark of great computer scientists.

xXAkirhaXx

Holy crap, homegirl's an algorithm herself. Slow down

hyees

If you're interested in algorithms, you might like the channel Computerphile too.

sjwimmel

Well, I guess I'm going to have to learn maths now.

tellingfoxtales

Thank you for mentioning al-Khwarizmi, because a lot of us don't know him but we admire him

abdinasirawil

Easily the best explanation of Big O I've seen... better than my professor explained it last semester.

petershort