Memorize this Array Time Complexity Cheat Sheet!

dynamic programming, leetcode, coding interview question, data structures, data structures and algorithms, faang
Comments

Master Data Structures & Algorithms For FREE at AlgoMap.io!

GregHogg

If you can increase the size, that is a list, not an array. Arrays are of fixed size. I hate to be that guy, but it's important to get your terminology right when you're trying to impress people at a job interview.
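A small illustration of the distinction (using a NumPy array as a stand-in for a fixed-size array; that choice is my assumption, not something from the video):

import numpy as np

arr = np.array([1, 2, 3])
bigger = np.append(arr, 4)                   # "growing" allocates a brand-new array
print(arr is bigger, arr.size, bigger.size)  # False 3 4

lst = [1, 2, 3]
lst.append(4)                                # the same list object simply grows
print(len(lst))                              # 4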

bobmechobob

If the order of elements doesn't matter, then removal of an element can be constant time. Simply keep track of the last element in the array, and when you need to remove the element at position i, copy the last element into position i and then decrement the index that tracks the last element.
If order does matter, then you should still move from back to front, since once you've reached the element to be deleted you don't need to iterate through the rest of the array (this doesn't influence the Big-O value, but remember Big-O doesn't describe how fast an algorithm is, it describes how it scales. There's a difference).
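A minimal Python sketch of that swap-remove idea (the function name is just illustrative):

def swap_remove(arr, i):
    arr[i] = arr[-1]   # copy the last element into the hole
    arr.pop()          # drop the now-duplicated last element, O(1)
    return arr

print(swap_remove([10, 20, 30, 40, 50], 1))   # [10, 50, 30, 40]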

ToBeardOrNotToBeard

If you want to sort an array with very few keys compared to elements, you can use counting sort. If you have millions of identical decks totalling N cards, sorting them is in O(N). The algorithm is trivial:
1. Set up 54 buckets, one for each combination of suit and value
2. Put each card in the corresponding bucket
3. Draw each newly sorted deck by pulling cards from the buckets in order. It doesn't matter if you mix cards from one deck to another, since they are interchangeable.

Of course this doesn't count as a sorting algorithm according to the traditional definition, where your only tool is an oracle that tells you which of two elements is greater than the other. You don't get to know in advance the sorted order of every possible key (or you do, but you'd rather not buy terabytes of RAM).
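A rough Python sketch of that bucket idea, assuming each card is a (suit, value) tuple drawn from a fixed 54-key universe (52 cards plus 2 jokers):

from collections import Counter
import random

SUITS = ["clubs", "diamonds", "hearts", "spades"]
KEYS = [(s, v) for s in SUITS for v in range(1, 14)] + [("joker", 1), ("joker", 2)]

def counting_sort(cards):
    counts = Counter(cards)                              # one O(N) pass over the cards
    return [k for k in KEYS for _ in range(counts[k])]   # emit in key order, O(N + 54)

deck = KEYS * 3            # three identical decks
random.shuffle(deck)
sorted_deck = counting_sort(deck)   # 162 cards back in suit/value order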

marc-andreservant

"inserting at random index is O(n)" while in theory true in praxis it's often O(1) since your n in praxis will rarely exceed a value where a memcpy is more than a single operation. Note: if you intend to point out now that O(n) is for arbitrary large values of n and not just small values of n, then yes, you're theoretically correct, but then almost none of the algorithms actually match the description since computers can't actually handle arbitrarily large n's.

TheApeiros

This was helpful, so I dropped a like! Thank you!!

SherlockMoes

Is understanding how each data structure works that hard? There are only like 4 in total: array, deque (stack/queue), heap (priority queue), binary tree (set/map).
Edit: Forgot linked list.

quanghieulee

Cool. But can't some of this stuff be done in parallel? For example, copying seems to be O(1) for the first few tens of millions of elements on a GPU. Does this time complexity stuff assume you can only do one thing at a time?

import torch
import time
import matplotlib.pyplot as plt

q = []
for i in range(1, 100_000_001, 1_000_000):   # step size assumed
    a = torch.arange(i, device='cuda')
    t0 = time.time()
    a.clone()
    t1 = time.time()
    q.append(t1 - t0)

plt.scatter(range(len(q)), q)
plt.show()

mk-ckor

Can't we just move elements in parallel, in which case it's constant time?

joefuentes

I don't feel this should be "memorised"; it should come naturally with practice.

pratikdey

Hey Greg, are there any resources you would recommend for DSA in Python?

brofist

Ah yes, a complexity cheat sheet for the very most basic operations on one of the most basic data structures.

spacedoggo

Man, I just started Java... that doesn't mean shiet to me... is that SOLID? I've heard of that... but heck, that's advanced.

svetllinganev

Just because it's true doesn't mean it's sensible to think that way. You should always keep in mind that the worst-case runtime is no less than O(n), which is really slow. So always just use a linked list structure when you don't need immediate access to a specific element.

me.isEmpty

So I have started doing both DSA (LeetCode) and CP on Codeforces. I'm using C++ for CP. What I've observed is that most people do DSA in Python. Should I do DSA in C++ or Python?

HarshanJeevanantham

A good tip: if you ever have to add multiple items at the start of an array, it can often be faster to reverse the array, append everything at the end, and reverse it back.
That's because when you add to the start of the array, you have to shift every item after it for each addition.
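A quick sketch of that trick with a Python list (function name is just illustrative): one reverse, k appends, and one more reverse is O(n + k), versus O(n * k) for k separate inserts at the front.

def prepend_many(arr, items):
    arr.reverse()                  # O(n)
    arr.extend(reversed(items))    # k appends, amortized O(1) each
    arr.reverse()                  # O(n + k)
    return arr

print(prepend_many([4, 5, 6], [1, 2, 3]))   # [1, 2, 3, 4, 5, 6]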

shapelessed

Just keep in mind that a list in Python is not an array.

NoProblem

Adding an element to the end of an array can force the array to grow, which requires copying all existing elements. With a naive implementation that grows by one slot at a time, every append is O(n). With the usual doubling strategy the copying cost is amortized, so appending is O(1) amortized, even though a single append that triggers a resize is still O(n).
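To see where the amortization comes from, here is a small sketch that counts element copies under an assumed capacity-doubling policy:

def copies_for_appends(n):
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:   # full: double the capacity, copying every element
            copies += size
            capacity *= 2
        size += 1
    return copies

for n in (10, 1_000, 1_000_000):
    print(n, copies_for_appends(n), round(copies_for_appends(n) / n, 2))
# total copies stay below 2n, so the average cost per append is O(1)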

Pilikio

You shouldn't memorize; understand instead!

tanmang