What is the everyday list, really?

This video has a page on 0DE5 with exercises and resources

Chapters
00:00 - Intro
00:45 - Arrays
06:59 - Time complexity
14:47 - More arrays
19:01 - Linked lists
27:34 - Dynamic arrays
31:17 - Outro
Comments

12:11 I didn't expect to get jump scared by a theta today lmao. Great video btw

josephx

New year doesn't really start until Kay uploads a banger.

MarianoBustos-if

babe wake up kay just dropped a new video

gluonsmx

Great video as always! A fun fact I was reminded of by Kay's assembly implementation of the linked list: in Lisp, the functions that access the head and the tail of each element of a linked list are called `car` and `cdr`, shorthand for "contents of the address part of register" and "contents of the decrement part of register", fields of a machine word on the IBM 704 that Lisp was first implemented on. An early example of implementation details leaking into an API!

stewartperry

Point of pedantry: you can have zero-cost 1-indexed arrays by having the array pointer point to an imaginary element just before where the array begins, rather than at its start.

mrpocock
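
A minimal C sketch of that trick, with the caveat that forming a pointer one element before an array's start is technically undefined behavior in ISO C, even though on common platforms it compiles to exactly the zero-cost offset described:

```c
#include <stdio.h>

int main(void) {
    int data[5] = {10, 20, 30, 40, 50};

    /* base points at an imaginary element just before data, so
       base[1] is data[0] and indexing is 1-based with no
       per-access subtraction. Strictly speaking, computing
       data - 1 is undefined behavior in ISO C. */
    int *base = data - 1;

    for (int i = 1; i <= 5; i++)
        printf("element %d = %d\n", i, base[i]);
    return 0;
}
```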

your animations are getting really good!!

dannisenson

I forgot to add that your videos are amazing. The visualizations are beautiful and very instructive. A video on Forth in your style could easily be the best Forth video on YT. There aren't many good ones unfortunately.

ArnaldurBjarnason

Looking forward to watching this, thank you for your work and research!

vaolin

Kay rolls out a new video, I hit the like button and watch :)

hanpham

For a way to do array construction in constant time: assuming we are told to put specific values in it, like list = [1, 2, 3], I think you could do the construction lazily and only "really" construct it when we actually have to read or write. For example, you could reserve the memory for the array and use a variable to track how much of the array has actually been constructed. If we read or write a location past what has actually been constructed, we construct up to that point, then do the read/write and update the tracking variable. For the case of a hardcoded array literal, to know what to construct during the read/write, the compiler can make a copy of what the array should look like when constructed and refer to that as needed at runtime.

Vaaaaadim
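
A rough C sketch of that scheme, with hypothetical names (`lazy_array`, `la_read`, `la_write`) invented for illustration; the `initializer` field plays the role of the compile-time copy the comment describes:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    int *data;               /* reserved, but only a prefix is initialized */
    size_t constructed;      /* how many leading elements are real */
    const int *initializer;  /* compile-time image of the literal */
} lazy_array;

/* Copy initializer values up to and including index i, if not done yet. */
static void la_materialize(lazy_array *a, size_t i) {
    if (i >= a->constructed) {
        memcpy(a->data + a->constructed,
               a->initializer + a->constructed,
               (i + 1 - a->constructed) * sizeof(int));
        a->constructed = i + 1;
    }
}

static int la_read(lazy_array *a, size_t i) {
    la_materialize(a, i);
    return a->data[i];
}

static void la_write(lazy_array *a, size_t i, int v) {
    la_materialize(a, i);  /* fill the gap so earlier reads stay valid */
    a->data[i] = v;
}

int main(void) {
    static const int literal[3] = {1, 2, 3};  /* list = [1, 2, 3] */
    lazy_array a = { malloc(3 * sizeof(int)), 0, literal };

    printf("%d\n", la_read(&a, 2));  /* materializes 0..2, prints 3 */
    la_write(&a, 1, 99);
    printf("%d\n", la_read(&a, 1));  /* prints 99 */
    free(a.data);
    return 0;
}
```

Note that this only defers the O(n) work; a single read at the last index still pays for the whole prefix.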

I checked CPython because I was curious; it uses this formula:
`new_allocated = ((size_t)newsize + (newsize >> 3) + 6) & ~(size_t)3;`
which gives sizes: 0, 4, 8, 16, 24, 32, 40, 52, 64, 76.
It seems to multiply by 9/8, add 6, then round down to a multiple of 4.

Isaac-zydo
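
That sequence can be reproduced by simulating one append at a time and applying the formula only when the list outgrows its allocation (a sketch of the growth path only; real CPython also handles shrinking):

```c
#include <stdio.h>

int main(void) {
    size_t allocated = 0, newsize = 0;
    printf("%zu", allocated);
    /* Append elements one at a time; reallocate only on overflow,
       using the formula quoted from CPython's list_resize. */
    for (int i = 0; i < 70; i++) {
        newsize++;
        if (newsize > allocated) {
            allocated = (newsize + (newsize >> 3) + 6) & ~(size_t)3;
            printf(", %zu", allocated);
        }
    }
    printf("\n");  /* 0, 4, 8, 16, 24, 32, 40, 52, 64, 76 */
    return 0;
}
```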

I'd LOVE a video on Forth. It's such an interesting language. Lisp is cool too; those two really feel like a yin-yang type pair.
They're both built from first principles, but Forth starts at the machine while Lisp starts at the lambda calculus, which feel like opposite ends of some spectrum.

ArnaldurBjarnason

22:54 Python does have a prettier way of expressing that now, with pattern matching:

def construct(lst):
    match lst:
        case [head]:
            return head, []
        case [head, *tail]:
            return head, construct(tail)

jamesarthurkimbell

I think constant-time array construction would just be specifying the bounds of an array and the size of each element. This assumes the array is not populated with any useful data, though, which makes the method impractical.

josephwang

Happy Christmas 🎄🎁🎄🎁 and happy New Year 🕛🎊🎊🎊

moormoor

However, the explanation of how the computer/CPU may do it is excellent, great job!

AK-vxdy

Awesome video, thank you! You mentioned it briefly, but I was wondering how it works when, in JavaScript, an "array" is actually a dictionary/hashmap. I hope you can do a video about hash maps in the future (maybe the most common implementation is to just loop through the map and look for the key, but that seems very inefficient).

kernelramdisk
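
On that parenthetical: typical hash maps don't loop through all entries. They hash the key to a bucket index and scan only the few entries that collided there, which is what makes average-case lookup O(1). A toy C sketch of the chaining approach (not how JavaScript engines like V8 actually represent objects, just the basic idea):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 16

typedef struct entry {
    char *key;
    int value;
    struct entry *next;  /* chain of keys whose hashes collided */
} entry;

static entry *buckets[NBUCKETS];

/* djb2-style string hash, reduced to a bucket index. */
static size_t hash(const char *s) {
    size_t h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % NBUCKETS;
}

static void put(const char *key, int value) {
    size_t i = hash(key);
    for (entry *e = buckets[i]; e; e = e->next)
        if (strcmp(e->key, key) == 0) { e->value = value; return; }
    entry *e = malloc(sizeof *e);
    e->key = malloc(strlen(key) + 1);
    strcpy(e->key, key);
    e->value = value;
    e->next = buckets[i];
    buckets[i] = e;
}

/* Lookup touches only one bucket's chain, not every key in the map. */
static int *get(const char *key) {
    for (entry *e = buckets[hash(key)]; e; e = e->next)
        if (strcmp(e->key, key) == 0) return &e->value;
    return NULL;
}

int main(void) {
    put("alpha", 1);
    put("beta", 2);
    printf("beta -> %d\n", *get("beta"));
    return 0;
}
```

Real engines add resizing, stronger hash functions, and hidden-class tricks on top, but the bucket jump is the core reason lookups don't scan the whole map.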

@26:22 This big-Θ notation is theoretically correct, but on modern hardware it is very misleading: it misses the double memory access and the non-contiguity of the elements in memory (you showed the elements as adjacent, like in an array, but that is often not true in real life), and nowadays accessing a non-adjacent memory address is very costly.

AK-vxdy

I've seen a claim that for sufficiently large resize counts (i.e. large, non-preallocated arrays) the golden ratio is the optimal realloc growth factor, which is interesting.

Isaac-zydo
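
A small simulation of the classical argument behind that claim: when growing by factor r, the new block can at best reuse the memory freed by the blocks before the current one, since the current block stays live during the copy. In the limit that reuse is possible exactly when r² ≤ r + 1, i.e. r ≤ φ ≈ 1.618 (under an idealized allocator; real allocators such as jemalloc complicate the picture, as the folly comment below notes):

```c
#include <stdio.h>

int main(void) {
    double factors[] = {1.5, 2.0};
    for (int f = 0; f < 2; f++) {
        double r = factors[f];
        double current = 1.0;  /* size of the live block */
        double freed = 0.0;    /* total size of blocks freed so far */
        int fits = 0;
        for (int step = 1; step <= 40; step++) {
            double next = current * r;
            if (freed >= next) { fits = step; break; }
            freed += current;  /* old block is freed after the copy */
            current = next;
        }
        if (fits)
            printf("r = %.1f: freed space first fits the new block at step %d\n", r, fits);
        else
            printf("r = %.1f: freed space never catches up\n", r);
    }
    return 0;
}
```

With r = 1.5 the freed space catches up after a few resizes; with r = 2 it never does.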

Very nice video!

But I'm not so sure that state-of-the-art real-world dynamic array implementations use a growth factor > 2.
E.g. folly uses one < 2 to work better with jemalloc's behaviour.

Luca