GopherCon 2018: Kavya Joshi - The Scheduler Saga

The Go scheduler is the behind-the-scenes magical machine that powers Go programs. It efficiently runs goroutines, and also coordinates network I/O and memory management.

Kavya’s talk will explore the inner workings of the scheduler machinery. She will delve into the M:N multiplexing of goroutines on system threads, and the mechanisms to schedule, unschedule, and rebalance goroutines. Kavya will also touch upon how the scheduler supports the netpoller and the memory management systems for goroutine stack resizing and heap garbage collection. Finally, she will evaluate the effectiveness and performance of the scheduler.
Comments

This really is a great talk. I spent most of the 90s working with task schedulers, and have recently been playing with Go. I'd never seen any explanation of how Go does scheduling, but this talk makes things crystal clear, and maps nicely onto my experience. Kavya clearly knows her stuff, and tries to keep things interesting by changing intonation and using a little stagecraft, rather than letting viewers drift off to sleep. Nice work!

andrewlaw

Absolutely amazing presentation. So organized... Thanks Kavya for such great content.

AmarjeetAnandsingh

Great presenter. I love her older Go channels talk.

arhyth

Great talk and explanation. I would definitely watch this again to get a better grasp. Thanks Kavya for this awesome talk.

blank

I've been stuck looking to improve an M:N scheduler for my OS/language... found this video and it answers a TON of design and performance questions I had. ("Work-stealing run queues" was the keyword I was looking for.) Excellent talk, thanks a bunch!

sbef

Not sure about you guys, but I'm a beginner and had to rewatch this presentation a few times to understand it in depth. Honestly, the content is awesome.

varshathkumarterli

Amazing presentation! I was linked to this from Gophers Slack. I'm grateful to the person who did that!

aseemsavio

this is so awesome and easy to understand. Thank you for sharing!

nowarm

Amazing and well organized presentation! 👍

jeffliang

Thank you a ton. Awesome presentation.

maheshkottapalli

Wow! Clean presentation and nicely explained. Thanks Kavya.

ishnmu

Excellent content and presentation, thank you.

robgreen

Nice presentation! Personally, I love her voice.

wodeqiangne

Thanks for such informative content! Worth saving hours of googling 😀
Since the scheduler can create new threads to run the goroutines waiting in the local runq when existing threads are blocked on long-running goroutines, as long as the number of goroutine-running threads stays at or below the threshold limit:
if the scheduler does that, and in the future the blocked threads get unblocked and start picking goroutines from the runq, then the total number of goroutine-running threads will exceed the limit. Will the scheduler then destroy those extra threads after moving their runq entries elsewhere?

alihussainkhan

25:25 nice summary, better watch this a few more times

cadenzah

What about performing work stealing first from cores located on the same die, so as to minimize cache misses?

jmcguckin

At 24:22, what if T1 wakes up? There would then be 3 threads, which is beyond the number of CPU cores?

richardyang

Nice talk, but I still don't know how you can have 200k goroutines with this structure.

ovndfbs

Damn, she is trying so hard to sound mean!

andreypanin