'Managing Containers at Scale with CoreOS and Kubernetes' by Kelsey Hightower

The last decade belonged to virtual machines and the next one belongs to containers.

Virtualization led to an explosion in the number of machines in our infrastructure, and we were all caught off guard. It turns out that those shell scripts did not scale after all. Lucky for us, configuration management swooped in to save the day.

But a new challenge is on the horizon. We are moving away from node-based infrastructures where hostnames are chosen with care. Gone are the days of pinning one service to a specific host and wearing a pager in case that host goes down.

Containers enable a service to run on any host at any time. Traditional tools are starting to show cracks because they were not designed for this level of application portability. Now is the time to look at new ways to deploy and manage applications at scale.

Linux containers provide the ability to reliably deploy thousands of application instances in seconds. And we can manage it all with CoreOS and Kubernetes. This talk will help attendees wrap their minds around complex topics such as distributed configuration management, service discovery, and application scheduling at scale. It will discuss CoreOS, a Linux distribution designed specifically for application containers and running them at scale. It will examine the major components of CoreOS, including etcd, fleet, docker, and systemd, and show how these components work together with Kubernetes to solve the problems of today and tomorrow.

Kelsey Hightower
CoreOS
@kelseyhightower

Kelsey Hightower is product manager, developer, and chief advocate at CoreOS. Kelsey has worn every hat possible throughout his career in tech and enjoys leadership roles focused on making things happen and shipping software. Kelsey is a strong open source advocate focused on building simple tools that make people smile. When he is not slinging Go code, you can catch him giving technical workshops covering everything from programming and system administration to his favorite Linux distro (CoreOS).
Comments

Very well presented. Loved the Tetris analogy!

jasongarland

Awesome presentation, even in 2019. Very well done!

pojntfxlegacy

The best technical presentation I have ever watched.

sipatha

Excellent presentation! Just had one question for you: with 5 servers in the cluster, each of them can only expose port 80 once. Resources aside, can you essentially create unlimited pods (since each is assigned a private IP address with, say, port 80 exposed on it) and load balance across them via a Service plus the Kubernetes API? The result would be that port 36000 is opened on each node, but the Service load balances to however many containers are behind it, right?

Thanks in advance! I'm trying to grasp everything before I effectively replace my entire home lab (2x 16-CPU servers with 48 GB RAM each) with Kubernetes and then later move it into a work setting (once I feel comfortable with everything).

EDIT: Forgot to ask about storage and how to handle that as well. I understand the configuration is saved to etcd, but I was wondering how data in Redis (from your example) or even MySQL could be reliably backed up/shared across 5 different physical nodes.

carl
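
The setup the question above describes can be sketched as a Deployment plus a NodePort Service. This is a minimal illustrative fragment in current Kubernetes API terms (not the 2014-era API shown in the talk); every name, label, image, and the 36000 port are assumptions for illustration, not details from the talk:

```yaml
# Hypothetical sketch: many pods, each with port 80 on its own pod IP,
# fronted by one Service that opens a single port on every node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 10              # can exceed the node count; no host-port conflict,
  selector:                 # because each pod gets its own private IP
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80   # port 80 inside each pod, not on the host
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web                # routes to every pod carrying this label
  ports:
  - port: 80                # cluster-internal Service port
    targetPort: 80          # pod port behind the Service
    nodePort: 36000         # opened on every node; traffic to any node:36000
                            # is load-balanced across all matching pods
```

So yes: pods are only bounded by cluster resources, and the one port the Service opens on each node (36000 here) is load-balanced across however many pods match the selector.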