Metrics-driven tuning of Apache Spark at scale

Tuning Spark can be difficult, since there are many configuration parameters and metrics to consider. As the Spark applications running on LinkedIn’s clusters have become more diverse and numerous, it is no longer feasible for a small team of Spark experts to help individual users debug and tune their applications. Users need to get advice quickly and iterate on their development, and problems need to be caught promptly to keep the cluster healthy. To achieve this, we automated the process of identifying performance issues and providing custom tuning advice to users, and made scalability improvements to handle thousands of Spark applications per day.
We leverage the Spark History Server (SHS) to gather application metrics, but as the number of Spark applications and the size of individual applications have grown, the SHS has not been able to keep up, falling hours behind during peak usage. We will discuss changes to the SHS that improve its efficiency, performance, and stability, enabling it to analyze large volumes of logs.
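To make this concrete, here is a minimal sketch of pulling per-executor metrics from the SHS REST API, which exposes the kind of data such analyses consume; the host name and application ID are placeholder assumptions:

import scala.io.Source

object ShsExecutorFetch {
  def main(args: Array[String]): Unit = {
    val shsHost = "http://spark-history.example.com:18080"  // assumption: your SHS address
    val appId   = "application_1234567890123_0001"          // assumption: a completed application ID
    // The SHS serves per-executor summaries (memory, task time, GC time, ...) as JSON.
    val url  = s"$shsHost/api/v1/applications/$appId/executors"
    val json = Source.fromURL(url).mkString
    println(json)
  }
}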
Another challenge we encountered was a lack of proper metrics for Spark application performance. We will present new metrics added to Spark that precisely report resource usage at runtime, and discuss how they feed into heuristics for identifying problems. Based on this analysis, custom recommendations are provided to help users tune their applications.
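As an illustration, a heuristic of this kind might compare the peak memory an application's executors actually used against what was configured. The field names, the 20% headroom, and the 2x over-provisioning threshold below are illustrative assumptions, not the exact rules from the talk:

// Sketch of a memory-sizing heuristic: flag apps that reserve far more memory than they use.
case class ExecutorStats(peakJvmUsedBytes: Long)

object MemoryAdvisor {
  private val SafetyMargin = 0.8  // assumption: keep 20% headroom over the observed peak

  // Returns a suggested executor memory size when the app reserved over twice what it used.
  def recommend(configuredBytes: Long, execs: Seq[ExecutorStats]): Option[Long] = {
    require(execs.nonEmpty, "need at least one executor's stats")
    val peak      = execs.map(_.peakJvmUsedBytes).max
    val suggested = (peak / SafetyMargin).toLong
    if (suggested < configuredBytes / 2) Some(suggested) else None
  }

  def main(args: Array[String]): Unit = {
    // Example: executors peak at 2 GiB while 8 GiB is configured -> suggest ~2.5 GiB.
    val advice = recommend(8L << 30, Seq(ExecutorStats(2L << 30)))
    advice.foreach(b => println(s"consider lowering spark.executor.memory to ~${b >> 20} MiB"))
  }
}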
We will also show the impact of these tuning recommendations, including improvements in both application performance and overall cluster utilization.
Speakers
EDWINA LU
Staff Software Engineer
LinkedIn
YE ZHOU
Software Engineer
LinkedIn