Measuring Software Quality - Juliet Hougland
Measuring Software Quality with Lessons from Epidemiologists, Actuaries and Charlatans
Is our software any good?
Is our work on it making it better or worse?
Can we quantify how much it has changed?
Engineering organizations face these questions constantly, and know there are no easy answers. Luckily, we can draw on well-known risk assessment techniques from epidemiologists and actuaries. We will explore the historical development of these ideas, from studying the effects of smoking to setting maritime cargo insurance rates in Babylon, ancient Greece, and Victorian England. This talk will focus on how Cloudera measures and compares the quality of our software.
As useful as observational methods of risk assessment are, they are also easy to misuse and misinterpret. We will discuss some choice examples of misuse and abuse of analytic methods, with examples ranging from Newton’s Principia to particle physicists, and hopefully avoid our own charlatanry in the future.
Juliet Hougland is a Data Scientist at Cloudera, where she does data science for the Engineering organization. Previously at Cloudera, she built distributed machine learning pipelines for customers, advised customers on best practices for data science with big data, and developed and contributed to open source software such as SparklingPandas, Spark, and Kiji. Her commercial applications of data science include developing predictive maintenance models for oil & gas pipelines at Deep Signal, and designing and building a platform for real-time model application, data storage, and model building at WibiData. Juliet was the technical editor for Learning Spark by Karau et al. and Advanced Analytics with Spark by Ryza et al.
She holds an MS in Applied Mathematics from the University of Colorado Boulder and graduated Phi Beta Kappa from Reed College with a BA in Math-Physics.