True ML Talks #17 | All Things Machine Learning and AI at Slack
Welcome to the next episode of TrueMLTalks. Join us as we host Katrina Ni, a senior ML engineer and lead member of the technical staff at Slack. Slack is one of the biggest conversational platforms for organizations, and Katrina has been leading the ML team there, working on the Recommend API, spam detection, and other product functionalities. Dig into some of the use cases at Slack that are powered by ML and Gen AI, and how they shape the user experience.
During our talk, Katrina Ni covered the following subjects:
✅ The use cases that ML and Gen AI power at Slack
✅ The architecture of the infrastructure and ML stack at Slack
✅ The model training pipeline of the Recommend API
✅ Hierarchical corpora and sources, training different models for each use case, and automated training with Airflow
✅ Offline and online metrics tracking, detecting feature drift, and the importance of retraining models regularly
✅ The challenges and constraints of dealing with customer privacy
✅ The journey of Slack GPT and the applications it empowers
✅ The role of prompt engineering and the future of ML models at Slack
00:00 Start
01:39 ML use cases at Slack and how they impact the user journey
04:28 Infrastructure evolution at Slack
09:52 Deep dive into how Katrina enhanced the Recommend API at Slack
12:59 Architecture of the Recommend API's model training pipeline
15:20 Why Slack uses Airflow over Kubeflow or Metaflow
18:38 All about Slack's feature engineering layer
20:14 Process of moving trained models to production
22:15 Deployment of models and challenges faced along the way
23:32 Frameworks used in model monitoring pipelines
25:20 Is the retraining pipeline automatic or manual?
26:50 Metrics used to measure the business impact of the Recommend API
28:01 Core principles around which the stack is built
29:13 What models were trained with data privacy in mind
31:18 All about Slack GPT and its use cases
32:57 Changes in infrastructure while training LLMs
37:16 Katrina's take on the future of prompt engineering
ABOUT OUR GUEST
Katrina Ni is a Machine Learning Engineer on the Slack ML Services team, where they build the ML platform and integrate ML — e.g. the Recommend API and spam detection — across product functionalities. Prior to Slack, she was a Software Engineer on the Tableau Explain Data team, where they built tools that utilise statistical models to propose possible explanations, helping users inspect, uncover, and dig deeper into their visualisations.
⚡ Visit the following URL to get in touch with Katrina Ni: -
ABOUT OUR CHANNEL
We converse with machine learning specialists from companies including Gong, Intuit, Salesforce, Facebook, DoorDash, and others in our episodes of TrueMLTalks. The series offers an informed perspective on their experiences managing complex machine learning pipelines and establishing efficient best practices, which will be helpful to professionals who wish to keep up with the latest developments in the field.
_____________________________________
ABOUT TRUEFOUNDRY
TrueFoundry specialises in providing a comprehensive ML/LLM Deployment PaaS, empowering ML teams in enterprises to test and deploy ML/LLMs with ease. Our platform ensures full security for the infrastructure team, reduces costs by 40% through resource management, and enables 90% faster deployments by following SRE best practices. Additionally, we offer pre-configured models from our catalog, allowing users to fine-tune them on their datasets, particularly for LLM/GPT-style models.