Evidently AI

Open-source LLM tracing, evals and prompt optimization with Evidently (15:55)
8. Tutorial: Adversarial testing for LLM applications (13:24)
7. Tutorial: Building and evaluating an AI agent (17:35)
6.2. Tutorial: Building and evaluating a RAG system (16:13)
6.1 How to evaluate a RAG system: methods and metrics (7:08)
5. Tutorial: Evaluating LLMs on content generation tasks. Tracing and experiments. (26:01)
4. Tutorial: Evaluating LLMs on classification tasks (19:01)
3. Tutorial: How to create an LLM judge and align with human labels (24:24)
2.3. Tutorial on LLM evaluation methods: Reference-free evals. (14:07)
2.2. Tutorial on LLM evaluation methods: Reference-based evals. (10:26)
2.1. Tutorial on LLM evaluation methods. Overview and Basic API. (10:13)
1. Introduction to LLM evaluations in 10 key ideas (11:43)
LLM evaluation for builders - Course announcement (1:43)
How to run LLM evals with no code | PRACTICE (12:27)
How to continuously improve LLM products? (2:34)
A business case for LLM evaluations (4:06)
AI agent and RAG evaluation (6:05)
LLM observability in production: tracing and online evals (5:09)
LLM safety and red-teaming (5:11)
LLM-as-a-judge: evaluating LLMs with LLMs (5:41)
LLM evaluation methods and metrics (5:10)
LLM evaluation datasets: test cases and synthetic data (6:06)
How to evaluate an LLM application (6:00)
LLM evaluation benchmarks (3:07)
What is an LLM-powered product? (3:08)
Welcome to the LLM evaluation course (1:32)
Open-source LLM Evaluation with Evidently - Intro (2:30)
LLM Evaluation Tutorial with Evidently (35:45)
6.5. Connecting the dots: full-stack ML observability. (5:21)
6.4. ML monitoring with Evidently and Grafana. [OPTIONAL CODE PRACTICE] (25:03)
6.3. ML model monitoring dashboard with Evidently. Online architecture. [CODE PRACTICE] (21:30)
6.2. ML model monitoring dashboard with Evidently. Batch architecture. [CODE PRACTICE] (32:18)
6.1. How to deploy a live ML monitoring dashboard (6:56)
5.8. Log data drift test results to MLflow. [CODE PRACTICE] (16:47)
5.7. Run data drift and model quality checks in a Prefect pipeline. [OPTIONAL CODE PRACTICE] (17:59)
5.6. Run data drift and model quality checks in an Airflow pipeline. [OPTIONAL CODE PRACTICE] (28:49)
5.5. Design a custom test suite with Evidently. [CODE PRACTICE] (14:30)
5.4. Test ML model outputs and quality. [CODE PRACTICE] (18:39)
5.3. Test input data quality, stability and drift. [CODE PRACTICE] (23:16)
5.2. Train and evaluate an ML model. [OPTIONAL CODE PRACTICE] (33:05)
5.1. Introduction to data and ML pipeline testing. (8:26)
4.7. How to choose the ML monitoring deployment architecture. (19:46)
4.6. Implementing custom metrics in Evidently [OPTIONAL, CODE PRACTICE] (31:25)
4.5. Custom metrics in ML monitoring (4:36)
4.4. How to choose a reference dataset in ML monitoring. (12:01)
4.3. When to retrain machine learning models. (17:21)
4.2. How to prioritize ML monitoring metrics (14:41)
4.1. Logging for ML monitoring (10:01)
3.6. Monitoring multimodal datasets. (5:33)
3.5. Monitoring text data and embeddings. [CODE PRACTICE]. (33:53)
3.4. Monitoring embeddings drift (6:26)
3.3. Monitoring text data quality and data drift with descriptors (4:53)
3.2. Monitoring data drift on raw text data. (7:24)
3.1. Introduction to NLP and LLM monitoring (6:55)
2.8. Data and prediction drift in ML. [CODE PRACTICE] (20:45)
2.7. Deep dive into data drift detection. [OPTIONAL] (23:06)
2.6. Data and prediction drift in ML. (16:35)
2.5. Data quality in ML. [CODE PRACTICE]. (19:00)
2.4. Data quality in machine learning. (8:33)
2.3. Evaluating ML model quality. [CODE PRACTICE]. (22:50)
2.2. Overview of ML quality metrics. Classification, regression, ranking. (25:22)
2.1. How to evaluate ML model quality. (5:00)
1.5. ML monitoring architectures. (10:31)
1.4. Key considerations for ML monitoring setup. (9:23)
1.3. ML monitoring metrics. What exactly can you monitor? (5:15)
1.2. What is ML monitoring and observability? (8:12)
1.1. ML lifecycle. What can go wrong with ML in production? (5:48)
Open-Source ML Observability Course - Welcome Video (2:29)
Evidently - Core Concepts - Metric, Report, Test, Test Suite, Preset (4:23)
Evidently 0.2 - Getting Started Tutorial (18:20)
ML monitoring with Evidently. A tutorial from CS 329S: Machine Learning Systems Design. (41:02)
How to run Evidently using the Command Line (7:12)
Using column mapping in Evidently reports (7:54)
How to use Evidently in Jupyter Notebook to evaluate data and prediction drift in ML models (10:04)
Introducing Evidently, an open-source tool for ML model monitoring (1:40)
Evidently - Open-source tool for ML monitoring - 2min demo for Jupyter notebook (1:57)
Machine Learning Monitoring: What Is Concept Drift? (5:10)
Machine Learning Monitoring: What Is Data Drift? (3:06)
What Is Machine Learning Model Monitoring? (3:40)
Evidently AI Intro - Hello World, and Happy Holidays (0:56)