Evidently AI

Open-source LLM tracing, evals and prompt optimization with Evidently (15:55)
8. Tutorial: Adversarial testing for LLM applications (13:24)
7. Tutorial: Building and evaluating an AI agent (17:35)
6.2. Tutorial: Building and evaluating a RAG system (16:13)
6.1. How to evaluate a RAG system: methods and metrics (7:08)
5. Tutorial: Evaluating LLMs on content generation tasks. Tracing and experiments. (26:01)
4. Tutorial: Evaluating LLMs on classification tasks (19:01)
3. Tutorial: How to create an LLM judge and align with human labels (24:24)
2.3. Tutorial on LLM evaluation methods: Reference-free evals. (14:07)
2.2. Tutorial on LLM evaluation methods: Reference-based evals. (10:26)
2.1. Tutorial on LLM evaluation methods. Overview and Basic API. (10:13)
1. Introduction to LLM evaluations in 10 key ideas (11:43)
LLM evaluation for builders - Course announcement (1:43)
How to run LLM evals with no code | PRACTICE (12:27)
How to continuously improve LLM products? (2:34)
A business case for LLM evaluations (4:06)
AI agent and RAG evaluation (6:05)
LLM observability in production: tracing and online evals (5:09)
LLM safety and red-teaming (5:11)
LLM-as-a-judge: evaluating LLMs with LLMs (5:41)
LLM evaluation methods and metrics (5:10)
LLM evaluation datasets: test cases and synthetic data (6:06)
How to evaluate an LLM application (6:00)
LLM evaluation benchmarks (3:07)
What is an LLM-powered product? (3:08)
Welcome to the LLM evaluation course (1:32)
Open-source LLM Evaluation with Evidently - Intro (2:30)
LLM Evaluation Tutorial with Evidently (35:45)
6.5. Connecting the dots: full-stack ML observability. (5:21)
6.4. ML monitoring with Evidently and Grafana. [OPTIONAL CODE PRACTICE] (25:03)
6.3. ML model monitoring dashboard with Evidently. Online architecture. [CODE PRACTICE] (21:30)
6.2. ML model monitoring dashboard with Evidently. Batch architecture. [CODE PRACTICE] (32:18)
6.1. How to deploy a live ML monitoring dashboard (6:56)
5.8. Log data drift test results to MLflow. [CODE PRACTICE] (16:47)
5.7. Run data drift and model quality checks in a Prefect pipeline. [OPTIONAL CODE PRACTICE] (17:59)
5.6. Run data drift and model quality checks in an Airflow pipeline. [OPTIONAL CODE PRACTICE] (28:49)
5.5. Design a custom test suite with Evidently. [CODE PRACTICE] (14:30)
5.4. Test ML model outputs and quality. [CODE PRACTICE] (18:39)
5.3. Test input data quality, stability and drift. [CODE PRACTICE] (23:16)
5.2. Train and evaluate an ML model. [OPTIONAL CODE PRACTICE] (33:05)
5.1. Introduction to data and ML pipeline testing. (8:26)
4.7. How to choose the ML monitoring deployment architecture. (19:46)
4.6. Implementing custom metrics in Evidently [OPTIONAL, CODE PRACTICE] (31:25)
4.5. Custom metrics in ML monitoring (4:36)
4.4. How to choose a reference dataset in ML monitoring. (12:01)
4.3. When to retrain machine learning models. (17:21)
4.2. How to prioritize ML monitoring metrics (14:41)
4.1. Logging for ML monitoring (10:01)
3.6. Monitoring multimodal datasets. (5:33)
3.5. Monitoring text data and embeddings. [CODE PRACTICE] (33:53)
3.4. Monitoring embeddings drift (6:26)
3.3. Monitoring text data quality and data drift with descriptors (4:53)
3.2. Monitoring data drift on raw text data. (7:24)
3.1. Introduction to NLP and LLM monitoring (6:55)
2.8. Data and prediction drift in ML. [CODE PRACTICE]
20:45
Evidently AI
2.5. Data quality in ML. [CODE PRACTICE].
19:00
Evidently AI
2.3. Evaluating ML model quality. [CODE PRACTICE].
22:50
Evidently AI
2.7. Deep dive into data drift detection. [OPTIONAL]
23:06
Evidently AI
2.6. Data and prediction drift in ML.
16:35
Evidently AI
2.4. Data quality in machine learning.
8:33
Evidently AI
2.2. Overview of ML quality metrics. Classification, regression, ranking.
25:22
Evidently AI
2.1. How to evaluate ML model quality.
5:00
1.5. ML monitoring architectures. (10:31)
1.4. Key considerations for ML monitoring setup. (9:23)
1.3. ML monitoring metrics. What exactly can you monitor? (5:15)
1.2. What is ML monitoring and observability? (8:12)
1.1. ML lifecycle. What can go wrong with ML in production? (5:48)
Open-Source ML Observability Course - Welcome Video (2:29)
Evidently - Core Concepts - Metric, Report, Test, Test Suite, Preset (4:23)
Evidently 0.2 - Getting Started Tutorial (18:20)
ML monitoring with Evidently. A tutorial from CS 329S: Machine Learning Systems Design. (41:02)
How to run Evidently using the Command Line (7:12)
Using column mapping in Evidently reports (7:54)
How to use Evidently in Jupyter Notebook to evaluate data and prediction drift in ML models (10:04)
Introducing Evidently, an open-source tool for ML model monitoring (1:40)
Evidently - Open-source tool for ML monitoring - 2min demo for Jupyter notebook (1:57)
Machine Learning Monitoring: What Is Concept Drift? (5:10)
Machine Learning Monitoring: What Is Data Drift? (3:06)
What Is Machine Learning Model Monitoring? (3:40)
Evidently AI Intro - Hello World, and Happy Holidays (0:56)