
State-Of-The-Art Prompting For AI Agents

At first, prompting seemed like a temporary workaround for getting the most out of large language models. Over time, it has become central to how we interact with AI.

On this episode of The Lightcone, Garry, Harj, Diana, and Jared break down what they've learned from working with hundreds of founders building with LLMs: why prompting still matters, where it breaks down, and how teams are making it more reliable in production.

They share real examples of prompts that failed, how companies are testing for quality, and what the best teams are doing to make LLM outputs useful and predictable.

The prompt from Parahelp (S24) discussed in the episode: https://parahelp.com/blog/prompt-design

Chapters (Powered by https://chapterme.co/) -
0:00 Intro
0:58 Parahelp’s prompt example
4:59 Different types of prompts
6:51 Metaprompting
7:58 Using examples
12:10 Some tricks for longer prompts
14:18 Findings on evals
17:25 Every founder has become a forward deployed engineer (FDE)
23:18 Vertical AI agents are closing big deals with the FDE model
26:13 The personalities of the different LLMs
27:26 Lessons from rubrics
29:47 Kaizen and the art of communication
31:00 Outro
