Ensuring Reliable Few-Shot Prompt Selection for LLMs – 30% Error Reduction

Hello Redditors!

Few-shot prompting is a pretty common technique for LLMs: by including a few labeled examples of your data in the prompt, the model adapts "on the fly" and produces better results (a quick sketch of what that looks like is below). But what happens if the examples you provide contain errors?
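
For anyone who hasn't tried it, here's roughly what few-shot prompt construction looks like in Python. The texts, labels, and query are made up for illustration, and the commented-out call just shows the legacy (pre-v1) openai SDK style that the davinci models used:

```python
# Minimal sketch of few-shot prompt construction (hypothetical examples).
demos = [
    ("The movie was fantastic!", "positive"),
    ("I want my money back.", "negative"),
]

def build_prompt(demos, query):
    """Format labeled demonstrations followed by the unlabeled query."""
    shots = "\n".join(f"Text: {text}\nLabel: {label}" for text, label in demos)
    return f"{shots}\nText: {query}\nLabel:"

prompt = build_prompt(demos, "Best purchase I've made all year.")
print(prompt)

# With the legacy (pre-v1) openai SDK, the prompt would be sent roughly like:
# import openai
# completion = openai.Completion.create(model="davinci", prompt=prompt,
#                                       max_tokens=1, temperature=0)
```

If any of those demonstrations carry a wrong label, the model imitates the error.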

I spent some time playing around with OpenAI's davinci LLM and found that real-world data is messy and full of issues, which leads to poor-quality few-shot prompts and unreliable LLM predictions.

Unreliable prompts lead to unreliable predictions.

I wrote up a quick article showing how I used data-centric AI to automatically clean the pool of noisy examples and build higher-quality few-shot prompts (a rough sketch of the cleaning step is below). The resulting predictions had 37% fewer errors than the same LLM using few-shot prompts drawn from the raw noisy pool.
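
To give a flavor of the cleaning step: one common data-centric approach is cleanlab's find_label_issues, which flags examples whose given label disagrees with confident out-of-sample model predictions. This is a minimal sketch under that assumption, not the article's exact code; the texts, labels, and probabilities are made up:

```python
# Sketch: flag likely label errors in a few-shot examples pool with cleanlab.
import numpy as np
from cleanlab.filter import find_label_issues

pool = [
    ("Great product, works perfectly", "positive"),
    ("Absolutely love it", "negative"),   # mislabeled on purpose
    ("Best purchase this year", "positive"),
    ("Terrible, broke after a day", "negative"),
    ("Waste of money", "negative"),
    ("Do not buy this", "negative"),
]
classes = ["negative", "positive"]
labels = np.array([classes.index(label) for _, label in pool])

# Out-of-sample predicted probabilities per example (n_examples x n_classes),
# e.g. from a cross-validated classifier; hard-coded here for the sketch.
pred_probs = np.array([
    [0.05, 0.95],
    [0.10, 0.90],  # model confidently says "positive", contradicting the label
    [0.08, 0.92],
    [0.90, 0.10],
    [0.85, 0.15],
    [0.95, 0.05],
])

# Indices of likely label errors, worst first.
issues = find_label_issues(labels=labels, pred_probs=pred_probs,
                           return_indices_ranked_by="self_confidence")

clean_pool = [ex for i, ex in enumerate(pool) if i not in set(issues)]
print("Flagged as likely mislabeled:", [pool[i][0] for i in issues])
```

Few-shot prompts are then drawn from clean_pool instead of the raw pool.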

Let me know what you think!

submitted by /u/cmauck10


