
Prompt Engineering: Principles to Uphold

Sep 18, 2023

Prompt engineering has emerged as a controversial yet vital area in the thriving field of AIGC. In his presentation on State of GPT, Andrej Karpathy, Co-founder of OpenAI, briefly touched on the principles of prompt engineering. The slide deck for his presentation can be found at https://karpathy.ai/stateofgpt.pdf.

Within his presentation, two primary goals were highlighted:

(Slide from "State of GPT", Andrej Karpathy)

To achieve the best possible performance, particularly in one-shot prompt debugging, I have further divided performance into two specific aspects: Correctness and Accuracy. To ensure the reliability of prompt engineering, it is therefore imperative to uphold a set of guiding principles:

1. Correctness: This involves verifying that the generated completions align with our expectations. For instance, if we expect output in JSON format, we must ensure that the LLM comprehends the instruction and produces output in valid JSON.

2. Accuracy: When sampling with increased creativity, usually by setting a temperature greater than zero, it is essential to verify that the completions still adhere to the prompt instructions. Every completion should accurately produce the intended result.

3. Cost: While maintaining correctness and accuracy, it is equally important to control the token count. Prompt engineering should strive to reduce token consumption effectively, improving efficiency and cost-effectiveness.
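The three principles above can each be checked programmatically. Here is a minimal sketch of such checks in Python; the function names are my own illustration (not from the talk), and the 4-characters-per-token cost estimate is a rough rule of thumb rather than a real tokenizer:

```python
import json

def check_correctness(completion: str) -> bool:
    """Correctness: does the completion parse as the expected JSON?"""
    try:
        json.loads(completion)
        return True
    except json.JSONDecodeError:
        return False

def check_accuracy(completions: list[str], follows_instructions) -> float:
    """Accuracy: fraction of completions sampled at temperature > 0
    that still satisfy the prompt's instructions (via a caller-supplied check)."""
    if not completions:
        return 0.0
    return sum(follows_instructions(c) for c in completions) / len(completions)

def estimate_tokens(text: str) -> int:
    """Cost: crude token estimate (~4 characters per token for English text);
    a real tokenizer such as tiktoken would give an exact count."""
    return max(1, len(text) // 4)

# Example: score three sampled completions against the JSON-format requirement.
samples = ['{"name": "Ada"}', '{"name": "Grace"}', 'Sure! Here is the JSON...']
accuracy = check_accuracy(samples, check_correctness)
```

In a real pipeline, `follows_instructions` would encode whatever the prompt demands (a schema check, a regex, or another model acting as a judge), and the token estimate would come from the provider's tokenizer so that cost tracking matches billing.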

Prompter can efficiently assist you in improving the quality of your prompts from these three aspects. Feel free to give it a try.
