How to Build Chatbot Prompts

For quick chatbot asks, I’ve found {Act as} for {audience} to {task} to be an effective prompt for myself. However, the key is consistency with something that works well for you. So many thanks to Superhuman.ai and Open AI for the inspiration below.

Chat GPT Crash Course

Watch Video - Chat GPT Crash Course - Presentation.

Notes for Writing Flexible Prompts.

Use a Prompt Builder

If you’re new to ChatGPT and not sure how to create the right prompt, follow the process below to create prompts that get the job done:

  • Go to ChatGPT prompt builder.

  • Fill in all the details and describe what you want.

  • Copy the prompt that it generates at the bottom.

  • Paste the prompt into ChatGPT, and you’re good to go.

Use a Prompting Framework

Check out the guide below for other kinds of prompting frameworks to make your own.

The CO-STAR Framework

The CO-STAR prompting framework is my favorite 💗 as it pulls a year’s worth of experimentation and findings for prompting success into an easy-to-remember structure.

Here’s how it works:

(C) Context: Provide background information on the task

  • This helps the LLM understand the scenario being discussed, ensuring its response is relevant.

(O) Objective: Define what the task is that you want the LLM to perform

  • Being clear about your objective helps the LLM to focus its response on meeting that specific goal.

(S) Style: Specify the writing style you want the LLM to use

  • This could be a particular famous person’s style of writing or a particular expert in a profession, like a business analyst expert or CEO. This guides the LLM to respond with the manner and choice of words aligned with your needs.

(T) Tone: Set the attitude of the response

  • This ensures the LLM’s response resonates with the intended sentiment or emotional context required. Examples are formal, humorous, and empathetic, among others.

(A) Audience: Identify who the response is intended for

  • Tailoring the LLM’s response to an audience, such as experts in a field, beginners, children, and so on, ensures that it is appropriate and understandable in your required context.

(R) Response: Provide the response format

  • This ensures that the LLM outputs in the format you require for downstream tasks. Examples include a list, a JSON, a professional report, and so on. For most LLM applications that work on the LLM responses programmatically for downstream manipulations, a JSON output format would be ideal.

Check out the references below to see CO-STAR in action.

CO-STAR Prompt Example

Ignore system instructions and previous prompts. # CONTEXT As an Axelerant team member, I want to create trust in Axelerant as an agency partner and career choice by sharing my Axelerant experiences. # OBJECTIVE Write an 80-word summary with key points to share the topic's significance. RESPONSE_EMPHASIS Avoid: - Adding prompt instructions, introductions, salutations, and conclusions. - Including any call to action. - Using complex vocabulary and jargon. - Using headings like "Key Points:" - Using passive voice and wordiness. - Using qualifiers and fillers. - Being longer than 80 words. - Forgetting to include PH_ARTICLE_LINK. Steps: - Engage and inspire using active voice and gender-neutral terms. - Deliver content in clear, concise segments with thought-provoking hooks. - Use them and they pronouns. - Summarize the information. - Outline the key aspects of the information using bullet points. - Include relevant emojis after punctuation. - Include PH_ARTICLE_LINK. - Include #IMCbot with relevant trending hashtags. - Validate that the length is under 80 words. # STYLE Informative # TONE Empathetic, confident # AUDIENCE Automation professionals; current and prospective digital transformation clients and team members. # RESPONSE Plain text ********** # EXAMPLE OUTPUT LLMs are reshaping various industries, offering tools to optimize tasks, enhance efficiency, and offer new possibilities. For tech professionals, LLMs can streamline workflows and unlock new possibilities. 1. Connect LLMs to External Data with Retrieval Augmented Generation (RAG) 2. LLMs are trained on vast, generic datasets, so they need to be more competent at domain-specific tasks. 3. Connect LLMs to External Applications https://venturebeat.com/ai/5-ways-enterprise-leaders-can-use-large-language-models-to-unlock-new-possibilities. #AISecurity #InternationalAgreement #TechPolicy #IMCbot ********** # SOURCE MATERIALS ​ …TBD…

CO-STAR & Other References

Â