Course Review: ChatGPT Prompt Engineering for Developers


Just finished the short course “ChatGPT Prompt Engineering for Developers” offered by DeepLearning.AI in partnership with OpenAI, and here are my key takeaways:

The course uses the gpt-3.5-turbo model (which has the advantage of being substantially cheaper than its sister models) through the OpenAI API.
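For context, the course wraps the API call in a small helper function. Here is a minimal sketch of that idea, assuming the `openai` v1 Python client and an `OPENAI_API_KEY` environment variable; the helper name and defaults are my own, not the course's exact code:

```python
def build_messages(prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat-message format the API expects."""
    return [{"role": "user", "content": prompt}]

def get_completion(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send one prompt and return the model's reply (sketch, my own helper)."""
    # Imported lazily so this file loads even without the SDK installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(prompt),
    )
    return response.choices[0].message.content
```

With a helper like this, every lesson in the course boils down to changing the prompt string.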

The course's instructors, Andrew Ng and Isabella Fulford, called for transparency and responsibility when using AI models, and suggested labeling your LLM-generated communications appropriately to indicate that they were generated by AI.

❗There is a known limitation called “hallucination,” where the model confidently outputs plausible-sounding but incorrect information. Make sure to verify the model’s claims rather than trusting them blindly.

💡 The great thing about LLMs is that with a single prompt you can complete multiple tasks, with customized output for each task.
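For example, one prompt can ask for a summary, a sentiment label, and a topic list in a single structured response. The prompt text below is my own illustration, not copied from the course:

```python
def multi_task_prompt(review: str) -> str:
    """Build one prompt that performs three tasks on the same text
    (illustrative example, not the course's exact prompt)."""
    return (
        "Perform the following tasks on the text delimited by triple hashes:\n"
        "1. Summarize it in one sentence.\n"
        "2. Classify its sentiment as positive, negative, or neutral.\n"
        "3. List up to three topics it mentions.\n"
        'Return the result as JSON with keys "summary", "sentiment", "topics".\n'
        f"###{review}###"
    )

prompt = multi_task_prompt("The battery lasts all day and the screen is gorgeous.")
```

Asking for JSON keys also makes the reply easy to parse programmatically.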

Personally, I have been testing ChatGPT with English and Russian texts with good results so far, but I found the chatbot’s Mongolian to be weak.

💡 There is also a parameter called temperature that controls the randomness of the output: lower values make responses more reliable and predictable, while higher values make them more varied and creative.
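A quick sketch of how this looks in practice. The `temperature` parameter name is the OpenAI chat API's own; the helper function is my invention for illustration:

```python
def completion_request(prompt: str, temperature: float = 0.0) -> dict:
    """Build kwargs for a chat-completion call (helper is my own sketch).
    temperature=0 -> near-deterministic output; higher (up to 2) -> more random."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0 and 2")
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

deterministic = completion_request("Name one fruit.", temperature=0.0)  # reproducible
creative = completion_request("Name one fruit.", temperature=1.3)       # varied
```

For tasks like summarization or extraction you generally want temperature 0; for brainstorming or creative writing, something higher.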

The course is divided into the following parts; you can read about each part in more detail in my article, as I could not fit everything into one post due to LinkedIn’s character limit.

  1. Prompting guidelines
  2. Iterative prompt development
  3. Summarizing
  4. Inferring
  5. Transforming
  6. Expanding
  7. Chatbot

10/10 would recommend taking this course!

👉 A follow-up: I just translated and summarized a 260-page book in 1.5 hours, and it only cost me $0.90!

Had I done the summary manually, it would probably have taken me days, or even weeks.
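For the curious: a 260-page book will not fit into a single gpt-3.5-turbo request, so the workflow is roughly "split into chunks, summarize each chunk, then summarize the summaries." A rough sketch of that map-reduce pattern; the chunk size and helper names are my own assumptions, not the course's code:

```python
def split_into_chunks(text: str, max_chars: int = 8000) -> list[str]:
    """Split text into chunks on paragraph boundaries, each at most max_chars."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def summarize_book(text: str, summarize) -> str:
    """Map-reduce summarization: summarize each chunk, then combine.
    `summarize` is any prompt -> completion function (e.g. an LLM call)."""
    partials = [summarize(f"Summarize this passage:\n{c}")
                for c in split_into_chunks(text)]
    return summarize("Combine these partial summaries into one:\n" + "\n".join(partials))
```

The same structure works for translation: translate chunk by chunk, then stitch the pieces together.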