AI on Top of Prompt Engineering: Mentoring Generative AI for Better Results

Typically, data leads to information, information leads to knowledge, and insight from knowledge leads to wisdom. This is what we have learned since the inception of business intelligence. However, with the advent of Generative AI, it is now wisdom that plays the leading role in generating the information we ask for.

So wisdom is the new opportunity: under the name of prompt engineering, it sits on top of Generative AI and fuels its efficiency, yielding more accurate information without losing context.

The question is how we can become smart prompt engineers, so that most of the permutations and combinations of prompts on a subject, used to extract information from Generative AI, are available for our reference.

With this, can we ask whether the same old theory, in which data and information precede wisdom, still holds true for capturing all the variations in response to the set of prompts our minds can think of? What it suggests is that Large Language Models are applicable not only to creating content with Generative AI but equally to generating smart prompts, combined with reinforcement learning techniques.

The thought here is to put continuous feedback in place to improve the prompt-engineering corpus by:

  • Capturing all the variations in response to different prompts around a context while LLM / AI models are being trained and validated.
  • Once the model is in use, capturing the varied prompt phrasings of diverse sets of users, with methods in place that respect data privacy and security.

This involves putting a continuous learning loop in place to capture precision, recall, and other relevant metrics for each set of prompts and their responses. The result is a corpus of prompts for various contexts that can be auto-populated to tell the user which prompt will yield which results, along with the associated metrics. This enables users not only to get the information they are looking for but also to understand any limitations and biases associated with it.

So the availability of lists of prompts with metrics will not only improve the accuracy of the generated content but also help keep our sustainability goals in check. As we all know, Generative AI consumes a very high level of computing power, and every poorly chosen prompt burns energy that hits those sustainability goals directly.
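The sustainability point can be sketched with some back-of-the-envelope arithmetic: a vague prompt that forces retries consumes more tokens, and therefore more energy, than a precise prompt answered once. The per-token energy figure below is purely an illustrative assumption, not a measured value:

```python
# Illustrative only: JOULES_PER_TOKEN is an assumed constant, not a
# measurement of any real model's energy consumption.
JOULES_PER_TOKEN = 0.05

def estimated_energy(prompt_tokens: int, response_tokens: int) -> float:
    """Crude energy estimate for one prompt/response round trip."""
    return (prompt_tokens + response_tokens) * JOULES_PER_TOKEN

# A vague prompt needing three attempts vs. a precise prompt answered once
# (token counts are made up for the example).
vague_total = 3 * estimated_energy(40, 500)    # 3 round trips
precise_total = estimated_energy(60, 350)      # 1 round trip
savings = vague_total - precise_total
```

Whatever the true constants are, the shape of the argument holds: a corpus that steers users toward prompts known to work reduces wasted round trips, and with them wasted compute.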

The thought may seem very intuitive, yet it has a solid bearing on questions such as:

  • Do we know from our past learning the precision and recall of prompts that have already been tested?
  • Are we smart enough to ask the right prompt?
  • Is our machine also leveraging our wisdom to ask the prompt with the right context?
  • Are we sensitive to our net zero target which seems impossible with the computing power needs of Generative AI?

If you have answers to these questions, then you can very well say, “My wisdom sits on top of Generative AI, and it is my left and right brain that are driving Generative AI, not the other way around.”


More articles by Ambrish K Srivastava
