10-04-2023 21:11
Sharing this automated cloud flow showing how to extract text from emails using the AI Builder GPT capability.
Here are the instructions:
Extract the `orderNumber` and `deliveryNumber`, and find the `status` of the following email.
The status can be "Done", "N/A", or "Undone".
Use CSV format for your answer.
Add headers "Order Number", "Delivery Number" and "Status".
Add a line break at the end of your CSV lines.
If the text below has fewer than a couple of words or looks like placeholder text, respond “Sorry, I can’t extract information”; otherwise respond with the extracted information.
[Start of text]
Select the plain text content from the Dynamic content list
[End of text]
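Downstream of the flow, the model's answer has to be consumed in the exact shape the prompt asks for. Here's a minimal Python sketch of how that CSV response could be validated and parsed; the function name `parse_gpt_csv` and the refusal check are assumptions for illustration, not part of AI Builder itself:

```python
import csv
import io

# The refusal phrase the prompt instructs the model to return (straight
# apostrophe used here; match leniently in real use).
REFUSAL = "Sorry, I can't extract information"

def parse_gpt_csv(response: str):
    """Parse the model's CSV answer into a list of row dicts.

    Returns an empty list when the model declined to extract anything.
    Raises ValueError when the headers or status values drift from the
    format the prompt requested.
    """
    response = response.strip()
    if not response or response.startswith(REFUSAL):
        return []
    reader = csv.DictReader(io.StringIO(response))
    expected = {"Order Number", "Delivery Number", "Status"}
    if set(reader.fieldnames or []) != expected:
        raise ValueError(f"Unexpected headers: {reader.fieldnames}")
    rows = list(reader)
    for row in rows:
        if row["Status"] not in ("Done", "N/A", "Undone"):
            raise ValueError(f"Unexpected status: {row['Status']}")
    return rows

sample = "Order Number,Delivery Number,Status\n12345,DN-789,Done\n"
print(parse_gpt_csv(sample))
```

Validating the headers and the closed set of status values is what makes the trailing-line-break and fixed-header instructions in the prompt pay off: any drift in the model's output fails loudly instead of silently corrupting downstream data.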
Congratulations, you've created a flow that uses the AI Builder create text with GPT capability.
⚠️ Make sure that AI-generated content is accurate and appropriate before you use it.
Video walkthrough: https://www.youtube.com/watch?v=UchRykL7me8
I used this type of workflow to write directly into a SharePoint list. I am now getting errors in the flow checker preventing a save, saying I require content approvals. Is this a new hard stop when using "create text with GPT"?
Hi @ashharv
I appreciate your question and understand your concern. It sounds like you're having some issues with the workflow writing directly into a SharePoint list, and you're seeing errors that require content approvals. This can indeed seem like a hurdle, but let me clarify why it's essential.
When working with Generative AI, such as the "create text with GPT" feature, human oversight is pivotal. The current models, although advanced, have a tendency to fabricate or 'hallucinate' information. As part of our commitment to responsible AI, we already have measures in place to moderate content and prevent the generation of any inappropriate material. However, controlling the tendency of the model to fabricate responses isn't as straightforward. Even though we guide the model to stay as factual as possible, it can sometimes deviate and produce misleading outputs.
Moreover, there's a significant risk related to 'prompt injection' in the data. Even with well-crafted prompts and automation, compromised incoming data can lead to unexpected results.
Therefore, it's crucial to have human review over the generated content. This helps to mitigate the risks associated with 'hallucination' and 'prompt injection'. I hope this gives you a clearer understanding of why content approvals are necessary.
We're consistently exploring ways to alleviate these risks and streamline the process. As we discover new solutions, we'll incorporate them to enhance the experience for all users of automation with GPT. Your patience and understanding are greatly appreciated as we continue to improve.