02-29-2024 23:48
ChatGPT is a revolutionary Artificial Intelligence (AI) technology that has the power to change the way we work. Now that Microsoft has invested heavily in OpenAI, it’s only a matter of time before its capabilities are built into the Microsoft 365 and Azure platforms. By using Power Automate to integrate that technology into Microsoft Teams, you can access those AI capabilities directly from a Teams channel today, without waiting for Microsoft. You only need one OpenAI account for everyone in a channel to use ChatGPT from within Teams. Anyone with access to the channel can post a question to ChatGPT, which triggers an automated cloud flow. The flow reads the message details, passes the question as a variable in an HTTP POST request to ChatGPT, and then uses the returned answer to reply to the user in the thread where the question was asked.
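The core of the flow described above is building a JSON request body with the user's question substituted in. A minimal sketch in Python of what that body looks like (the function name `build_chat_body` is illustrative, not part of the actual flow; the model and parameter values mirror the HTTP action shown later in this thread):

```python
import json

# Endpoint the flow's HTTP action posts to.
OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_body(prompt, model="gpt-4-1106-preview"):
    """Mirror the JSON body the Power Automate HTTP action sends,
    with the user's Teams message inserted as the prompt."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1000,
        "temperature": 0.7,
    }
    return json.dumps(body)

print(build_chat_body("What is a cloud flow?"))
```

In the actual flow, `@{variables('prompt')}` plays the role of the `prompt` argument here.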
https://fortechsupport.com/blog/learn-how-to-add-chatgpt-to-microsoft-teams/
Please let me know what you think.
watch?v=U00hiV1I_S0
According to the training I watched, toggling off the "Chat history & training" option is the only apparent way to stop your data from being used for training and to prevent it from potentially being made visible to someone else. I would also like to know if that is not the case.
Hi Mate,
I don't think that this is the solution, to be honest.
When you click on your name at the bottom left and go to
Settings > Data Controls, you have the option to toggle it off even if you don't have a subscription:
I'm not sure if this also affects the integration with API. Someone more well versed in this would be able to clarify that.
The only issue here is that ChatGPT runs as a public service. It could pass information about your clients on to others if it decides that information is relevant to them.
So I believe the only way to stop this is to buy a subscription?
Purchasing a subscription for GPT-4 has nothing to do with the integration. The integration uses the API, and once you enter your billing information for the API, you get access to all the models, which are billed per 1K tokens.
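To make "billed per 1K tokens" concrete, here is a rough cost illustration. The rates below are hypothetical placeholders, not real OpenAI prices (they change; check the current pricing page):

```python
def request_cost(prompt_tokens, completion_tokens,
                 input_rate_per_1k=0.01, output_rate_per_1k=0.03):
    """Cost of one API call given token counts and HYPOTHETICAL
    per-1K-token rates for input and output tokens."""
    return (prompt_tokens / 1000) * input_rate_per_1k \
         + (completion_tokens / 1000) * output_rate_per_1k

# e.g. a 500-token question with a 250-token answer
print(request_cost(500, 250))
```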
This is the code I'm using in HTTP:
{
  "model": "gpt-4-1106-preview",
  "messages": [
    {
      "role": "user",
      "content": "@{variables('prompt')}"
    }
  ],
  "max_tokens": 1000,
  "temperature": 0.7
}
and the URI for HTTP is updated to https://api.openai.com/v1/chat/completions
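For reference, here is a minimal sketch of the same call made outside Power Automate, assuming your key is in an `OPENAI_API_KEY` environment variable. The HTTP action in the flow needs the equivalent `Authorization: Bearer` header alongside the body above:

```python
import json
import os
import urllib.request

# Assumed: key in env var; falls back to a placeholder for illustration.
api_key = os.environ.get("OPENAI_API_KEY", "sk-placeholder")

payload = {
    "model": "gpt-4-1106-preview",
    "messages": [{"role": "user", "content": "Hello from Teams"}],
    "max_tokens": 1000,
    "temperature": 0.7,
}

req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,
    },
    method="POST",
)

# urllib.request.urlopen(req) would actually send it; the answer text comes
# back in choices[0].message.content of the JSON response.
print(req.get_method(), req.full_url)
```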
Hello, I have encountered an issue. I wrote the code exactly as shown in the video, except that the "davinci-003" model is now deprecated, so I used "model": "gpt-3.5-turbo-0613". When I submit a request to the GPT chat, it looks like the code is executing, but it isn't. I waited up to 15 minutes, and the result is as shown in the screenshot. Could this be related to the model I'm using? I was thinking of trying GPT-4; I have the option to purchase a subscription, but I'm unsure if it's worth it. The issue might be something else entirely.
Hey, I'm trying to help my supervisor research this possibility, but we can't find the OpenAI documentation that you said in the video had more details on how to do this. Would you happen to know where we could find it? I've looked on the OpenAI website and haven't found it.
@MFS Thank you. I also noticed that the flow is slow to do anything at times. Can this be improved? Or is it an OpenAI thing?
I changed the trigger from a keyword to "When a message is posted in a channel".
The only issue I'm having with this is that Power Automate sometimes takes up to 2 minutes to detect that the message was posted. Otherwise the flow is working perfectly.
I also set up another flow that triggers on an @mention of the bot, and that one fires immediately. The only issue with this is that the other flow also triggers, so I get two responses for the same question.
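One possible way to avoid the double response (a sketch, not a tested recipe): add a trigger condition to the "When a message is posted" flow so it skips messages containing an @mention, leaving those to the @mention flow. In Power Automate this goes under the trigger's Settings > Trigger Conditions. The exact body path can vary by trigger version, so treat the expression below as an assumption to verify against your own trigger outputs (Teams stores mentions as `<at>` tags in the message HTML):

```
@not(contains(triggerOutputs()?['body/body/content'], '<at'))
```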
@Adam_deMont Thank you for this.
When I set this up and run it, I get a reply to my question from ChatGPT, but then it starts posting another question over and over in the same channel. Do you know why this happens and how it can be stopped?
Thank you!