12-19-2023 22:12
Extract Data From PDFs & Images With GPT
This template uses AI Builder's OCR for PDFs & Images to extract the text present in a file, replicates the file in a text (txt) format, then passes it off to a GPT prompt action for things like data extraction.
It seems to have an 85% or greater reliability for returning requested data fields from most PDFs. That is likely good enough for more direct data entry on some use cases with well-formatted, clean PDFs, and in many other cases it does a good first pass on a file, providing a default / pre-fill value for fields before a person checks & completes the record.
It does not require training on different formats, styles, wording, etc. It works on multiple pages at once. And you can always adjust the prompt to extract the different data you want on different documents & adjust how you want the data to be represented in the output.
It also...
-Runs in less than a minute, usually 10-35 seconds, so it can respond in time for a Power Apps call.
-Handles 10-20 document pages at a time given the recent Create text with GPT update to a 16k model.
-Does not use additional 3rd party services, maintaining better data privacy.
The AI Builder Recognize text action returns a JSON array of each piece of text found in the PDF or image.
The Convert to txt loop goes through each horizontal line of text in the PDF or image, from top to bottom, & creates a line of text that approximately matches both the text & the spacing between pieces of text on that line.
Each of those lines is then combined into a single block of text, like a big txt file, in the final Compose action before it is passed to GPT through the AI Builder Create text action.
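As a rough illustration of that conversion, here is a minimal Python sketch. It assumes a simplified result shape of {"text", "x", "y"} per piece of text, not the full Recognize text JSON schema, and the constants are illustrative only:

```python
def ocr_to_text(results, chars_per_unit=10, line_height=0.1):
    """Rebuild a plain-text page from simplified OCR results of the form
    {"text": str, "x": float, "y": float} (top-left coordinates)."""
    lines = {}
    # Bucket each piece of text into a horizontal row by its Y coordinate.
    for r in results:
        row = round(r["y"] / line_height)
        lines.setdefault(row, []).append(r)

    out = []
    for row in sorted(lines):
        line = ""
        for p in sorted(lines[row], key=lambda r: r["x"]):
            # Pad with spaces so the text lands near its X position.
            col = int(p["x"] * chars_per_unit)
            pad = max(col - len(line), 1 if line else 0)
            line += " " * pad + p["text"]
        out.append(line)
    return "\n".join(out)

sample = [
    {"text": "Invoice", "x": 0.5, "y": 0.5},
    {"text": "8304933707", "x": 4.0, "y": 0.5},
    {"text": "Date", "x": 0.5, "y": 0.7},
]
print(ocr_to_text(sample))
```

The flow does this with Apply to each / Select actions and expressions rather than a function, but the logic is the same: group by vertical position, sort by horizontal position, pad with spaces.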
Example
Demonstration Invoice Example...
The AI Builder action uses optical character recognition (OCR) on this invoice PDF to return each piece of text & its associated x, y coordinates.
Then the Convert to txt loop produces this output shown in the final Compose...
And if we copy that output over to a text (txt) notebook, then this is what it looks like...
That is then fed into this GPT action prompt...
Which produced this output...
{
"Invoice Date": "2022-09-20",
"Invoice Number": "8304933707",
"Purchase Order (PO) Number": "PO10022556-NIMR",
"Incoterms": "DAP",
"Delivery Or Ship To Address": "Dr The Mission Director, [REDACTED]",
"Consignee Address": "CHEMONICS INTERNATIONAL INC, ATT: ACC PAYABLE, GLOBAL HEALTH SUPPLY CHAIN (PSM), 1275 New Jersey Ave SE, Suite 200, WASHINGTON, DC, 20003 USA 20006, UNITED STATES OF AMERICA",
"Mode Of Shipment": "N/A",
"Product Lines": [
{
"Product Name": "KIT COBAS 58/68/8800 LYS, REAGENT IVD",
"Product Quantity": "49",
"Product Unit Price": "213.00",
"Product Line Total or Amount": "10,437.00",
"Manufacturer": "[REDACTED]"
},
{
"Product Name": "KIT COBAS 58/68/8800 MGP, IVD",
"Product Quantity": "165",
"Product Unit Price": "50.00",
"Product Line Total or Amount": "8,250.00",
"Manufacturer": "[REDACTED]"
},
{
"Product Name": "KIT COBAS 6800/8800 HIV 96T, IVD",
"Product Quantity": "5",
"Product Unit Price": "838.95",
"Product Line Total or Amount": "4,194.75",
"Manufacturer": "[REDACTED]"
},
{
"Product Name": "KIT COBAS 6800/8800 HIV 96T, IVD",
"Product Quantity": "313",
"Product Unit Price": "838.95",
"Product Line Total or Amount": "262,591.35",
"Manufacturer": "[REDACTED]"
},
{
"Product Name": "KIT COBAS 6800/8800 HIV 96T, IVD",
"Product Quantity": "65",
"Product Unit Price": "838.95",
"Product Line Total or Amount": "54,531.75",
"Manufacturer": "[REDACTED]"
},
{
"Product Name": "KIT COBAS HBV/HCV/HIV-1, CONTROL CE-IVD",
"Product Quantity": "72",
"Product Unit Price": "290.00",
"Product Line Total or Amount": "20,880.00",
"Manufacturer": "[REDACTED]"
}
],
"Invoice Total": "360,884.85",
"Banking Details": "[REDACTED]"
}
And remember you can always adjust the prompt to extract the different data you want on different documents & adjust how you want the data to be represented in the output. You can also often improve the output with more data specifications like "A PO number is always 2 letters followed by 8 digits. Only return those 2 letters & 8 digits."
Also, if you are working with some Word/.docx files, there are built-in OneDrive actions to convert them to .pdf files. So you should be able to process PDF, image, and/or Word documents with the same type of set-up.
Also if you need something that can handle much larger files with a better page text filter/search set-up & larger GPT context window, check out this Query Large PDFs With GPT RAG template.
Remember, you may need AI Builder credits for the OCR & GPT actions in the flow to work. Each Power Automate premium license already comes with 5,000 credits that can be assigned to your environment. Depending on your license & organization, you may already have a few credits assigned to the environment.
If you are new, you can get a trial license to test things out: https://learn.microsoft.com/en-us/ai-builder/administer-licensing
Lastly, Microsoft recently started requiring approval actions after every GPT action. If you want to get around this requirement, see this post on setting the approval step to automatically succeed & move to the next action.
Version 1.7 simplifies some expressions. Download this version if you are just trying to initially understand the programming of the flow, & don't care as much about speed or efficiency.
Version 1.8 adds a PageNumbers compose action that allows one to input specific pages of a PDF or image packet to pass on to the text conversion & GPT prompt. This could be useful for scenarios where the relevant data is always on the 1st couple of pages or for scenarios where one must filter to only the relevant pages/images because the full packet of PDF page data or image data would exceed the GPT prompt token / character limit.
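The PageNumbers idea amounts to a simple filter on the per-page OCR results before the text conversion. A hypothetical Python sketch (the function and parameter names are illustrative, not the flow's actual action names):

```python
def select_pages(pages, page_numbers):
    """Keep only the OCR page results whose 1-based page index was
    requested, e.g. page_numbers=[1, 2] for the first two pages."""
    wanted = set(page_numbers)
    return [p for i, p in enumerate(pages, start=1) if i in wanted]

# Example: pass only the first two pages of a five-page packet onward.
pages = ["page1 text", "page2 text", "page3 text", "page4 text", "page5 text"]
print(select_pages(pages, [1, 2]))  # → ['page1 text', 'page2 text']
```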
Version 2 redesigns the Convert to txt section of the flow to use several clever Select actions & expressions to avoid an additional level of Apply to each looping. So for an example 3 page document with 50 lines per page, instead of taking 15-20 seconds and 156 action calls, it takes 1 second and 21 action calls to create the text replica document.
This makes the entire flow 2X faster (15 seconds vs. 30 seconds) and 7X more efficient for daily action limits.
This makes some use-cases like real-time processing on a Power Apps document upload or processing of larger batches of documents each day much more viable.
Version 2.5 makes more changes to the Convert to txt component to create slightly more accurate text replicas, and changes the placeholder prompt to make the message more concise & accurate. It also moves the spaces & line-break into a single Compose called StaticVariables & changes the variable name to the now more accurate EachPage.
The Convert to txt piece now calculates the minimum X coordinate so it can subtract that number from all X coordinates & thus remove additional spaces on the left margin, helping to reduce the characters fed to the GPT prompt.
The Convert to txt piece also now has a ZoomX parameter in the StaticPageVariables action which sets the spaces multiple, or the number of spaces, per coordinate point. For example, 200 = more accurate text alignment, while 100 = fewer GPT tokens. So there may be some trade-offs here. (The Recognize text bounding boxes around longer pieces of text seem to be disproportionately larger than those around smaller pieces of text & can throw off the text alignment for rows/lines with multiple boxes / text entries.)
In addition, the Convert to txt piece will now include line-breaks for blank Y coordinate rows/lines to more accurately replicate the vertical spacing of pieces of text. I figured since each line should be just a line-break character, it shouldn't add much to the character / token count for the GPT prompt.
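Those 2.5 refinements can be sketched together in Python. This is a hypothetical stand-in for the flow's Select actions, not its actual expressions; zoom_x mirrors the ZoomX idea and is treated here as spaces per 100 coordinate units, which is an assumption about the scaling:

```python
def ocr_to_text_v25(results, zoom_x=200, line_height=0.1):
    """Sketch of the 2.5 refinements: subtract the minimum X coordinate to
    drop wasted left-margin spaces, scale horizontal spacing by a
    ZoomX-style factor, and emit blank lines for empty Y rows so the
    vertical spacing of the page survives."""
    if not results:
        return ""
    min_x = min(r["x"] for r in results)

    rows = {}
    for r in results:
        rows.setdefault(round(r["y"] / line_height), []).append(r)

    out = []
    # Walk every row index, including empty ones, so vertical gaps survive.
    for row in range(min(rows), max(rows) + 1):
        if row not in rows:
            out.append("")  # a blank Y row becomes a single line break
            continue
        line = ""
        for p in sorted(rows[row], key=lambda r: r["x"]):
            # zoom_x spaces per 100 coordinate units: 200 = tighter
            # alignment, 100 = fewer characters (and GPT tokens) per page.
            col = int((p["x"] - min_x) * zoom_x / 100)
            line += " " * max(col - len(line), 1 if line else 0) + p["text"]
        out.append(line)
    return "\n".join(out)
```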
So overall 2.5 adds some better options for increased extraction accuracy or for decreased characters/tokens per page & thus for slightly larger file capacity.
Version 2.7 makes another adjustment to the conversion from OCR coordinates to the text (txt) replica.
It now calculates the X coordinates of a piece of text from the mid-point between X coordinates 0 & 1. So along with the Y coordinates that were already being calculated from the mid-point between Y coordinates 0 & 3, this now registers the position of each piece of text from the center point of each coordinates box.
I also set it to estimate the whitespace / number of spaces between pieces of text from the estimated character length of the text rather than from the length of the overall coordinates box.
Overall this makes this set-up even more accurate, improving text alignment, improving performance on more tilted pages, & adjusting the spacing/alignment for different font / text sizes on the same line.
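A small sketch of the 2.7 positioning idea. I'm assuming the bounding box arrives as four corner points in reading order, roughly matching how Recognize text returns coordinates, though the exact schema may differ; avg_char_width is an illustrative constant, not a value from the flow:

```python
def center_position(box):
    """Register a piece of text at the center of its bounding box:
    X from the midpoint of X coordinates 0 & 1 (top edge), Y from the
    midpoint of Y coordinates 0 & 3 (left edge)."""
    (x0, y0), (x1, _y1), _corner2, (_x3, y3) = box
    return (x0 + x1) / 2, (y0 + y3) / 2

def estimated_width(text, avg_char_width=0.09):
    """Estimate printed width from the character count instead of the OCR
    box, since boxes around longer strings run disproportionately wide
    and skew the spacing."""
    return len(text) * avg_char_width

# A 2-wide, 1-tall box with its top-left corner at (1, 1):
print(center_position([(1, 1), (3, 1), (3, 2), (1, 2)]))  # → (2.0, 1.5)
```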
Version 2.9 Adjustment For New MS Approval Requirement & Retry Policy
I added the automatic approval step to get around the new MS approval action requirement. I also set the retry policy on the GPT action to retry every 5 seconds, up to 7 times, so it fails less often when spurious 429 Too Many Requests errors occur.
If the standard import of the flow-only packages below does not work for you, you can also try importing the flows through a Power Apps solution package here: Re: Extract Data From PDFs and Images With GPT - Power Platform Community (microsoft.com)
Microsoft is deprecating the original Create text with GPT action this template relies on.
Users may need to use the new “Create text with GPT using a prompt” action & create a custom prompt on that action instead.
https://learn.microsoft.com/en-us/ai-builder/use-a-custom-prompt-in-flow
See this post for an example set-up: https://powerusers.microsoft.com/t5/Power-Automate-Cookbook/Extract-Data-From-PDFs-and-Images-With-G...
The ExtractPDFImageDataWithGPT_1_0_0_x Power Apps solution package contains a version of the flow where this is outlined.
Thanks for any feedback,
Please subscribe to my YouTube channel (https://youtube.com/@tylerkolota?si=uEGKko1U8D29CJ86).
And reach out on LinkedIn (https://www.linkedin.com/in/kolota/) if you want to hire me to consult or build more custom Microsoft solutions for you.
https://www.youtube.com/watch?v=mcQr-JsGj6Q
Hello there!
Thank you for this! I have it up and running. I used the Attachment Control in Power Apps to create an in-app user upload for the PDF to extract the data from. By passing the uploaded PDF to SharePoint, I send it back to Power Automate to extract the text, and then a CSV file exports to SharePoint, which then opens in a new window. While I added some things to the beginning and end of this, all of the work happening in the middle is the same. This works very well.
For my particular use case this is only being used with one kind of PDF document. Because of that, I feel like I could greatly improve its performance by cleaning the text before it gets fed to GPT. While I usually need all pages of these PDFs (it is really not many pages, but the info on each page is dense), there are ample other things that could be ignored. As of right now, many of my PDFs have too many characters to process on my license, so I am trying to come up with ideas to compress this down.
I have cleaned these in Excel before (it's time consuming!), but utilizing basic Find/Replace, TextSplit, and Concat did accomplish the goal. So I feel like I have a handle on the logic that I need, but... I am struggling on the best way to execute that in PowerAutomate.
I am assuming I will have to use text expressions like the example below, but I am unsure of the best spot to put these, or how to get the cleaned text back to the flow.
https://digitalmill.net/2023/08/12/how-to-extract-and-clean-texts-with-power-automate/
Could you give me a suggestion on where to put this text clean up code in the flow, and how I get this newly cleaned text back into the original flow?
My entire team thanks you 😄
Microsoft is deprecating the original Create text with GPT action this template relies on. So even if some Azure GPT4 Turbo functionality does not outright replace this set-up, then this template will still require revisions.
You're not wrong... the one that was used has been deprecated, yes. https://learn.microsoft.com/en-us/ai-builder/azure-openai-model-pauto
In its place I used this one, and got more or less the same result. I don't see anywhere that this one has also been deprecated. https://learn.microsoft.com/en-us/ai-builder/use-a-custom-prompt-in-flow
Yes, we can use custom prompts to get the same output. But a problem I am running into is the only way to share flow templates at the moment is in Solution packages. And if any custom prompt is added to a flow in a solution, then it fails to import that solution to any new environment. So without creating an updated video & guide to show manually creating the new custom prompt after importing, I don’t have an easy way to share an updated template.
I'm receiving the following error after testing out the flow with an invoice.
It says "Create text with GPT" is forbidden, and it fails to complete the task.
The error it gives: model not supported, "Your model is using preview features that are no longer supported"
{
"statusCode": 403,
"headers": {
"Cache-Control": "no-cache",
"x-ms-service-request-id": "46049aa8-6535-46ac-b721-523063d0d3b1,9248d3d1-9e7d-4969-b95f-6745d59d25b0",
"Set-Cookie": "ARRAffinity=aabb61de717da66ea2a84aa79aff16cf240de68df2c33a738f3b1fdd255847db15134d20c556b0b34b9b6ae43ec3f5dcdad61788de889ffc592af7aca85fc1c508DC05A0F01305CA683646862; path=/; secure; HttpOnly,ReqClientId=4dc68089-2d61-477d-ac56-1dc608c1ac62; expires=Mon, 25-Dec-2073 22:56:36 GMT; path=/; secure; HttpOnly,ARRAffinity=aabb61de717da66ea2a84aa79aff16cf240de68df2c33a738f3b1fdd255847db15134d20c556b0b34b9b6ae43ec3f5dcdad61788de889ffc592af7aca85fc1c508DC05A0F01305CA683646862; path=/; secure; HttpOnly",
"Strict-Transport-Security": "max-age=31536000; includeSubDomains",
"REQ_ID": "9248d3d1-9e7d-4969-b95f-6745d59d25b0,9248d3d1-9e7d-4969-b95f-6745d59d25b0",
"CRM.ServiceId": "framework",
"AuthActivityId": "175c356f-2486-41ec-a382-f8c49a7398d0",
"x-ms-dop-hint": "4",
"x-ms-ratelimit-time-remaining-xrm-requests": "1,197.59",
"x-ms-ratelimit-burst-remaining-xrm-requests": "5998",
"OData-Version": "4.0",
"X-Source": "1512462421323232238114411091002527391551561399812774239014112819721126283521975216,14589231481331462220524831731010613122457442525623207158210821917518166158221206224",
"Public": "OPTIONS,GET,HEAD,POST",
"Date": "Mon, 25 Dec 2023 22:56:36 GMT",
"Allow": "OPTIONS,GET,HEAD,POST",
"Content-Type": "application/json; odata.metadata=minimal",
"Expires": "-1",
"Content-Length": "227"
},
"body": {
"error": {
"code": "0x80048d06",
"message": "{\"operationStatus\":\"Error\",\"error\":{\"type\":\"Error\",\"code\":\"ModelNotSupported\",\"message\":\"This scenario is not supported in this environment.\"},\"predictionId\":null}"
}
}
}
Any clue what the problem could be and how to solve this?
Yes, you need to use the new "Create text with GPT using a prompt" action and create a custom prompt in that new action in place of the old "Create text with GPT" action.
@takolota This solution is amazing, thank you so much for creating it! Have you (or others) found a way to partition the text output? Sometimes the amount of text exceeds the limit of the GPT action. It would be nice to be able to cut the input into parts and then feed those into GPT.
Exciting to hear about this option!
Thanks.
You can always use take( ) & skip( ) or chunk( ) expressions to cut up the text by character count.
Or you could copy part of the workflow to a parallel branch & use the page selection options to run different pages through different GPT actions.
Or there are some ways to limit the text extracted to only a specific area of each page using some fancier Filter array set-ups.
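For the take( ) / skip( ) / chunk( ) route, the idea is simply splitting the text replica by character count. A Python stand-in (the default size is illustrative; your GPT action's actual input limit may differ):

```python
def chunk_text(text, size=10_000):
    """Stand-in for the flow's chunk( ) expression: split a long text
    replica into pieces of at most `size` characters so each piece fits
    within the GPT action's input limit."""
    return [text[i:i + size] for i in range(0, len(text), size)]

parts = chunk_text("A" * 25, size=10)
print([len(p) for p in parts])  # → [10, 10, 5]
```

Splitting on the nearest line break instead of a hard character cut would keep each row of the text replica intact, which likely helps the prompt read the data.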
Hi @takolota , Hope you are doing well! I just have a query regarding the package that is attached.
I have imported the package and am trying to understand the flow. But it seems there is an error, "Request failed with status code 404", in the Create text with GPT step.
Could you please help me on this error? Attached image for reference.
Now it's available... I tried it and it's working, I guess.