Title: Import CSV File
I have created a CSV import 2.0.
It is smaller and simpler.
Find it here
Description: This flow allows you to import CSV files into a destination table.
I have actually uploaded 2 flows. One is a compact flow that simply removes the JSON-accessible dynamic columns.
You can easily access the columns in the compact version by using the column indexes: [0], [1], [2], etc.
Compact Version:
This step is where the main difference is. We don't recreate the JSON; we simply begin writing our data columns by accessing them via column indexes (in the apply to each actions). This cuts the run time by another 30% to 40%.
Detailed Instructions:
** If you need to add or remove columns, you can: follow the formatting and pattern in the "append json string items" step and anywhere else columns are accessed by column indexes, with 0 being the first column -- for example "variables('variable')[0]" or "split(...)[0]".
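If it helps to see that index logic outside Power Automate, here is a rough Python sketch of the same idea (the column names and positions are made up, not the ones in the sample file):

```python
# Illustration only: the same index-based column access the compact flow
# uses, shown in Python. Column positions and names here are hypothetical.
row = "1001,Widget,4999.99,2020-09-25"   # one data row from the CSV

columns = row.split(",")                 # like split(...) in a flow expression
order_id   = columns[0]                  # [0] = first column
product    = columns[1]                  # [1] = second column
unit_cost  = columns[2]                  # [2] = third column
order_date = columns[3]                  # [3] = fourth column

print(order_id, product, unit_cost, order_date)
```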
Import the package attached below into your own environment.
You only need to change the action where I get the file content from, for example to a SharePoint get file content or OneDrive get file content action.
My file in the example is located on-premises.
When you change the get file content action, it will remove any references to it; this is where they belong, though.
Please read the comments within the flow steps; they explain how to use this flow.
Be sure to watch where your money columns, or other columns containing commas, fall, since this flow replaces those commas with asterisks. When you write the data, you need to find the money columns and remove the commas, since the values will most likely go into a currency column.
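Here is a rough Python sketch of that comma handling, just as an approximation of the idea (it is not the flow's exact expressions): commas inside quoted values are swapped for asterisks so the row can still be split on commas, and the money value is cleaned before it goes into a currency column.

```python
import re

# Hypothetical row: a quoted description and a quoted money value, both
# containing commas that would otherwise break a plain comma split.
line = '1001,"Widget, large","4,999.99"'

# Protect embedded commas by swapping them for asterisks (quotes dropped here).
protected = re.sub(r'"([^"]*)"',
                   lambda m: m.group(1).replace(",", "*"),
                   line)
cols = protected.split(",")              # ['1001', 'Widget* large', '4*999.99']

# Money column: drop the placeholder entirely so the value can go into a
# currency field; a text column gets its comma back instead.
cost        = cols[2].replace("*", "")   # '4999.99'
description = cols[1].replace("*", ",")  # 'Widget, large'
print(cost, description)
```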
Also check the JSON step carefully, since it lets you access your columns dynamically in the following steps.
You will need to modify the JSON schema to match your column names and types; you should be able to see where the column names and types are within the properties brackets.
In this step of your JSON, notice that my values have quotes because mine are all string type, even the cost.
If you have number types, remove the quotes (around the variables) at the step where the items are appended, and you will probably need to replace the comma out of the money value (replace it with nothing).
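As a rough sketch of what the appended item looks like (hypothetical column names, shown in Python rather than the flow expression): string columns keep their quotes, while a numeric cost loses the quotes and the thousands comma.

```python
import json

cols = ["1001", "Widget", "4,999.99"]    # one parsed CSV row (hypothetical)

# All-string version: every value is wrapped in quotes, like my example.
item_all_strings = (
    '{ "orderId": "' + cols[0] + '", "product": "' + cols[1] +
    '", "cost": "' + cols[2] + '" }'
)

# Number version: no quotes around the cost, and the comma is removed first.
item_numeric_cost = (
    '{ "orderId": "' + cols[0] + '", "product": "' + cols[1] +
    '", "cost": ' + cols[2].replace(",", "") + ' }'
)

print(json.loads(item_all_strings)["cost"])    # 4,999.99 comes through as a string
print(json.loads(item_numeric_cost)["cost"])   # 4999.99 comes through as a number
```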
This step will control how many records you are going to process.
The flow is set up to process 200 rows per loop; this should not be changed, or the nested loops may go over the limit.
It will detect how many loops it needs to perform. So if you have 5,000 rows it will loop 25 times.
You should change the count, though. Make sure the count is higher than the number of loops; the count just prevents the flow from looping indefinitely.
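The loop math itself is simple; this Python sketch shows roughly what the flow works out with the 200-row setting (the safety margin here is just an example value):

```python
import math

total_rows    = 5000
rows_per_loop = 200

loops_needed = math.ceil(total_rows / rows_per_loop)   # 25 loops for 5,000 rows
safety_count = loops_needed + 5                        # keep the count above the loop total

for i in range(loops_needed):
    start = i * rows_per_loop
    end   = min(start + rows_per_loop, total_rows)
    # process rows[start:end] in this pass

print(loops_needed, safety_count)
```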
Note: This flow can take a long time to run; the more rows with commas in the columns, the longer it takes.
9-25-20 - A new version of the flow is available; it is optimized and should run 40% to 50% faster.
Questions: If you have any issues running it, most likely I can figure it out for you.
Anything else we should know: You can easily change the trigger type to be scheduled, manual, or when a certain event occurs.
The CSV file must be text based, that is, saved as plain text with a .csv extension (or .txt in some cases).
Note: any file with a .csv extension will probably show with an Excel icon.
The best way to find out if it will work is to right-click it on your computer and choose to open it in WordPad or Notepad. If you see the data, it will work. If you see different characters, it is an Excel-based file and won't work.
An Excel-based file is of type object and can't be read as a string in this flow.
You can easily convert it by saving the Excel file as CSV UTF-8 (comma delimited) in the Save As options.
It should also work on an Excel file without table formatting, as long as you convert it to CSV the same way and follow the same extension requirement.
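If you would rather check a file programmatically than open it in Notepad, this rough Python sketch looks for the Excel binary signatures (.xlsx files are ZIP archives that start with "PK"; legacy .xls files start with the OLE2 signature); a plain-text CSV starts with neither. The file name is hypothetical.

```python
def is_excel_binary(path: str) -> bool:
    """Rough check: True means a real Excel workbook, which this flow
    can't read as a string; False suggests a plain-text CSV."""
    with open(path, "rb") as f:
        head = f.read(4)
    # .xlsx = ZIP container ("PK..."), legacy .xls = OLE2 signature
    return head.startswith(b"PK") or head == b"\xd0\xcf\x11\xe0"

print(is_excel_binary("report.csv"))
```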
** If the file is on SharePoint, go ahead and save it as CSV UTF-8 and then change the extension to .txt, or SharePoint will force it to open as an actual Excel spreadsheet file.
You may also need to use the .txt extension with other "get file contents" actions besides SharePoint. I know for a fact that on-premises files can keep the .csv extension.
My sample run had 12,000 rows; this sample has 10 columns, but you can have any number of columns.
I also have a tab delimited version of this flow. @charliecantflow
Personally, I prefer it; since it is plain text, there are fewer issues with commas.
tab delimited
https://powerusers.microsoft.com/t5/Power-Automate-Cookbook/Import-Tab-Delimited-File/m-p/577499#M96
Hi Juresti,
That's not quite right. I have tried putting one of my mid-sized CSV reports into your flow, which looks excellent, but unfortunately it has hit a 5,000 row limit; my CSV has 11,000 rows in it!
Hello, it looks like I didn't test my flow well enough. I only ran fewer than 1,000 rows through it.
You would need something like this, where sets of 5,000 rows are written to text files and then those sets are read back for processing, all dynamically.
I will update the flow soon to include these steps.
In this flow I read N rows at a time,
write each set to a text file,
then read each set back in and process the rows / columns using JSON.
The loop is the one limited to 5,000; however, if the loop repeats, the count is reset.
So this loop is a do until; currently it will run 5 loops.
That's 5 x 5,000 = 25,000 records. I would break it up smaller, though, to be sure it runs.
I would do 5 loops x 3,000 records (get items) per loop = 15,000 records. It can be set as needed.
Now as we open each file back up, each file only has 3,000 records, so the loop within the loop will not go over the limit.
So it is safer with a lower number since we have a nested loop here.
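In Python terms, the batching I have in mind looks roughly like this (hypothetical file names; the flow itself would do the equivalent with its own write and read steps):

```python
ROWS_PER_FILE = 3000                     # records per set, as described above

with open("source.csv", encoding="utf-8") as f:
    rows = f.read().splitlines()

header, data = rows[0], rows[1:]
chunks = [data[i:i + ROWS_PER_FILE] for i in range(0, len(data), ROWS_PER_FILE)]

# Write each set to its own text file...
for n, chunk in enumerate(chunks):
    with open(f"set_{n}.txt", "w", encoding="utf-8") as out:
        out.write("\n".join([header] + chunk))

# ...then read each set back in and process its rows / columns.
for n in range(len(chunks)):
    with open(f"set_{n}.txt", encoding="utf-8") as f:
        for line in f.read().splitlines()[1:]:   # at most 3,000 iterations here
            columns = line.split(",")
            # append / process the columns here
```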
Looking forward to your updated flow! Thanks for all your hard work Juresti!
Hi @juresti and @seadude ,
great work you guys put into this and thank you so much for sharing!
In my use case I want to have users upload a .csv, which is then extracted in the manner @seadude has successfully laid out. At the end of the flow, however, I want the extracted rows to be returned to Power Apps as a collection so I can work further with them there.
Whenever I use the Respond to Power Apps action ('text') in Flow, all I receive as an answer in my Power Apps collection is 'true' instead of the rows I extracted.
Is it possible to achieve what I am looking for?
Thank you for your responses in advance.
BR
JH
@JHL , you should be able to send just about anything back to Power Apps, including files.
If you are sending it as text, you can then have a for-each (ForAll) in Power Apps that splits each line into columns and back into a table where you can further process it. This would be similar to a Collect where you have to specify { column : "value" }.
It is probably not going to populate a data source automatically since it is a string of text.
You can also check the flow run to see exactly what was sent to Power Apps; if the action doesn't show it, you can always add a compose action for the data you are sending.
I don't have a live example, but I have tested sending data to Power Apps that way.
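The Power Apps side would be something like a ForAll/Collect over Split, as described above; here is the same "text back into a table" logic sketched in Python just to show the shape of it (column names are hypothetical):

```python
# Text returned by the flow: one row per line, columns separated by commas.
response_text = "1001,Widget,4999.99\n1002,Gadget,12.50"

table = []
for line in response_text.split("\n"):             # one pass per row
    cols = line.split(",")
    table.append({"orderId": cols[0],               # like Collect(..., { column: "value" })
                  "product": cols[1],
                  "cost": cols[2]})

print(table)
```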
This flow is great! Nice work!
Could this have a continuation where data imported from the csv file is appended into an excel table stored in SharePoint?
Thanks
Hello @Zrt99
The flow is already set up so you can write the data wherever you need.
In the compact version, use the "retrieve each file" "apply to each" step.
In the JSON version, use the "process each data set" step.
You can add your append step below these steps.
In the compact version, access the columns via column indexes like the example compose action.
In the JSON version, you can select them dynamically from the drop-down list.
Note the Excel table limitations (as of the last time I used one): the file and table must exist at the time you set up your flow.
I have also experienced "file lock" issues with Excel.
How do you deal with the daily limits with all this looping?