01-08-2020 11:44 AM - last edited 04-05-2023 08:34 AM
Title: Import CSV File
I have created a csv import 2.0
It is smaller and simpler.
Find it here
Description: This flow allows you to import CSV files into a destination table.
I have actually uploaded two flows. One is a compact flow that simply removes the JSON-accessible dynamic columns.
In the compact version you can easily access the columns by using the column indexes: [0], [1], [2], etc.
Compact Version:
This step is where the main difference is. We don't recreate the JSON; we simply begin to write our data columns by accessing them via column indexes (in the apply to each actions). This will cut the run time by another 30% to 40%.
Detailed Instructions:
** If you need to add or remove columns you can; follow the formatting and pattern in the "append json string items" step and anywhere else columns are accessed by column index, with 0 being the first column -- for example "variables('variable')[0]" or "split(...)[0]", etc.
Import the package attached below into your own environment.
You only need to change the action where I get the file content, such as the SharePoint or OneDrive Get file content action.
My file in the example is located on-premises.
When you change the Get file content action it will remove any references to it; this is where it belongs, though.
Please read the comments within the flow steps; they should explain how to use this flow.
Be sure to watch where your money columns, or other columns containing commas, fall, as this flow replaces those commas with asterisks. When you write the data you need to find the money columns and remove the commas, since the value will most likely go into a currency column.
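For example (a rough sketch, not part of the original flow -- it assumes the split row is held in a variable named 'row' and the money value is the fourth column), you could strip the asterisk placeholder with an expression like:
replace(variables('row')[3], '*', '')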
Also check the JSON step carefully since it will let you access your columns dynamically in the following steps.
You will need to modify the JSON schema to match your column names and types; you should be able to see where the column names and types are within the properties brackets.
In this step of the JSON, notice my values have quotes because mine are all string type, even the cost.
If you have number types, remove the quotes (around the variables) in this step where the items are appended, and you will probably need to replace the comma out of the money value (replace it with nothing).
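As a rough illustration only (the column names here are hypothetical, not the ones in the flow), the Parse JSON schema follows the usual array-of-objects shape:
{
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "name": { "type": "string" },
      "cost": { "type": "string" }
    }
  }
}
Changing "string" to "number" for a column goes hand in hand with removing the quotes in the append step.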
This step will control how many records you are going to process.
The flow is set up to process 200 rows per loop; this should not be changed, or due to the nesting of loops it may go over the limit.
It will detect how many loops it needs to perform. So if you have 5,000 rows it will loop 25 times.
You should change the count, though. Make sure the count is greater than the number of loops; the count just prevents it from looping indefinitely.
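If you want to work out the number of loops yourself, a sketch like this (assuming the rows are stored in an array variable named 'rows') gives a ceiling division by 200:
div(add(length(variables('rows')), 199), 200)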
Note: This flow can take a long time to run; the more rows with commas in the columns, the longer it takes.
9-25-20 - A new version of the flow is available; it is optimized and should run 40% to 50% faster.
Questions: If you have any issues running it, most likely I can figure it out for you.
Anything else we should know: You can easily change the trigger type to be scheduled, manual, or when a certain event occurs.
The CSV file must be text based, as in saved as plain text with a .csv extension (or .txt in some cases).
Note: any file with a .csv extension will probably show with an Excel icon.
The best way to find out whether it will work is to right-click it on your computer and choose to open it in WordPad or Notepad. If you see the data, it will work. If you see different characters, it is an Excel-based file and won't work.
An Excel-based file is of type object and can't be read as a string in this flow.
You can easily convert it by saving the Excel file as CSV UTF-8 (comma delimited) in the Save As options.
It should also work on an Excel file without table formatting, as long as you convert it to CSV the same way and follow the same extension requirement.
** If the file is on SharePoint, go ahead and save it as CSV UTF-8 and then change the extension to .txt, or SharePoint will force it to open as an actual Excel spreadsheet file.
You may also need to use the .txt extension with other "get file contents" actions besides SharePoint. I know for a fact that on-premises files can stay with the .csv extension.
My sample run is 12,000 rows; this sample has 10 columns, but you can have any number of columns.
Thank you for the quick response @juresti
Can flow create any files? I can save the data in either .csv, .tsv, .xls, .xlsx, .xml, or .txt format to import into the routing software I use.
Yes, you can create any text-based type of file, such as CSV, tab delimited, XML, HTM, and plain text, among others.
Please forgive my ignorance, I am fairly new to Power Automate and there are still many things I do not understand.
Firstly, I have managed to cut my requirements down from 3 to 2 .csv files [Corporate.CSV and Online.CSV], which should hopefully simplify things a little.
But I am struggling with a few things.
I have never worked with JSON before, and I am not sure which portions of your flow I need to edit in order to replace the schema.
For my output file, I have 8 Columns...
A = ID
B = Phone
C = Name1
D = Name 2
E = Address1
F = City
G = PostCode
H = Notes
As I said, when it comes to the import files, I have two very different schemas that I want to pull from.
From Corporate.CSV, which has a total of 15 columns (A-O), I need to import columns in the order B,F,D,E,G,H,J,L to match my output schema.
From Online.CSV, which has a total of 46 columns (A-AU), I first need to delete any rows where column P contains the data "Pickup". Then I need to import columns in the order A,W,AD,AE,AF,AI,AH,AM to match my output schema.
It is possible that I may not have any data to import from either one of the two .CSV files on a given day, so I would like to keep both files "optional" at runtime.
Once the data is written to the new schema, I need to output to a Date/Time Stamped .CSV file.
What you need to do requires only a few changes.
You can have it prompt for a file. Add an input to the trigger and choose File.
You will need to give the ID of the file to the "Get file content" step.
This will need to be all manual entry:
/Shared Documents/Path/ followed by the formula triggerBody()['file']['name']
which needs to be encoded like this:
%252fShared%2bDocuments%252f...
Convert / to %252f
Convert spaces to %2b
Enter the file name by using the formula.
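As a hedged sketch (the folder name 'Path' below is just a placeholder for your own library path), the whole identifier can be assembled in one expression:
concat('%252fShared%2bDocuments%252fPath%252f', triggerBody()['file']['name'])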
The JSON also only requires a few changes.
Just count your columns: instead of A, B, C, D... use 0, 1, 2, 3, 4...
You will use the 0,1,2,3,4 to select the columns in the split formula.
This way you can also select only the columns you need and the order you need.
Add and remove columns as needed.
Of course col0, col1, col2 can be renamed to any column name you need.
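For instance (a sketch, assuming a comma delimiter and that the current row text is in a variable named 'row'), selecting columns B and F of Corporate.CSV for the first two output columns would look like:
split(variables('row'), ',')[1]
split(variables('row'), ',')[5]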
If your files' columns are different I would copy the flow and make two, since this version uses JSON.
To filter out the "Pickup" rows, just make a new filter after Parse JSON 2, near the end in retrieve files.
Then you will need to change the input of the following step (process each data set) to use the new filter output.
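A minimal sketch of the Filter array condition (assuming the column that holds "Pickup" is exposed as colP in the parsed JSON -- rename it to match your schema):
@not(contains(item()?['colP'], 'Pickup'))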
To create the final file after building the new CSV schema, do it after this step.
Instead of Compose 3 you will be appending column,column,column,column <Return> ... to a variable that you need to create.
Press enter at the end of the append so it creates a new line.
Once it is all appended you will have the CSV data. Use it to create your file and include the utcNow() formula in the name for the timestamp.
Be sure to "reset" the variable where you append the data so it is empty for the next set of data.
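For the timestamped file name, something like this works (the 'Output_' prefix is just an example):
concat('Output_', formatDateTime(utcNow(), 'yyyy-MM-dd_HHmmss'), '.csv')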
Hello Everyone,
I may have a simpler & more efficient set-up here: https://powerusers.microsoft.com/t5/Power-Automate-Cookbook/CSV-to-Dataset/td-p/1508191