No Delegation Limit - SharePoint List Power App
This template app & flow set-up presents a method of filtering, searching, sorting, & loading large SharePoint list data without delegation limitations.
https://youtu.be/EH-YndEPIiI?si=sy5Tb72c7amzeelX&t=94
(Each filter & search in the video is a new query of the datasource. It operates over 20,000+ items in 1-14 seconds each time, depending on how much the datasource has been narrowed down by filtering & searching.)
Usually a Power App cannot retrieve more than 2,000 items from a large SharePoint list when using more advanced filters and searches, especially on non-delegable fields like a Created By column or a multiline text column, or on many columns at once. This template does that & more using a Power Automate flow and the ParseJSON( ) function in Power Apps.
Say goodbye to SharePoint list delegation limitations & issues.
Load a much larger number of SharePoint records into a Power App, up to 100MB worth.
And avoid the SharePoint interface 5000 list view threshold with an app interface that can display many more than 5000 records on a screen. All without premium connectors.
Setting Up A Demo App
Download the Excel workbook data & solution package at the bottom of the post.
First we are going to use the Excel workbook to import a large set of data into SharePoint.
Then go to the home page of the SharePoint site where you want to create the demonstration large SP list. Get the site address from the url.
Insert the site url into the address input of the Excel pop-up & enter "Large SP List" for the new list name. Select Next. Check over the columns on the next menu & select Finish.
That will start loading the Excel rows into a new SharePoint list on the site.
Next go to the main Power Apps page. Select Solutions on the left-side menu. Then select the Import Solution button.
On the next menu to appear select Browse, then find the solution zip file you downloaded from this post & select that zip file for the import. Select Next on the following 2 menus.
On the following menu check that the connection listed is the one you want to use for this template app & flows.
Once the solution is successfully imported, find the solution in the table of solution display names & click its name. Then once in the solution, select Apps in the left side menu to see all the apps for this solution. Select the 3 vertical dots next to the Large SP List Delegation Workaround Demo app item & select Edit from the dropdown.
Once the Power Apps Canvas editor loads select the Power Automate logo on the left menu to bring up the flows menu. Then select Add flow. Then select each of the flows, Large SP List Query & Large SP List Query Sorts, to connect them to the app.
After connecting the flows, select the Datasource icon on the left side menu. Select Add data & search for SharePoint. Select the SharePoint option. Select a SharePoint connection.
Then on the right-side menu that pops up, select the site where you loaded the demonstration list. Then for the list name select the Large SP List from the options & select Connect.
Next, make sure the ParseJSON preview feature is enabled. Go to the 3 horizontal dots on the top command bar & select Settings. Then on the Settings pop-up menu, select Upcoming features & scroll until you see the ParseJSON function and untyped objects entry. Make sure its toggle is turned On.
Then go to the Tree view on the left-side menu & select App at the top of the tree. Select the OnStart property, expand the formula bar, & set the SiteAddress and ListName variables to use the site address URL & list name of your SP list.
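For reference, the relevant part of the OnStart should look something like this (the SiteAddress & ListName variable names come from the template, but the URL & list name shown here are placeholders to replace with your own):

Set(SiteAddress, "https://yourtenant.sharepoint.com/sites/YourSiteName");
Set(ListName, "Large SP List");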
Then, to load & view the items from the new flows & connections, select the 3 horizontal dots next to App in the tree view & select Run OnStart. This runs the query flow to grab the initial set of items from the SP list.
Once the items load, then you are ready to start exploring.
There are a few things you may want to note...
The OnStart property runs the query flow with default & blank parameters when the app initially loads and saves the returned JSON array string into a variable. The Items property of the main gallery with all the records then makes a Table( ) from the ParseJSON( ) of that variable.
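As a rough sketch of that pattern (the flow name, variable name, & output property name here are all illustrative; the template's actual names may differ):

// App OnStart: run the query flow & save the returned JSON array string
Set(varJSON, LargeSPListQuery.Run( /* default & blank parameters */ ).result);

// Main gallery Items: turn the JSON string into a table
Table(ParseJSON(varJSON))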
Notice how each app component referencing something from the gallery datasource must include an extra .Value before the actual field name. This is because the table made from the JSON array has only one column, called Value, which holds the JSON object for the entire record.
And Lookup & People columns have their own sub-object that must be referenced, so the reference becomes .Value.LookupColumnName.LookupFieldName.
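For example, a label inside the gallery might use expressions like these (Title & Author are illustrative column names; Text( ) coerces the untyped JSON values):

Text(ThisItem.Value.Title)
Text(ThisItem.Value.Author.Title)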
And again, see that the main gallery can use both filters & a search of all the columns in the large SP list to find items (including a multiline text column).
You can see on the OnSelect of the Search & Filter button that this is done by creating a set of OData filter queries for the Get items section of the flow, & by providing the flow with the search box input term, which the flow uses in a Filter array action to search for that term across all the columns of each record. Any newly filtered & searched output is then saved to the variable used by the main gallery's ParseJSON( ) & Table( ) expressions.
These resources may help in making dynamic OData queries:
1. https://www.youtube.com/watch?v=I8FdUmECAn8
2. https://www.spguides.com/power-automate-odata-filter-query/
Note that OData filters can use related lookup columns too, like Author/EMail.
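For instance, a dynamically built filter passed to the Get items action might look like this (Status is an illustrative column; Author/EMail is the Created By email reference mentioned above):

Status eq 'Complete' and Author/EMail eq 'user@contoso.com'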
The template is also already set to navigate to a details screen with an edit form whenever a user selects an item's Title value. The Item property of the edit form shows how the app leverages both the direct connection to the SharePoint list & the main gallery built off the JSON array of SP data from the flow. It uses the ID field that came with the JSON array data for the selected gallery item to filter the direct SP list connection down to just that item for the edit form. So if you ever adjust this template for your own use, make sure to include the SharePoint ID column & values in the OnStart SelectColumns variable input of the query flow. That way you can use this set-up to make editing data work the same as in any other canvas Power App where you use a direct datasource connection on a form.
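A minimal sketch of that Item property pattern, assuming a gallery named Gallery1 & the demo list connection (Value( ) coerces the untyped ID for the comparison):

LookUp('Large SP List', ID = Value(Gallery1.Selected.Value.ID))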
Also...
If you are trying to display the values of a multiple choice field using this method, see this post.
If you are trying to GroupBy( ) on a large number of items using this method, see this post.
Thanks for any feedback,
Please subscribe to my YouTube channel (https://youtube.com/@tylerkolota?si=uEGKko1U8D29CJ86).
And reach out on LinkedIn (https://www.linkedin.com/in/kolota/) if you want to hire me to consult or build more custom Microsoft solutions for you.
Version 1.5 Speed & Ease of Use Adjustments
Changes the flow query app inputs to accept more parameters so fewer edits need to happen in the flow. Things like SiteAddress & ListName can now be edited from the canvas app builder. Developers can also input a SelectColumns query to narrow the data being queried down to only the columns they list, which improves performance on larger data reads.
The default read batch size was also reduced from 5000 to 2500 and the default batch reading Apply to each loop concurrency was increased. This also improves the data read speed a little more.
And some additional OData columns are now removed early in the flow, before the data gets to other actions or to the app. This should make another small speed improvement & reduce the total data size transferred to the app.
Version 1.7 Expand Parameter for Lookup & People Columns
The flow inputs on the app now include an ExpandColumns parameter where developers can input the Lookup or Person ColumnName/ID in a comma-separated list to identify which Lookup or Person columns should include their related fields. Each related field then also needs to be listed in the SelectColumns flow parameter for it to be included in the outputs. For example, for the Created By & Modified By columns, ExpandColumns includes "Author/ID,Editor/ID" since those are the columns' back-end names & ID references. Then the SelectColumns parameter includes "Author/Title,Author/EMail,Editor/Title,Editor/EMail" so the query knows to get the Created By & Modified By Title (display name) & email values.
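Put together, the flow inputs from that example would be filled in like this (ID & Title are added to SelectColumns here since the edit-form note above recommends always including the ID column):

ExpandColumns: Author/ID,Editor/ID
SelectColumns: ID,Title,Author/Title,Author/EMail,Editor/Title,Editor/EMail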
Also further reduced the ReadBatchSize from 2500 to 2000 to try to get a little more query speed, & added an app flow parameter to adjust the batch size from the app itself.
Version 2.1 Large Reduction in Action API Calls Per Query
I started doing some scaling calculations from the perspective of the most likely users of an app with such a set-up. Office A1-E5 licensed users get 6000 action API calls per day per license across Power Apps & Power Automate (https://learn.microsoft.com/en-us/power-platform/admin/api-request-limits-allocations). If this set-up is used on a 1 million item SP list with a 5000 ReadBatchSize, then each flow query in previous versions would take (1,000,000 / 5,000) * 4 = 800 action API calls. So each of those standard licenses would have had only 6000 / 800 = 7.5 queries a day on a 1 million item list before exhausting its daily action limit. I've gone back through & cut the number of actions in the most intensive part of the query in half, & used some other adjustments to keep the overall speed/performance of the query the same. So with this Version 2 the average A1-E5 license should be able to query a 1 million item dataset up to 15 times per day, or a 100,000 item dataset 150 times per day, with a 5000 ReadBatchSize before hitting its daily action API call limits.
Now, one can always increase users' total queries / reduce the API calls used by reducing the MaxItemCheck parameter in the app, but the trade-off is that the lower the MaxItemCheck, the fewer of the most recent items the queries will filter & search over. So if you have something like a 10 million item list & you are fine with user queries only searching over the most recent 250,000 items, then you can reduce the MaxItemCheck to around 250,000 & all standard license users will then get up to 60 queries per day on the 250,000 most recent items of that 10 million item dataset.
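As a quick worked example under those assumptions (halved action count & a 5000 ReadBatchSize):

(250,000 MaxItemCheck / 5,000 ReadBatchSize) * 2 = 100 action API calls per query
6,000 daily calls / 100 calls per query = 60 queries per user per day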
Version 2.3 More Performance & Speed Improvements
I figured out that the main thing holding back the query performance isn't the amount of calculation involved, it's the amount of memory involved in moving any large block of data between the actions after the SharePoint HTTP Get items loops. So reducing the number of calculations in the expressions for those actions won't do as much as removing as many post-data-load actions as possible, even if that means putting more complicated expressions in the remaining actions.
Also, the odata=nometadata header parameter that most blogs suggest does not actually remove all the OData metadata. I've corrected the HTTP call headers to actually remove all the OData properties, and this has also reduced the query time.
In testing, I now have the 20,000-item example data loading in 7-11 seconds, and filters & decent searches loading in 1-5 seconds. Bad searches that return most of the list items are the only queries that may see performance issues & can take up to 18 seconds. This is because the flow query is optimized to increase speed by using more action API calls when it anticipates a large load without a search term; if given a search term, it expects that search to significantly reduce the amount of data going through the flow, so it uses a method that is slower but uses significantly fewer action API calls.
Version 2.8 Nested Sorting & Fix To Sort On Null Values
The flows can now take an additional column name value for nested sorting. So if you have a lookup or person column, you can input the name of the lookup or person column for the 1st value, then input its nested column name for the 2nd value. This helps sort on things like Created By columns because the flow can now take in "Author" as the 1st column value and "Title" as the 2nd column value, as shown below. However, nested column sorting will take longer on larger queries. For the best large query speeds, avoid sorting on lookup or person columns.
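For example, to sort on the Created By display name, the two sort inputs would be filled in like this (the input labels here are illustrative):

Sort column: Author
Nested sort column: Title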
Next, the flow(s) will fail if the Sort( ) expression is attempted on a column with null values. I have added an expression to replace any null values with a blank text string "". So the flow and app can sort on any string/text column with or without null/blank values. But if one tries to sort on any number-type column that contains null values, then the flow & app will error.
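In flow expression terms, that null replacement can be done with coalesce( ), roughly like this (Title is an illustrative column name & the template's actual expression may differ):

coalesce(item()?['Title'], '')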
Anyone who has issues with the Solution import can also try using the legacy Power Apps & Power Automate import methods to import each component of the solution here.
Additional thread & video reviewing parts of the "Large SP List Query" flow: Faster SharePoint List Filter And Search - Power Platform Community (microsoft.com)
Hi,
I tried your solution, & it is loading 20,000 records into the gallery in more than 20 seconds. Do I need to do anything to load 20,000 records into the gallery within 10 seconds?
Maybe my 8-15 second loads on my initial demo were faster as I was doing things on off-hours on a holiday weekend. Probably pretty low traffic for my region's servers.
I did start getting 20-25 second load times today.
But I just updated the post with Version 1.4 which allows a SelectColumns input for the query to further narrow down the data being read to fewer columns.
Also I reduced the default read batch size from 5000 to 2500 and I increased the default batch reading Apply to each loop concurrency to 20. That should somewhat improve the data read times for the 20,000+ item queries.
I started to get 15-18 second read times for the 20,000 items after these changes. Queries with no filters, just a search term are completing in 4-15 seconds. And any query with a filter added is still completing in 2-8 seconds.
I also updated the description of the main post to reflect the performance of 2-19 seconds instead of 10 seconds.
@suri455
Sorry, for some reason Power Automate didn't save my changes / loaded an earlier version of the flow when I updated the expressions to use the new Power Apps flow inputs. I had to re-apply those expression updates & re-export as Version 1.5.
@takolota I tested your flow against a similar flow I built explicitly for downloading large amounts of data and it was almost neck and neck. I also noticed that adding the $expand parameter in this version has made your flow much more straightforward.