Programming

[C#] Paging Dynamics CRM WebAPI with fetchXml

Recently I struggled quite a bit with how to correctly implement paging of queries against the Dynamics CRM Web API. The API returns no more than 5000 items, and any subsequent items are only available by making another request with parameters derived from the previous response. What is usually pretty easy with the Graph API (a dedicated property with a link to the next page) is a bit clumsy with this older API.

In my scenario, I was trying to combine paging with the so-called fetchXml syntax. There is some minimal documentation available, but it does not explain all the gotchas, and the samples require external pre-compiled libraries (with no .NET 5.0 support available).

The basic usage of fetchXml is described in the following article: Use FetchXML to query data (Microsoft Dataverse) – Power Apps | Microsoft Docs. In this blog post I show how I did it with C# / .NET 5.0; there is also a link to a more complex demo app on my GitHub:

https://github.com/marcinotorowski/dynamics-crm-webapi-demo

Let’s assume we have an endpoint and the required OAuth2 token to authorize our request. The first thing to do is to prepare the XML for the fetch query. There are tons of documents and samples; a sample document may look like this:

<fetch mapping="logical" page="1">
    <entity name="audit">
        <attribute name="objectid" alias="objectid" />
        <attribute name="createdon" value="FormattedValue" />
        <link-entity name="systemuser" to="objectid" link-type="inner">
            <attribute name="fullname" />
        </link-entity>
        <filter type="and">
            <condition attribute="action" operator="eq" value="64" />
        </filter>
    </entity>
</fetch>

This essentially returns the list of login audits.

To make a query, strip the insignificant whitespace (indentation and line breaks) from the XML and URL-encode it. Then assign the value to the fetchXml URL parameter and perform the following query:

https://<name>.crm4.dynamics.com/api/data/v9.1/audits/?fetchXml=%3Cfetch%20mapping%3D%22logical%22%20page%3D%221%22%3E%3Centity%20name%3D%22audit%22%3E%3Cattribute%20name%3D%22objectid%22%20alias%3D%22objectid%22%20%2F%3E%3Cattribute%20name%3D%22createdon%22%20value%3D%22FormattedValue%22%20%2F%3E%3Clink-entity%20name%3D%22systemuser%22%20to%3D%22objectid%22%20link-type%3D%22inner%22%3E%3Cattribute%20name%3D%22fullname%22%20%2F%3E%3C%2Flink-entity%3E%3Cfilter%20type%3D%22and%22%3E%3Ccondition%20attribute%3D%22action%22%20operator%3D%22eq%22%20value%3D%2264%22%20%2F%3E%3C%2Ffilter%3E%3C%2Fentity%3E%3C%2Ffetch%3E

This is pretty easy in C# with a few functions from the HttpUtility class and XDocument (see the sketch after the list below). Then a standard GET request is performed (with a bearer authorization header). The API replies with JSON that has two important properties:

  • value – contains a JSON array of returned elements
  • @Microsoft.Dynamics.CRM.fetchxmlpagingcookie – contains the pagination cookie if there is at least one page more to come, otherwise the property is not set.
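Here is a minimal sketch of that plumbing in C# / .NET 5.0. The organization host and the access token are placeholders you have to supply yourself, and Newtonsoft.Json is assumed for parsing the response; the complete implementation is in the GitHub repo linked above.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using System.Web;
using System.Xml.Linq;
using Newtonsoft.Json.Linq;

public static class FetchXmlQuerySample
{
    public static async Task Main()
    {
        // Placeholders - substitute your own organization host and OAuth2 access token.
        const string organizationHost = "https://yourorg.crm4.dynamics.com";
        const string accessToken = "<access token>";

        // The fetch query from above, loaded as XML so it can be compacted reliably.
        var fetch = XDocument.Parse(@"<fetch mapping=""logical"" page=""1"">
            <entity name=""audit"">
                <attribute name=""objectid"" alias=""objectid"" />
                <attribute name=""createdon"" value=""FormattedValue"" />
                <link-entity name=""systemuser"" to=""objectid"" link-type=""inner"">
                    <attribute name=""fullname"" />
                </link-entity>
                <filter type=""and"">
                    <condition attribute=""action"" operator=""eq"" value=""64"" />
                </filter>
            </entity>
        </fetch>");

        // DisableFormatting strips indentation and line breaks; UrlEncode makes the XML safe for the query string.
        var encodedFetch = HttpUtility.UrlEncode(fetch.ToString(SaveOptions.DisableFormatting));
        var requestUri = new Uri($"{organizationHost}/api/data/v9.1/audits/?fetchXml={encodedFetch}");

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        var json = JObject.Parse(await client.GetStringAsync(requestUri));

        // "value" holds the returned rows; the paging cookie property is only present if there are more pages.
        var rows = json["value"] as JArray;
        var pagingCookie = json["@Microsoft.Dynamics.CRM.fetchxmlpagingcookie"];
        Console.WriteLine($"Rows on this page: {rows?.Count ?? 0}");
        Console.WriteLine($"More pages available: {pagingCookie != null}");
    }
}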

When the paging cookie is present, we need to take its value, decode it (warning – it seems this has to be done twice, see the source code on GitHub!) and then embed it in all subsequent queries. The page attribute of the fetchXml should also be increased by one.

The relevant code from the GitHub repo:

/// <summary>
/// Returns the URL pointing to the next chunk of items.
/// </summary>
/// <param name="cancellationToken">The cancellation token.</param>
/// <returns>A hot running task that returns the URL to the next page.</returns>
/// <remarks>If this method returns null, then it means that there are no more pages to query.</remarks>
public async Task<Uri> GetNextPageUrlAsync(CancellationToken cancellationToken = default)
{
    if (this._parsedJsonResponse == null)
    {
        await this.ReadContentAsync(cancellationToken).ConfigureAwait(false);
    }
    
    var pagingCookie = this._parsedJsonResponse[FetchXmlPagingCookie]?.Value<string>();
    if (string.IsNullOrEmpty(pagingCookie))
    {
        return null;
    }

    cancellationToken.ThrowIfCancellationRequested();
    var pagingCookieResponseHtml = HttpUtility.HtmlDecode(pagingCookie);

    var html = XElement.Parse(pagingCookieResponseHtml);

    // The cookie is stored in the attribute "pagingcookie"
    pagingCookie = html.Attribute(PagingCookieHtmlResponse)?.Value;

    // The value must be decoded twice!
    pagingCookie = HttpUtility.UrlDecode(pagingCookie);
    pagingCookie = HttpUtility.UrlDecode(pagingCookie);

    var queryParse = HttpUtility.ParseQueryString(this._request.Query);

    // Let's take the current fetchXml from the URL and upgrade it with new cookie information
    var currentFetch = queryParse.Get(FetchXmlUrlParam);

    if (currentFetch == null)
    {
        throw new InvalidOperationException("fetchXml must be present in the query at this point.");
    }

    cancellationToken.ThrowIfCancellationRequested();
    var bodyAsXml = XElement.Parse(currentFetch);
    bodyAsXml.SetAttributeValue(PagingCookieHtmlRequest, pagingCookie);

    var page = int.Parse(bodyAsXml.Attribute(PageHtml)?.Value ?? "0");
    bodyAsXml.SetAttributeValue(PageHtml, page + 1);

    var newQuery = "?" + string.Join("&", queryParse.AllKeys.Select(paramName =>
    {
        string escapedName, escapedValue;

        if (paramName == FetchXmlUrlParam)
        {
            // if the parameter is fetchXml, we update it with a new value (encoded).
            escapedName = HttpUtility.UrlEncode(paramName);
            escapedValue = HttpUtility.UrlEncode(bodyAsXml.ToString(SaveOptions.DisableFormatting));
        }
        else
        {
            // for any other URL param just take it as-is.
            escapedName = HttpUtility.UrlEncode(paramName);
            escapedValue = HttpUtility.UrlEncode(queryParse[paramName]);
        }

        return $"{escapedName}={escapedValue}";
    }));
    
    cancellationToken.ThrowIfCancellationRequested();

    // Now build the new URI
    var uriBuilder = new UriBuilder(this._request)
    {
        Query = newQuery
    };

    return uriBuilder.Uri;
}

The rest is just a matter of standard parsing of the response stream, in a loop over the pages:

while (pageUri != null)
{
    Console.WriteLine("{0} Getting page #{1} with no more than {2} entries...", stopWatch.Elapsed, page++, perPage);
    using var response = await request.RequestAsync(pageUri);
    try
    {
        var elements = await response.GetElementsAsync<DynamicsAuditLog>();
        Console.WriteLine("{0} Received {1} entries.", stopWatch.Elapsed, elements.Count);
        allElements.AddRange(elements);
        pageUri = await response.GetNextPageUrlAsync();
    }
    catch (Exception e)
    {
        Console.WriteLine("{0} Error: {1}", stopWatch.Elapsed, e);
        return 1;
    }
}

Issues I had

  • Most of the time was spent trying to understand why the authorization with the bearer token did not work (HTTP 401). The problem was with the configuration – in my scenario, I had to make sure that the registered app was properly assigned to an application user. This is done directly in Dynamics CRM, not in the Azure Portal.
    • The gotcha: Creating the application user was difficult due to some UI changes introduced in Dynamics CRM. The settings interface is not obvious to find, and many internet sources claim it is somewhere other than where I actually found it. In my case, to find the UI I had to open the Dynamics URL followed by /Main.aspx, and from the newly opened panel find my way through the settings to set up the user correctly.
  • The paging cookie has to be URL-decoded twice. No idea why.


Change UI language in Visual Studio

The Visual Studio Installer for VS2019 automatically pulls the language pack that matches the current operating system. My Windows 10 operating system language is currently German – and one thing always keeps me wondering:

Why is the Tools menu translated to Extras in German Visual Studio?

This and a few more peculiarities (especially important when pair programming with non-German speakers) may be good reasons to switch to the good old English UI. Here is how to do it for an already installed Visual Studio 2019 instance (you can obviously apply this to any currently used language).

If the target language pack has already been downloaded…

The only thing you have to do is to locate the correct settings. Go to Extras -> Optionen -> Umgebung -> Internationale Einstellungen and select the language from the list.

If the target language pack has not been downloaded yet…

  1. Close all Visual Studio windows.
  2. Run the Visual Studio Installer. You can do this by going to Add/Remove Programs (Programme und Features) and selecting the right item. Press Ändern to bring up the setup window.
  3. Go to the second tab and enable the English language pack. You can disable the German one if you do not plan to switch back to it in the future. Leaving only the new language enabled will automatically switch VS to that language. Press Ändern to apply the changes.
  4. If you did not uninstall the previous languages first, then after restarting you may need to change to the newly downloaded language pack in the settings. Simply go to Extras -> Optionen -> Umgebung -> Internationale Einstellungen and select the language from the list.

Installing language packs from the command line

It is possible to automate installing language packs by using the following command line (executed in the directory where the installer resides, usually C:\Program Files (x86)\Microsoft Visual Studio\Installer):

vs_installer.exe modify --addProductLang En-us --path install="C:\Program Files (x86)\Microsoft Visual Studio\2019\Community" --quiet --wait 

Note: The full installation path is required for this to work. I am using VS 2019 Community installed in the default location; you may need to update the path accordingly.

Instead of En-us for American English, any valid identifier from this list works too.


Displaying installed MSI products with a PowerShell module

Based on my previous post, I created a small project on my GitHub and adjusted the existing codebase to provide a simplistic PowerShell module. The code has been reorganized into proper units with exported functions. Here is a sample usage:

$Header = @"
&amp;lt;style&amp;gt;
TABLE {border: 1px solid gray collapse; }
TH { border: 1px solid gray; padding: 6px 4px; background-color: #eaeaea; }
TD { border: 1px solid gray; padding: 4px; }
&amp;lt;/style&amp;gt;
"@
Import-Module MsiClient
Get-MsiClientPackage | where { $_.Publisher -contains 'Microsoft Corporation' } | sort-object -Property ProductName | ConvertTo-Html -Property ProductCode,PackageName,ProductName,VersionString,Language,Publisher -Head $Header | Out-File c:\temp\test.html

The above snippet produces an HTML-based report showing all Microsoft products installed on the current system.

There are still lots of TODOs, which I am going to cover soon, for example:

  • More methods supported (install, uninstall, repair etc.)
  • Better exception handling
  • Support for -WhatIf and -Confirm switches

In any case, my design goal is to keep the syntax similar to the functions available in the AppVClient module, so that ideally, for basic scenarios, they have a 1:1 mapping to their MSI counterparts.

Link to my GitHub (the project is licensed under MIT):
https://github.com/marcinotorowski/PowerShellMsiClient


Enumerating installed MSI products with PowerShell and msi.dll

If you were ever wondering how to properly read the list of installed MSI software, there are two popular choices:

  • Querying the uninstall registry keys (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall), filtering out non-MSI entries and outputting the rest
  • Using Win32_Product from WMI

Both approaches have their pros and cons. Querying the registry is straightforward on its own, but it requires awkward manipulations and accesses the data that backs the Add/Remove Programs applet, not necessarily the Windows Installer API, which uses its own complex registration. Additionally, it may not work correctly with different installation contexts (user/machine), and you may have to query two places to get both x86 and x64 installations on an x64 system.
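As a rough illustration of that first approach (not the script this post builds up to), here is a hypothetical C# sketch that enumerates the Uninstall key in both registry views and keeps only the entries that look like Windows Installer registrations:

using System;
using Microsoft.Win32; // On .NET 5 this requires the Microsoft.Win32.Registry package (Windows only).

public static class UninstallRegistrySample
{
    public static void Main()
    {
        // On an x64 system the 32-bit installations live in the WOW6432Node view, so both
        // registry views have to be inspected (per-user installs under HKCU are not covered here).
        foreach (var view in new[] { RegistryView.Registry64, RegistryView.Registry32 })
        {
            using var hklm = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, view);
            using var uninstall = hklm.OpenSubKey(@"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall");
            if (uninstall == null)
            {
                continue;
            }

            foreach (var subKeyName in uninstall.GetSubKeyNames())
            {
                using var entry = uninstall.OpenSubKey(subKeyName);

                // A common heuristic for "this entry was written by Windows Installer":
                // the WindowsInstaller value is set to 1.
                if (entry == null || !Equals(entry.GetValue("WindowsInstaller"), 1))
                {
                    continue;
                }

                Console.WriteLine($"{subKeyName} {entry.GetValue("DisplayName")} {entry.GetValue("DisplayVersion")}");
            }
        }
    }
}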

On the other hand, while the WMI query is also pretty straightforward (see https://msdn.microsoft.com/en-us/library/aa394378(v=VS.85).aspx?f=255&MSPPError=-2147217396), it has one really big drawback: running it is painfully slow, because Windows Installer checks the integrity of each entry and triggers an appropriate action (for example a repair) if necessary.
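For comparison, here is a minimal C# sketch of the WMI route (assuming the System.Management package on .NET 5) – deceptively short, which is exactly why its slowness is so easy to run into by accident:

using System;
using System.Management; // On .NET 5 this requires the System.Management NuGet package.

public static class Win32ProductSample
{
    public static void Main()
    {
        // Looks trivial, but enumerating Win32_Product forces Windows Installer to validate
        // every registered product and can even trigger repairs - hence the poor performance.
        using var searcher = new ManagementObjectSearcher("SELECT IdentifyingNumber, Name, Version FROM Win32_Product");
        foreach (ManagementBaseObject product in searcher.Get())
        {
            Console.WriteLine($"{product["IdentifyingNumber"]} {product["Name"]} {product["Version"]}");
        }
    }
}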

So to have a solution that is fast, reliable and free of side effects, you may go for a third option, which is more complex, but once set up can be reused not only for querying but for the whole management of MSI-based installations. And so this blog post is about P/Invoking the native msi.dll to get the results returned by the true Windows Installer API.
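The technique revolves around two msi.dll exports, MsiEnumProducts and MsiGetProductInfo. Below is a minimal C# P/Invoke sketch of just that core – illustrative only, not the script from this post; in PowerShell the same signatures are typically compiled on the fly with Add-Type and then called the same way.

using System;
using System.Runtime.InteropServices;
using System.Text;

public static class MsiEnumerationSample
{
    private const uint ErrorSuccess = 0;
    private const uint ErrorMoreData = 234;

    // UINT MsiEnumProducts(DWORD iProductIndex, LPWSTR lpProductBuf)
    [DllImport("msi.dll", CharSet = CharSet.Unicode)]
    private static extern uint MsiEnumProducts(uint iProductIndex, StringBuilder lpProductBuf);

    // UINT MsiGetProductInfo(LPCWSTR szProduct, LPCWSTR szProperty, LPWSTR lpValueBuf, ref DWORD pcchValueBuf)
    [DllImport("msi.dll", CharSet = CharSet.Unicode)]
    private static extern uint MsiGetProductInfo(string szProduct, string szProperty, StringBuilder lpValueBuf, ref uint pcchValueBuf);

    public static void Main()
    {
        // A product code is a GUID in registry format: 38 characters plus the terminating null.
        var productCode = new StringBuilder(39);

        for (uint index = 0; MsiEnumProducts(index, productCode) == ErrorSuccess; index++)
        {
            var code = productCode.ToString();
            Console.WriteLine($"{code} {GetProductProperty(code, "ProductName")} {GetProductProperty(code, "VersionString")}");
        }
    }

    private static string GetProductProperty(string productCode, string propertyName)
    {
        // First ask for the required buffer size (the returned count excludes the terminating null)...
        uint length = 0;
        var result = MsiGetProductInfo(productCode, propertyName, new StringBuilder(0), ref length);
        if (result != ErrorSuccess && result != ErrorMoreData)
        {
            return null;
        }

        // ...then call again with a buffer that has room for the terminating null.
        var buffer = new StringBuilder((int)length + 1);
        length = (uint)buffer.Capacity;
        return MsiGetProductInfo(productCode, propertyName, buffer, ref length) == ErrorSuccess ? buffer.ToString() : null;
    }
}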

This post may be too technical if you have never programmed in C/C++ or C#. If you just want the results without understanding how to implement them on your own, scroll to the bottom – the full content of the PowerShell script is there.
