I'm Joris "Interface" de Gruyter. Welcome to my Code Crib.

All Blog Posts - page 2

Page: 2 of 15

Feb 27, 2021 - Use The New Packaging in the Legacy Build Pipeline

Filed under: #daxmusings #bizapps

The legacy pipeline from the build VM has its own PowerShell script that generates the packages. However, it always puts the F&O platform version into the package file name, which can make it more difficult to use release pipelines or to include ISV licenses in your packages: the version number changes with each update, requiring you to update your pipeline settings (and to find out the actual build number to use). Continue reading below or watch the YouTube video to learn how to swap the packaging step from the legacy pipeline for the Azure DevOps task, which lets you specify your own name for the deployable package zip file. You can find the official documentation on the packaging step here.
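
The Azure DevOps task takes the package name as a parameter, which solves this outright. If you can't swap the step just yet, one stopgap (my sketch here, not from the post) is a short PowerShell step at the end of the build that copies the versioned zip to a fixed name; the drop folder and output name below are assumptions, so adjust them to wherever your legacy pipeline puts its packages:

# Sketch: copy the versioned package to a fixed, predictable name so release
# pipelines don't break after every platform update.
# $dropFolder and 'DeployablePackage.zip' are assumptions; adjust to your setup.
$dropFolder = "$env:BUILD_ARTIFACTSTAGINGDIRECTORY\Packages"

# The legacy script emits names like AXDeployableRuntime_7.0.4641.16233_123.zip
$package = Get-ChildItem -Path $dropFolder -Filter 'AXDeployableRuntime_*.zip' |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1

Copy-Item -Path $package.FullName -Destination (Join-Path $dropFolder 'DeployablePackage.zip')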

  Read more...

Jan 27, 2021 - ISV Licenses in Packages

Filed under: #daxmusings #bizapps

ISV licenses for Dynamics 365 F&O can only be applied using deployable packages. There are ISV license packages that contain only a license, and there are combined packages that contain both the binaries and the license. But now, with all-in-one packages on self-service environments, you can only apply the license as part of an all-in-one package. So what are your options? Check out my YouTube video and/or read on for more details.

  Read more...

Jan 23, 2021 - Updating The Legacy Pipeline for Visual Studio 2017

Filed under: #daxmusings #bizapps

With the upcoming April 2021 release, support for Visual Studio 2015 will be dropped. If you’re building your code using a build VM deployed from LCS, you’re using the legacy pipeline. You will have to manually update your build pipeline tasks to use the new version. The steps are fairly simple and outlined in this official docs article. I have a quick video on YouTube to walk you through this as well. There is one little flag that could trip you up, however.

  Read more...

Jan 18, 2021 - Including ISV Binaries in Your Package

Filed under: #daxmusings #bizapps

Many ISVs supply their Dynamics 365 Finance / Supply Chain solutions in a deployable package, which only contains binaries. With the current enforcement (“all-in-one package”) of a long-standing best practice to deploy all code together all the time, some customers are only now faced with figuring out how to “repackage” an ISV’s binaries into their own package. In this post I will outline a few gotchas in addition to the official documentation, for both the legacy build pipeline and the new build pipeline. You can also watch a quick overview video I made here on YouTube.

  Read more...

Dec 17, 2020 - The Making Of

Filed under: #tech

Welcome to my new site. I've been wanting to blog more, but also to cover topics unrelated to Microsoft Dynamics. I wanted a place to put some of the game development stuff I do. And as I'm considering getting into some regular streaming, I want a landing place for anyone checking me out. So, here we are. I started daxmusings.codecrib.com in 2010 on Blogspot, aka Blogger, and attached the custom domain at a later time, keeping the daxmusings subdomain. I've had stuff on www.codecrib.com on and off, never very interesting, and I've hosted it in several different ways over the years - most recently as a GitHub Pages site with a custom domain attached.

  Read more...

Oct 31, 2019 - Pushing, Dragging or Pulling an Industry Forward

Filed under: #daxmusings #bizapps

Quite a few years ago, in my previous job when I was still an MVP, I did an online webinar for the AXUG (in 2014) called "Putting Software Engineering back in Dynamics AX". Admittedly it was somewhat of a rant / soap box type of talk; I guess "food for thought" would be a more optimistic characterization. I did try to inject some humor into the otherwise ranty slides by adding some graphics. At the time we were building out our X++ development team and were heavily invested in TFS and automation, and I was very keen on sharing our lightbulb moments and those "why did we only start doing this now" revelations.

Fast forward 5 years to a new generation of technology and a shift to cloud. In fairness, many more people are engaged in some of these topics now, because the product finally has features out of the box to do builds, use source control without tons of headaches and setup, etc. But contrary to the advice on one of the original slides from 2014 - "Bring software engineering into AX, not the opposite" - it sort of feels like exactly the opposite has happened: people projecting their AX processes onto software engineering processes, sometimes ending up with procedures and technology for their own sake, not actually solving any problems and sometimes even creating more. But hey, they can say they ticked another checkbox on the list. I have stories of customers with messed-up code in production because someone set up branching, having been told that's a good thing to have, yet nobody knew what that meant or how to use it. So code was being checked into branches left and right, merged in whichever direction. Chaos. A perfect example of implementing something without a good reason or understanding. On the flip side, we have customers calling us up because they "redeployed" their dev VM and want to know how they can get a clean copy of their code from production back into their VM. Now, part of that is legacy thinking and not understanding the technology change. But honestly, that was never a good thing in older versions either.

Anyway, that brings us to my topic du jour. As you may or may not have heard and read, we're working on elevating the developer tools further. We'll become more standard Visual Studio, more standard Azure DevOps. This is all great news, as it will allow X++ developers to use more of the existing tools out there that work with any standard .NET language. The problem is not that we'll be forcing people to use advanced tools they don't know how to use - they can still choose not to use source control or build automation. I'm more worried about the people using all these new tools and not understanding them. What if in the future we start supporting Git? Will our support team be overwhelmed with people stuck on branching, merging, PRs, rebasing and all the great but complex features of decentralized source control? In the history of our product, we've never drawn a clear line between supporting the technology (i.e. the tools are compatible) and supporting the use of that technology (sorry your production code got messed up, go get some training on Git branching and good luck recovering your production environment). But we will have to. How about other areas, like Power BI, Power Apps, etc.? Yes, they are supported and will be integrated further, but will Dynamics 365 support answer your usage questions?

I’ve had frank discussions with developers (that I personally know), where I basically tell them “the fact you’re asking me these questions tells me you shouldn’t be doing this”. But that’s not an attitude we can broadly apply to our customer base.

So I ask YOU, dear audience. Where and how can we draw a line of supportability?

  Read more...

Oct 11, 2019 - Debugging woes with symbols: bug or feature?

Filed under: #daxmusings #bizapps

I’ve struggled with this myself for a while during the betas of “AX7”. Sometimes, symbols aren’t loaded for your code and your breakpoints aren’t hit. It’s clear that the Dynamics 365 option “Only load symbols for your solution” has something to do with it, but still there’s strange behavior. It took me a few years at Microsoft for someone to explain the exact logic there. Since I’ve been sitting on this knowledge for a while and I’ve recently ran into some customer calls where debugging trouble was brought up, I realized it’s overdue for me to share this knowledge.

Summary: it’s in fact a feature, not a bug. But I would like to see this behavior changed assuming we don’t introduce performance regressions.

There’s a piece of compiler background information that is not well understood which is actually at the root of this problem. We all know there are two ways to compile your code: from the Dynamics 365 “Full build” menu, or from the project. The project build, if you right-click on your project, has two options: build and rebuild. Now, the “rebuild” feature does NOT do the same thing as the full build menu - and that is the crux of the issue here. Both build and rebuild from the project only compile the objects in your project. Rebuild will force a build of everything in your project but not the whole package it belongs to. To do this, our Visual Studio tools and the compiler make good use of netmodules for .NET assemblies. Think of a netmodule as a sub-assembly of an assembly, I guess.

Now, the point is this. The "load symbols only for your solution" option loads only the symbols of the binaries for the objects in your project - i.e. the netmodules. But a full build from the Dynamics 365 menu doesn't produce netmodules: you have NO separate symbols for just the objects in your project, only the full binary of the package. As a result, after doing a full build and debugging with the "symbols for solution only" option turned on, your breakpoints will NOT be hit, because their symbols were never loaded.

I think we should change this option to work more like "load symbols for the packages containing your solution's objects", or something to that effect. We'll have to see if that affects performance for large packages in a significant way, since it would now load all the symbols for those packages - and that is ultimately why this feature was introduced (see? it's a feature!). Worst case, we may need a new option so you can choose between the old behavior and the more inclusive behavior…

I’d love to hear your thoughts on this, here or on Twitter @JorisdG.

  Read more...

Mar 25, 2019 - Repost: Pointing Build Definitions to Specific VMs (agents)

Filed under: #daxmusings #bizapps

Since the AXDEVALM blog has been removed from MSDN, I will repost the agent computer name post here AS-IS, until we can get better official documentation. Original post: October 20, 2017


We’ve recently collaborated with some customers who are upgrading from previous releases of Dynamics 365 to the recent July 2017 application. These customers typically have to support their existing live environment on the older application, but also produce builds on the newer application (with newer platform).

Currently the build agent is not aware of the application version available on the VM. As a result, Visual Studio Team Services (VSTS) will seemingly randomly pick one or the other VM (agent) to run the build on. Obviously this presents a challenge if VSTS compiles your code on the wrong VM - i.e. on the wrong version of the application and platform. We are reviewing what would be the best way to support version selection, but in the meantime there is an easy way to tie a build definition to a specific VM.

First, in LCS go to your build environment and on the environment details page, find the VM Name of the build machine. In this particular example below, the VM Name is “DevStuffBld-1”.

Next, go to VSTS and find the build definition you wish to change. Note that if you have more than one version you're building for, you will want more than one build definition - and to point each to its respective VM. To tie a build definition to a specific VM, edit the build definition and find the Options tab. Under Options you will find a section of parameters called Demands. Demands are matched against specific values set up on the agent in VSTS (you can do this in the Agent Queue settings), and the agent also picks up all environment variables on the VM it runs on. You will notice that all build definitions already check for a variable called DynamicsSDK to be present, to ensure the build definition runs only on agents where we have set this "flag", if you will. Since each VM already has an environment variable called COMPUTERNAME, we can add a demand for COMPUTERNAME to equal the name of our build VM. So for the example of the build VM above, we can edit our build definition and click +Add to add the demand COMPUTERNAME equals DevStuffBld-1.
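
Because the demands are matched against the machine's environment variables, you can verify on the build VM itself what the agent will advertise. A quick PowerShell check (nothing here is specific to the Dynamics tooling):

# Run on the build VM: the agent publishes machine environment variables as
# capabilities, so these are the values your demands are matched against.
Write-Output "COMPUTERNAME = $env:COMPUTERNAME"   # "DevStuffBld-1" in the example above
Write-Output "DynamicsSDK  = $env:DynamicsSDK"    # the flag the stock build definitions demand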

Save your build definition and from now on your build will always run on the right VM/agent.

  Read more...

Feb 19, 2019 - Repost: Enabling X++ Code Coverage in Visual Studio and Automated Build

Filed under: #daxmusings #bizapps

Since the AXDEVALM blog has been removed from MSDN, I will repost the code coverage blog post here AS-IS (other than fixing some wrong capitalization in the XML code), until we can get better official documentation. Note that after this was published, I received a mixed response from developers: for many it worked, for others it did not work at all, no matter what they tried… I have not been able to spend more time investigating why it doesn't work for some people. Original post: March 28, 2018


To enable code coverage for X++ code in your test automation, a few things have to be set up. Typically some more tweaking is needed, since you will likely be using some platform/foundation/appsuite objects and code and don't want code coverage to show up for those. Additionally, the X++ compiler generates some extra IL to support certain features, which can be ignored. Unfortunately there is one feature that may throw off your results; we'll talk about this further down.

One important note: Code Coverage is a feature of Visual Studio Enterprise and is not available in lower SKUs. See this comparison chart under Testing Tools | Code Coverage.

To get started, you can download the sample RunSettings file here: CodeCoverage. You will need to update this file to include your own packages (= "modules" in IL terminology). At the top of the file, you will find the following XML:

<ModulePaths>
    <Include>
        <ModulePath>.*MyPackageName.*</ModulePath>
    </Include>
    <Exclude>
        <ModulePath>.*MyPackageNameTest.*</ModulePath>
    </Exclude>
</ModulePaths>

You will need to replace "MyPackageName" with the name of your package. You can add multiple lines here and use wildcards, of course. You could add Dynamics.AX.* but that would then include any and all packages under test (including Application Suite, for example). This example also shows how to exclude a package explicitly - in this case, the test package itself. If you have multiple packages to include and exclude, you would enter them this way:

<ModulePaths>
    <Include>
        <ModulePath>.*MyPackage1.*</ModulePath>
        <ModulePath>.*MyPackage2.*</ModulePath>
    </Include>
    <Exclude>
        <ModulePath>.*MyPackage1Test.*</ModulePath>
        <ModulePath>.*MyPackage2Test.*</ModulePath>
    </Exclude>
</ModulePaths>

To enable code coverage in Visual Studio, open the Test menu, select Test Settings > Select Test Settings File, and pick your settings file. You can then run code coverage from the Test > Analyze Code Coverage menu, selecting All Tests or Selected Tests (this is your selection in the Test Explorer window). You can open the code coverage results and double-click any of the lines, which will open the code and highlight the coverage.

To enable code coverage in the automated build, edit your build definition. Click on the Execute Tests task, and find the Run Settings File parameter. If you have a generic run settings file, you can place it in the C:\DynamicsSDK folder on the build VM, and point to it here (full path). Optionally, if you have a settings file specific for certain packages or build definitions, you can be more flexible here. For example, if the run settings file is in source control in the Metadata folder, you can point this argument to “$(Build.SourcesDirectory)\Metadata\MySettings.runsettings”.
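
If you want to sanity-check a runsettings file before wiring it into the build, you can run vstest.console.exe against a test assembly on a development VM. A rough sketch; all paths here are illustrative and vary by Visual Studio edition and machine layout:

# Illustrative only: run a test assembly with the same runsettings file locally.
# /EnableCodeCoverage turns on the coverage data collector; the runsettings file
# supplies the module include/exclude filters shown above.
& "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe" `
    "K:\AosService\PackagesLocalDirectory\MyPackageNameTest\bin\Dynamics.AX.MyPackageNameTest.dll" `
    /Settings:"C:\DynamicsSDK\CodeCoverage.runsettings" `
    /EnableCodeCoverage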

The biggest issue with this is the extra IL code that our compiler generates, namely the pre- and post-handler code. This is placed inside every method, and is thus evaluated by code coverage even though your X++ source doesn't contain this code. As such, most methods will never get 100% coverage. If a method has the [Hookable(false)] attribute (which makes the X++ compiler not add the extra IL), or if the method actually has pre/post handlers, the coverage will be fine. Note that the Chain-of-Command logic the compiler generates is nicely filtered out.

  Read more...

Jan 18, 2019 - Azure DevOps Release Pipeline

Filed under: #daxmusings #bizapps

Welcome to 2019, the year of the X++ developer!

Today marks a great day with the release of the first Azure DevOps task for D365 FinOps users. Since documentation is still underway, I wanted to supplement the official blog post with some additional info to help guide you through the setup. The extension can be installed from here: https://marketplace.visualstudio.com/items?itemName=Dyn365FinOps.dynamics365-finops-tools

The LCS Connection

  • If your LCS project is hosted in the EU, you will need to change the "Lifecycle Services API Endpoint". By default it points to https://lcsapi.lcs.dynamics.com, but if you log into LCS and the URL for your project shows "https://eu.lcs.dynamics.com", you will need to change this API URL to also include EU, like so: https://lcsapi.eu.lcs.dynamics.com
  • App registration: I encourage you to use the preview setup experience ("App registrations (Preview)"). Add a "new registration" for a native application; I selected "accounts in this organizational directory only (MYAAD)". For the redirect URI you can put anything for a native application, typically http://localhost, and in the preview experience select "Public client (mobile & desktop)" to indicate this is a native application.

Thanks to Marco Scotoni for pointing out that to find the API to give permissions to, you just go to the "APIs my organization uses" tab.

The Task

  • Create the new connection using the app registration as described above
  • LCS Project Id is the “number” of your project. You can see this in the URL when you go to your project on the LCS website, for example https://lcs.dynamics.com/V2/ProjectDashboard/1234567. I’m hoping this can eventually be made into a dropdown selection.
  • File to upload… The build currently produces a ZIP file with a name that contains the actual build number, and that is not configurable there (you'd have to edit the PowerShell script for that). Until that changes, there's an easy way to deal with it. Since your release pipeline has the build pipeline's output as an artifact, you can grab the build's build number. So, use the BROWSE button to select the build drop artifact, but then replace the build number with the $(Build.BuildNumber) variable. For example, on my test project this resulted in the following file path: $(System.DefaultWorkingDirectory)/BuildDev/Packages/AXDeployableRuntime_7.0.4641.16233_$(Build.BuildNumber).zip. If your AX build is not your primary artifact, you can use the artifact alias, like $(Build.MyAlias.BuildNumber); you can find this info in the release pipeline variables documentation. (See also the sketch after this list.)
  • LCS Asset Name and Description are optional, but I would recommend setting at least the name. For example, I set LCS Asset Name to "$(Release.ReleaseName)" and LCS Asset Description to "Uploaded from Azure DevOps from build $(Build.BuildNumber)".
  • If using a hosted agent, make sure to use the latest host ("Hosted VS2017").
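
As an alternative to splicing $(Build.BuildNumber) into the file path, a release-stage PowerShell step could resolve the zip with a wildcard and hand the full path to later tasks through a variable. A sketch, assuming the artifact layout from the example above; the PackagePath variable name is made up:

# Resolve the deployable package without hardcoding the version or build number.
$zip = Get-ChildItem -Path "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\BuildDev\Packages" `
    -Filter 'AXDeployableRuntime_*.zip' | Select-Object -First 1

# Expose the resolved path to subsequent tasks as $(PackagePath).
Write-Output "##vso[task.setvariable variable=PackagePath]$($zip.FullName)"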

Happy uploading!!

  Read more...
