Increasing Visibility to Ivanti Application Control Events with Xtraction


Ivanti’s Application Control has great built-in auditing features that provide insight into the actions it controls. Although historical auditing is useful, it can become overwhelming and noisy.

Common Auditing Events:

  • Applications allowed/denied execution
  • Applications running under elevated privileges
  • Self-elevation of applications to run as Administrator
  • Policy change requests

The key is to separate actionable events from informational events and present that information in a visible, readable format. Depending on the size of the environment and the number of devices reporting in, the sheer amount of data can become overwhelming.

Ivanti’s Xtraction is a powerful dashboard reporting tool that produces charts and tables in an organized format for better consumption. Xtraction can integrate with a plethora of products, including Application Control, to produce just about any imaginable report.

How Application Control Auditing Works Out of the Box

Application Control utilizes a configuration deployed to endpoints that determines which programs, websites, and actions a user can and cannot access. Each of these access controls, whether an allow or a deny, can be audited to help refine policy and configuration. There are a number of defined audit events that can be enabled depending on the information that needs to be captured; some events produce more traffic than others, so be mindful of what is captured and how long the events are retained.

Trusted ownership is a large part of Application Control. Trusted ownership only allows apps that were introduced by trusted administrators; the list of trusted administrators can be modified to suit any environment. Trusted ownership helps prevent unwarranted and unwanted execution of code, whether it’s good or bad. This code could be introduced into the environment from software a user downloaded or via other means.

Figure 1 – Denied Execution Template 

Upon execution, since the software was not introduced by a trusted owner or explicitly defined in the policy, the user will get an execution denied prompt, as seen in Figure 1. Coupled with auditing, this tells you exactly who tried to execute untrusted software and what they were trying to execute.

Xtraction Integration with Application Control

Xtraction is a reporting software that uses Data Sources to communicate with databases for information extraction. Each Data Source establishes its own database connection which allows for individual, or compound reporting.

Xtraction uses Dashboards to present information in a clean format and utilizes graphs and charts depending on business needs; Xtraction can also create Documents and Reports.

Dashboard features:

  • Ability to customize components/multiple datasets into charts, graphs, or lists
  • Drill down for more in-depth data visibility
  • Filter based on specific criteria
  • View real-time or historical data
  • Generate and schedule reports for email delivery

Figure 2 – Event Monitor

All of these mechanisms can be used together to have a true understanding of the environment.

Auditing is an important part of Application Control. Each audited event is useful for tweaking the configuration, for example when a new item needs to be allowed or denied. Auditing also provides insight into the actions being performed on endpoints within the environment.

Xtraction can be used to report on the auditing produced by Application Control, coupled with a number of different charts or graphs depending on the need; Figure 2 shows an example of an Xtraction Dashboard for Application Control auditing events.

Figure 2 uses the following components and features to quickly display data for Application Control events:

  • Pivot Charts
    • Displays filtered event numbers compared with event description and user
  • Time Chart
    • Displays the number of events within the past week
  • Filters for specific event numbers that pertain to Application Control events

For optimal reporting, this Dashboard could be scheduled and sent out via email weekly to stay up to date on the events being produced by Application Control.


After this brief overview of Xtraction and Application Control, hopefully you have a better understanding of how they can be used together and the benefits they provide. Application Control is a very useful security tool that provides powerful auditing capabilities.

By leveraging Xtraction, the audited events can be turned into customizable Dashboards in an organized format that will help you refine Application Control policies and create a better user experience. Each Dashboard can be saved for reuse, sent out regularly via email, or customized at any time if the information needs to change.

Zach Thurmond
IT Consultant
Critical Design Associates

LinkedIn Profile

Tips and Tricks When Using Ivanti Environment Manager’s “GeoSync”

A while back, you may recall that we published a blog about GeoSync when it was still a fairly new feature within the Ivanti User Workspace Manager (UWM) Suite.

In case you didn’t see it, here is the link. Since we published that blog, GeoSync has certainly improved and, when implemented correctly, can do a great job keeping the Personalization database synchronized between multiple datacenters.

The challenge with GeoSync is in getting it set up “correctly”… and no, unfortunately it’s not always as simple as following the instructions. For many customers, the out-of-the-box configuration settings within GeoSync don’t get the job done.

Before we get into that, let me address another challenge you might encounter when setting up GeoSync. The “ConfigureGeoSync.ps1” PowerShell script included with the UWM install does a great job enabling GeoSync for you when all of your Subscribers are built and ready to be synchronized.

But… what happens when you need to set up Subscriber#1 today but cannot add Subscriber#2 until next week?  Well, the “ConfigureGeoSync.ps1” script isn’t going to help you ADD a subscriber and, in fact, will complain that the Publisher already exists and will exit on you.

I’m a little surprised that Ivanti didn’t just add a second script to deal with adding Subscribers, but it turns out to be rather easy once I dug through the GeoSync PowerShell cmdlets.

Here’s the procedure for adding a Subscriber to an EM environment that already has one Publisher and one Subscriber:

  1. Go to a Personalization Server that is connected to the PUBLISHER database, log in, and run PowerShell as an Administrator.
  2. Type “Import-Module AppSenseInstances” and press Enter.
  3. Type “Get-ApsInstance” and press Enter.
     • Copy the InstanceID from the output for Personalization Server (usually the lower one).
  4. Type “Import-ApsInstanceModule -InstanceId <paste in the ID you just copied>” and press Enter.
  5. Now your Personalization Server instance is loaded, so you can start the commands to add a GeoSync Subscriber.
     • Type “Add-EMPSSubscriber” and press Enter.
     • You will be prompted to enter the SubscriberServer. This is the database server you’d like to add to GeoSync (UWM03 in the screenshot below).
     • Next will be the database name on the Subscriber database server. This is whatever name you created for the Personalization DB when you did the install. In this case, we are using the default “PersonalizationServer”.
     • When you press Enter, you may be prompted for credentials. This will need to be an account that has at least db_owner permissions on the Personalization DB.
     • The PublisherServer is now requested (SQL01 in the screenshot below).
     • Finally, the PublisherDatabase name is needed (“PersonalizationServer” in this case).
     • Press Enter.
  6. Now let’s check to see if the new Subscriber is there.
     • Type “Get-EMPSSubscribers” and press Enter.
     • Enter the PublisherServer (in this case SQL01) and press Enter.
     • Enter the PublisherDatabase (in this case PersonalizationServer) and press Enter.
     • You should now see the new Subscriber listed.
  7. Open the EM Console on the Publisher server.
     • Go to Manage > GeoSync.
  8. You should see the new Subscriber in the window as “Unassigned”. You can now assign the new Subscriber to a Personalization Group and set a Sync schedule.

Not a terribly painful process, but certainly more complex than using the PS1 script.
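For reference, the whole procedure can be condensed into a short PowerShell sketch. The cmdlet names are the ones used in the steps above; the server and database names are the examples from this walkthrough, and the non-interactive parameter names are assumed from the interactive prompts, so check `Get-Help Add-EMPSSubscriber` in your environment before relying on them:

```powershell
# Run elevated on a Personalization Server connected to the PUBLISHER database
Import-Module AppSenseInstances

# List instances and note the InstanceID for Personalization Server
Get-ApsInstance

# Load the Personalization Server instance (replace with your InstanceID)
Import-ApsInstanceModule -InstanceId <InstanceID>

# Add the new Subscriber; run without parameters to be prompted instead
Add-EMPSSubscriber

# Verify the Subscriber is registered (SQL01 / PersonalizationServer are
# the example Publisher names from this post)
Get-EMPSSubscribers
```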

Here’s where we get back to the “tuning” needed to make GeoSync work for some customers. If the bandwidth between the Publisher and the Subscribers is robust and the network latency is nice and low, the out-of-box configuration may be fine for you.

When conditions are not optimal between the databases, you may run into a recurring problem where Synchronization Status will show as “Synchronization Incomplete”.

When this problem rears up, it usually seems to stem from the timeout values within the “BackgroundService.config” (typically located in C:\Program Files\AppSense\Environment Manager\Personalization Server\BackgroundService).

If you are getting incomplete synchronizations, one of the first things Ivanti Support tries is doubling some of the values in that “BackgroundService.config” file. Here are the ones you’ll need to modify (should be lines 162 to 172 in the file):

<add key="GeoSyncAgentRetryCount" value="4" />

<!-- Wait before retrying deadlock -->

<add key="GeoSyncAgentRetryWaitMs" value="600" />

<!-- Wait before retrying non-deadlock -->

<add key="GeoSyncAgentNonDeadlockRetryWaitMs" value="10000" />

<!-- Execution timeout for geo sql commands -->

<add key="GeoSyncAgentDefaultCommandTimeoutSecs" value="180" />

<!-- Reconnect tries after connection lost -->

<add key="GeoSyncAgentReconnectTries" value="6" />

<!-- Delay before attempting reconnect -->

<add key="GeoSyncAgentReconnectDelayMs" value="10000" />

Once you’ve finished modifying the file with the new values, you will need to restart the Ivanti Background Service on the UWM server for the changes to take effect.
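The restart can be done from the same elevated PowerShell session; the wildcard below matches the service display name as it appears in services.msc, since older installs may still be branded “AppSense” rather than “Ivanti”:

```powershell
# Restart the background service so the new timeout values are picked up
Get-Service -DisplayName "*Background Service*" | Restart-Service -Force
```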

The next step is to force a sync and see what happens. Go back to your EM Console (on a Publisher) and go to Manage > GeoSync. Then click the little ellipsis on the right of one of the Subscribers and select Synchronize.

If after some time, you get “Synchronization Complete”, there’s a good chance you’ve got it working. You’ll want to check each Subscriber to make sure the settings worked for all of them.

If you still get “Synchronization Incomplete” for any of your Subscribers, you can try increasing the values even more. There is also another modification you can try (again a modification of the “BackgroundService.config” file).

This change is detailed by Ivanti here:

Bonus tip: Sometimes, when you have a new, Uninitialized Subscriber, you may get a failure on that initial Synchronization. Try doing a “Configuration-only synchronization” first and THEN doing a full Synchronization and see if that solves the problem.

Ed Webster
Ivanti Practice Lead
Critical Design Associates

LinkedIn Profile

Automating Lab Buildouts with XenServer PowerShell – Part 4 Roles, Features, and Other Components

Part 1 – Understanding the Requirements
Part 2 – Creating a Custom ISO
Part 3 – Unlimited VM Creation
>>Part 4 – Roles, Features, and Other Components


Creating an automated lab has its benefits, but what about the additional configuration of roles and features after all the servers are built? Building out all of these components can take some time, time that you may not have.

For this reason, AXL has the functionality to add a small subset of additional roles and features to any server that was created. The roles and features that can currently be installed and configured with AXL are Active Directory Domain Services (AD DS), Active Directory Certificate Services (AD CS), and Distributed File System (DFS).

It’s important to note that you can only configure these additional roles and features if the custom ISO you created in Part 2 includes XenServer Tools. Without XenServer Tools in the ISO, there is no way to grab the server’s IP address after installation.

Upon completion of server creation, you will be prompted whether or not you want to configure additional roles and features. If you select yes, you will get a prompt as shown in Figure 1. If you choose to install any of the additional roles and features, the only requirement is AD DS, which is automatically checked and grayed out on the component selection form; everything else is optional.

Each of the other roles and features requires the server to be part of a domain, which is why AD DS is required. The total additional completion time will vary with the selected roles and features and with how large the buildout is.

Figure 1 – Component

AD DS Buildout

Upon selecting to build out additional roles and features, you are required to configure AD DS. The complete configuration includes a mandatory AD DS configuration and an optional User, Group, and OU configuration. Note that if at any point during the configuration of any form you wish to go back and reconfigure something, you can do so by selecting the Previous button, if present.

The configuration for AD DS closely resembles the normal configuration you would go through directly on the server; however, this form also includes settings you would normally handle before domain creation, notably the IP configuration, as seen in Figure 2. Starting at the top, you will need to configure the local administrator username and password (configured when making the custom ISO), the domain name, and the safe mode password.

In the next section, you will notice a large list box on the left with all the servers you created in the previous form. Each server will need to be configured with an IP address, default gateway, subnet mask, and DNS server(s), which can be done by selecting each server individually from the list box. The DNS server configuration is important when joining a server to the domain; you will want at least one domain controller IP among the DNS servers for proper functionality. As you fill in the text boxes for each server, an array is simultaneously filled with the input to allow complete control over the configuration.

Figure 2 – Domain Buildout

Below is a code snippet showing how the IP configurations are actually changed.

Function ChangeIPAddresses {
    foreach($XenVMServer in ($Global:AllCreatedServers | sort)) {
        #Define necessary parameters for IP configuration
        $ConnectionPassword = ConvertTo-SecureString -AsPlainText -Force -String $LocalPasswordTextBox.Text
        $ConnectionCreds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "$($Global:OldIPAddresses[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)])\$($LocalUsernameTextBox.Text)",$ConnectionPassword
        $NewIPAddress = $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)]
        $PrefixLength = Convert-IpAddressToMaskLength $Global:SubnetMasks[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)]
        $DefaultGateway = $Global:DefaultGateways[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)]
        $DNSServers = "$($Global:PrimaryDNSServers[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)]),$($Global:SecondaryDNSServers[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)])"
        Invoke-Command -ComputerName $Global:OldIPAddresses[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)] -Credential $ConnectionCreds -ScriptBlock {
            param ($NewIPAddress, $PrefixLength, $DefaultGateway, $DNSServers)
            #Define the original IP address
            $OriginalIPAddress = ((Get-NetIPConfiguration).IPv4Address).IPAddress
            #Set the DNS Servers
            Set-DnsClientServerAddress -InterfaceAlias (Get-NetIPConfiguration).InterfaceAlias -ServerAddresses $DNSServers
            #Disable IPv6
            Disable-NetAdapterBinding -InterfaceAlias (Get-NetIPConfiguration).InterfaceAlias -ComponentID ms_tcpip6
            #Set the new IP address with the IP, Subnet Mask, and Default Gateway
            New-NetIPAddress -IPAddress $NewIPAddress -InterfaceAlias (Get-NetIPConfiguration).InterfaceAlias -PrefixLength $PrefixLength -DefaultGateway $DefaultGateway
            #Remove the old IP configuration only if the new and old IPs don't match
            if((((Get-NetIPConfiguration).IPv4Address).IPAddress | where {$_ -match $OriginalIPAddress}) -and ($NewIPAddress -NotMatch $OriginalIPAddress)) {
                Remove-NetIPAddress -IPAddress (((Get-NetIPConfiguration).IPv4Address).IPAddress | where {$_ -match $OriginalIPAddress}) -InterfaceAlias (Get-NetIPConfiguration).InterfaceAlias -Confirm:$False
            }
        } -ArgumentList $NewIPAddress, $PrefixLength, $DefaultGateway, $DNSServers -AsJob
        WaitScript 2
    }
}

After all the aforementioned information is filled in, the next thing to configure is which servers to make Domain Controllers. There must be at least one domain controller; if multiple are selected, you can choose which will be the primary Domain Controller. The first server selected automatically becomes the primary, but this can be changed if desired.

Once everything is configured to your liking, validate the configuration by selecting the Validate button. This verifies correct syntax for the domain name, safe mode password, IP schemas, and other minor configurations.
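The validation itself is straightforward pattern matching. Below is a simplified sketch of the kind of checks involved; the function name and exact rules are illustrative, not AXL's actual code:

```powershell
Function Test-BuildoutInput {
    param ([string]$DomainName, [string]$IPAddress)
    # Domain name must be at least two dot-separated labels (e.g. lab.local)
    $DomainOk = $DomainName -match '^([a-zA-Z0-9-]+\.)+[a-zA-Z]{2,}$'
    # IP must cast to [ipaddress] AND be four dotted octets (the regex rejects
    # shorthand like "5", which the cast alone would accept as 0.0.0.5)
    $IPOk = [bool]($IPAddress -as [ipaddress]) -and $IPAddress -match '^(\d{1,3}\.){3}\d{1,3}$'
    return ($DomainOk -and $IPOk)
}

Test-BuildoutInput -DomainName "lab.local" -IPAddress "192.168.1.10"   # True
```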

Below is a snippet of code outlining the primary Domain Controller promotion process.

Function PromotePrimaryDomainController {
    foreach($DCServer in ($DomainControllersListBox.Items | where {$_ -match [regex]'\*'})) {
        #Define Domain specific parameters
        $DomainName = $DomainNameTextBox.Text
        $SafeModePassword = ConvertTo-SecureString $SafeModePasswordTextBox.Text -AsPlainText -Force
        $ConnectionPassword = ConvertTo-SecureString -AsPlainText -Force -String $LocalPasswordTextBox.Text
        $ConnectionCreds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "$($Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DCServer.Replace("*",''))])\$($LocalUsernameTextBox.Text)",$ConnectionPassword
        if($DFSCheckbox.CheckState -eq "Checked") {
            $VMStatusTextBox.AppendText("`r`nInstalling DFSR Components on $($DCServer.Replace("*","")) for DFS Buildout")
            $DFSComponents = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DCServer.Replace("*",""))] -Credential $ConnectionCreds -ScriptBlock {
                #Install DFSR components if DFS was selected during component selection; this is necessary for DFS buildout functionality
                Install-WindowsFeature FS-DFS-Replication -IncludeManagementTools
            } -AsJob
            WaitJob $DFSComponents
        }
        $DCPromotion = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DCServer.Replace("*",""))] -Credential $ConnectionCreds -ScriptBlock {
            param ($DomainName,$SafeModePassword)
            #Create the AD DS Forest with the parameters specified in the AD DS buildout form
            Install-ADDSForest -DomainName $DomainName -SafeModeAdministratorPassword $SafeModePassword -DomainNetBIOSName $DomainName.Remove($DomainName.IndexOf(".")).ToUpper() -SYSVOLPath "C:\Windows\SYSVOL" -LogPath "C:\Windows\NTDS" -DatabasePath "C:\Windows\NTDS" -InstallDNS -Force
        } -ArgumentList $DomainName,$SafeModePassword -AsJob
        WaitJob $DCPromotion
        #If the Domain Controller does not reboot automatically within 15 seconds, reboot the machine
        if(Test-Connection -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DCServer.Replace("*",""))] -Count 1 -ErrorAction SilentlyContinue) {
            Invoke-XenVM -Name $DCServer -XenAction CleanReboot
        }
    }
}

No matter what was chosen on the initial component selection screen, after selecting Next on the domain buildout form you will always be presented with the User, Group, and OU buildout form, in case you want to configure any users, groups, or OUs for your environment.

This form is 100% optional and does not require any fields to be filled out. If you do not want to configure any users, groups, or OUs, simply move on to the next form, if any.

However, if you do choose to fill it out, you will notice three different sections, each labeled with its intended purpose. Figure 3 depicts what a filled-out form might look like.

Figure 3 – User Group OU Buildout

Each OU added to the structure can be placed under any OU already created and can be as many levels deep as you wish, though I would not recommend more than 10 levels for any Active Directory structure. For Users and Groups, you can input the required information and select Add, which adds the entry to the respective list box.

You will notice there is no Validate button on this form; that is because validation is done before any item is added to a list box. This provides the flexibility to configure any combination of users, groups, and OUs, or none at all.
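Under the hood, a buildout like this maps onto the standard ActiveDirectory module cmdlets. The sketch below shows the general pattern; the OU, group, and user names are illustrative and the domain `lab.local` is an assumption, not AXL's actual code:

```powershell
Import-Module ActiveDirectory

# Create a nested OU structure (each -Path must already exist)
New-ADOrganizationalUnit -Name "Corp" -Path "DC=lab,DC=local"
New-ADOrganizationalUnit -Name "Workstations" -Path "OU=Corp,DC=lab,DC=local"

# Create a group and a user, then add the user to the group
New-ADGroup -Name "LabAdmins" -GroupScope Global -Path "OU=Corp,DC=lab,DC=local"
New-ADUser -Name "jdoe" -Path "OU=Corp,DC=lab,DC=local" -Enabled $false
Add-ADGroupMember -Identity "LabAdmins" -Members "jdoe"
```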

AD Certificate Services Buildout

Figure 4 – AD CS Buildout

The next form, if this role was chosen in Figure 1, is AD CS. With this form, seen in Figure 4, you can completely configure a normal AD CS buildout, as well as AD CS Web Enrollment and the OCSP Responder.

Each server added to the list box will need to be configured independently, which can be done by selecting each server from the list box and configuring the required fields.

Each field is entirely separate for each server, meaning you can apply a different configuration to each one depending on the CA type chosen. Each server in the list box can be either a root CA or a subordinate CA. If you choose to create a subordinate CA, you will have a more limited selection of fields than for a root CA configuration.

This is because the subordinate CA gets its configuration from the root CA. Below is a snippet of the code used to promote the specified CAs.

Function InstallAllServices {
    $NonSubordinates = @()
    $Subordinates = @()
    $AllCAServers = @()
    #Fill arrays with Specified Certificate Authorities
    foreach($CAServer in $CertificateAuthoritiesListBox.Items){
        if($Global:CATypes[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)] -notmatch "Subordinate") {
            $NonSubordinates += $CAServer
        }
        else {
            $Subordinates += $CAServer
        }
    }
    #Fill primary array starting with all non-subordinate CAs
    foreach($NonSubordinate in $NonSubordinates) {
        $AllCAServers += $NonSubordinate
    }
    #Next, fill primary array with all subordinate CAs
    foreach($Subordinate in $Subordinates) {
        $AllCAServers += $Subordinate
    }
    foreach($CAServer in $AllCAServers){
        #Define necessary connection parameters
        $DomainName = $DomainNameTextBox.Text
        $ConnectionPassword = ConvertTo-SecureString -AsPlainText -Force -String $LocalPasswordTextBox.Text
        $DomainAdminCreds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "$($DomainName.Remove($DomainName.IndexOf(".")).ToUpper())\Administrator",$ConnectionPassword
        #If the server is not a subordinate CA, define all parameters
        if($Global:CATypes[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)] -notmatch "Subordinate") {
            $RootCA = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($CAServer)] -Credential $DomainAdminCreds -ScriptBlock {
                param ($CAType, $CAName, $HashAlgorithm, $KeyLength, $CryptoProvider, $ValidityPeriod, $ValidityPeriodUnits, $DomainAdminCreds, $DomainName)
                Install-AdcsCertificationAuthority -CAType $CAType -CACommonName $CAName -HashAlgorithmName $HashAlgorithm -KeyLength $KeyLength -CryptoProviderName $CryptoProvider -ValidityPeriod $ValidityPeriod -ValidityPeriodUnits $ValidityPeriodUnits -Credential $DomainAdminCreds -Confirm:$False
            } -ArgumentList $Global:CATypes[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CANames[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CAHashAlgorithm[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CAKeyLength[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CACryptoProvider[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CAValidityPeriod[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CAValidityPeriodUnits[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $DomainAdminCreds, $DomainName -AsJob
            WaitJob $RootCA
        }
        #Else, only create a CA using the parent specified and a few other parameters
        else {
            $SubordinateCA = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($CAServer)] -Credential $DomainAdminCreds -ScriptBlock {
                param ($CAType, $CAName, $ParentCAName, $ParentCA, $DomainAdminCreds, $DomainName)
                Install-AdcsCertificationAuthority -CAType $CAType -ParentCA "$ParentCA.$DomainName\$ParentCAName" -CACommonName $CAName -Credential $DomainAdminCreds -Confirm:$False
            } -ArgumentList $Global:CATypes[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CANames[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CANames[$CertificateAuthoritiesListBox.Items.IndexOf($Global:ParentCA[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)])], $Global:ParentCA[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $DomainAdminCreds, $DomainName -AsJob
            WaitJob $SubordinateCA
        }
        #If the server was chosen as a web enrollment server, install the role
        if($Global:CAWebEnrollment[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)] -eq "Checked") {
            $EnrollmentPromotion = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($CAServer)] -Credential $DomainAdminCreds -ScriptBlock {
                Install-AdcsWebEnrollment -Confirm:$False
            } -AsJob
            WaitJob $EnrollmentPromotion
        }
        #If the server was chosen as an online responder, install the role
        if($Global:CAResponder[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)] -eq "Checked") {
            $VMStatusTextBox.AppendText("`r`nPromoting $CAServer to an Online Responder")
            $ResponderPromotion = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($CAServer)] -Credential $DomainAdminCreds -ScriptBlock {
                Install-AdcsOnlineResponder -Confirm:$False
            } -AsJob
            WaitJob $ResponderPromotion
        }
        WaitScript 15
    }
}

Distributed File System Build

The last form, if the component was chosen, is the DFS buildout form. This form allows full configuration of a complete DFS structure, including namespaces, replicated folders, and replication groups. DFS allows folders and their contents to be replicated across multiple servers; this configuration requires at least two servers for proper replication to take place.

Once the DFS servers are chosen, you need to determine what namespaces you want to create, whether you want to have just one namespace, or split it up for a more complex architecture.

Each DFS folder created in the lower section of the form will need to be in a DFS namespace, specified as the DFS root in the form. Each server gets a DFSRoots folder created in the root of the C:\ drive, which houses all of the namespaces created.

Furthermore, each folder is created in the DFS root specified; for instance, if you created a DFS root called Common and then created a folder named Backups with Common as its DFS root, the folder would be created as C:\DFSRoots\Common\Backups.

There is one optional parameter for a DFS folder: the target path. The target path specifies where the DFS folder will point; if no target is specified, the default location in DFSRoots is used. Using the earlier example, if you specified a target path of C:\SQL Backups, then instead of the DFS folder Backups pointing to C:\DFSRoots\Common\Backups, it is redirected to C:\SQL Backups when pathing out to the folder.

If you are unfamiliar with DFS, all of these folders live under a single UNC path of the form \\<DomainName>\<Namespace>. This structure allows for seamless, highly available, and redundant file and folder access, even if one or more servers are down, depending on the size of the infrastructure.
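Creating the namespace root itself uses the DFSN cmdlets. A short sketch for the "Common" example above; the file server name FS01 and domain lab.local are illustrative placeholders, not part of AXL:

```powershell
# Create and share the local folder that will back the namespace root
New-Item -ItemType Directory -Path "C:\DFSRoots\Common" -Force
New-SmbShare -Path "C:\DFSRoots\Common" -Name "Common"

# Create a domain-based (Windows Server 2008 mode) namespace root at \\lab.local\Common
New-DfsnRoot -TargetPath "\\FS01\Common" -Path "\\lab.local\Common" -Type DomainV2
```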

Below is a snippet of the code used to create the DFS folders. You may notice nested Invoke-Commands in the DFS buildout; this is because the DFSN and DFSR commands would not work when executed remotely directly on the selected servers.

Function CreateDFSFolders {
#Define necessary connection parameters 
$DomainName = $DomainNameTextBox.Text
$ConnectionPassword = convertto-securestring -AsPlainText -Force -String $LocalPasswordTextBox.Text
$DomainAdminCreds = new-object -typename System.Management.Automation.PSCredential -argumentlist "$($DomainName.Remove($DomainName.IndexOf(".")).ToUpper())\Administrator",$ConnectionPassword
#Define the primary domain controller to execute all the commands on
$PrimaryDC = ($DomainControllersListBox.Items | where { $_ -match [regex]"\*" }).ToString().Replace("*","")
    foreach($DFSFolder in $DFSFoldersListBox.Items){
        #If there was a DFS folder target specified, continue with creating that folder and the folder in C:\DFSRoots\<Namespace>
        if($Global:DFSFolderTarget[$Global:DFSFolders.IndexOf($DFSFolder)] -ne $Null) {
            foreach($DFSServer in $DFSServersListBox.Items) {
            $DFSPath = "\\$DomainName\$($Global:DFSFolderRoot[$Global:DFSFolders.IndexOf($DFSFolder)])\$DFSFolder"
                if($DFSServer -match [regex]'\*') {
                $DFSServer = $DFSServer.Replace("*","")
                $DFSFolderCreation = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DFSServer)] -credential $DomainAdminCreds -ScriptBlock {
                param ($DFSFolder,$DFSRoot)
                #Create new DFS folder and share it
                New-Item -ItemType Directory -Path "C:\DFSRoots\$DFSRoot\" -Name "$DFSFolder" -Force
                New-SmbShare -Path "C:\DFSRoots\$DFSRoot\$DFSFolder" -Name "$DFSRoot\$DFSFolder"
                Grant-SmbShareAccess -Name "$DFSRoot\$DFSFolder" -AccountName "Everyone" -AccessRight Full -Force 
                } -ArgumentList $DFSFolder,$Global:DFSFolderRoot[$Global:DFSFolders.IndexOf($DFSFolder)] -AsJob
                WaitJob $DFSFolderCreation
                WaitScript 5
                $FolderTarget = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DFSServer)] -credential $DomainAdminCreds -ScriptBlock {
                param ($DFSPath,$DFSServer,$DFSFolder,$DomainAdminCreds,$PrimaryDC,$OriginalServer)
                    Invoke-Command -ComputerName $PrimaryDC -credential $DomainAdminCreds -ScriptBlock {
                    param ($DFSPath,$DFSServer,$DFSFolder,$OriginalServer)
                        #If this is the primary DFS server, use the DfsnFolder command, otherwise use DfsnFolderTarget
                        if($OriginalServer -match [regex]"\*") {
                        New-DfsnFolder -Path "$DFSPath" -TargetPath "\\$DFSServer\$DFSFolder"
                        else {
                        New-DfsnFolderTarget -Path "$DFSPath" -TargetPath "\\$DFSServer\$DFSFolder"
                    } -ArgumentList $DFSPath,$DFSServer,$DFSFolder,$OriginalServer
                } -ArgumentList $DFSPath,$DFSServer,$DFSFolder,$DomainAdminCreds,$PrimaryDC,($DFSServersListBox.Items | where {$_ -match $DFSServer}) -AsJob
                WaitJob $FolderTarget
                }
            }
        }
        #Else, just make the new folder in C:\DFSRoots\<Namespace>
        else {
            foreach($DFSServer in $DFSServersListBox.Items) {
                if($DFSServer -match [regex]'\*') {
                $DFSServer = $DFSServer.Replace("*","")
                $StandaloneFolder = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DFSServer)] -credential $DomainAdminCreds -ScriptBlock {
                param ($DFSFolder,$DFSRoot)  
                #Create new DFS folder and share it
                New-Item -ItemType Directory -Path "C:\DFSRoots\$DFSRoot\" -Name $DFSFolder -Force
                New-SmbShare -Path "C:\DFSRoots\$DFSRoot\$DFSFolder" -Name "$DFSFolder"
                Grant-SmbShareAccess -Name "$DFSFolder" -AccountName "Everyone" -AccessRight Full -Force
                } -ArgumentList $DFSFolder,$Global:DFSFolderRoot[$Global:DFSFolders.IndexOf($DFSFolder)] -AsJob
                WaitJob $StandaloneFolder
                }
            }
        }

So, as a workaround, I was able to get it working by doing a nested Invoke-Command to essentially run a remote command inside of a remote command. This allowed me to execute the DFSN cmdlets on a domain controller from the selected DFS server; the command sequence is My PC -> DFS Server -> DC. This was very frustrating to figure out because the initial commands gave an arbitrary and ambiguous error, but after rigorous testing I was finally able to get it to work.
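Stripped of the AXL-specific lookups, the double-hop pattern described above can be sketched as follows. This is a simplified illustration, not the exact AXL code; $DFSServerIP and $TargetPath are placeholder names, while $DomainAdminCreds, $PrimaryDC, and $DFSPath mirror the variables used in the snippet above:

```powershell
# First hop: connect from my PC to the selected DFS server.
Invoke-Command -ComputerName $DFSServerIP -Credential $DomainAdminCreds -ScriptBlock {
    param ($DomainAdminCreds, $PrimaryDC, $DFSPath, $TargetPath)
    # Second hop: run the DFSN cmdlet on the domain controller. Passing the
    # credentials explicitly lets the inner session authenticate, working
    # around the classic PowerShell remoting double-hop restriction.
    Invoke-Command -ComputerName $PrimaryDC -Credential $DomainAdminCreds -ScriptBlock {
        param ($DFSPath, $TargetPath)
        New-DfsnFolder -Path $DFSPath -TargetPath $TargetPath
    } -ArgumentList $DFSPath, $TargetPath
} -ArgumentList $DomainAdminCreds, $PrimaryDC, $DFSPath, $TargetPath
```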

Figure 5 – DFS Buildout


This concludes the four-part series. We have covered everything from the items needed to begin creating automated labs, to creating a custom ISO for an unattended installation, to rapidly and seamlessly creating VMs from default, custom, or blank templates, and finally configuring AD DS, AD CS, and DFS, all with the AXL tool.

There is a lot of functionality built into AXL, and all of it is available on GitHub. You will want to make sure you download not only the PowerShell script but all of the supporting files as well. Refer to Part 1 of the series for a better understanding of how the file structure should be set up.

I have really enjoyed creating AXL and hope everyone who uses it finds it to be a time saving and useful tool.

Zach Thurmond
IT Consultant
Critical Design Associates

LinkedIn Profile

How to Create an MSI with SCCM Standalone Remote Control Viewer Components

SCCM Remote Control Viewer Standalone

In most large organizations, first-tier support, like the Help Desk, will require remote control access to users’ workstations. While Microsoft has provided an excellent tool for full remote viewing and control, the administrative components are only provided by Microsoft as part of the SCCM administrative console.

Providing Help Desk users with access to the SCCM administrative console is likely overkill given that the only component they may require is remote control.

Since Microsoft does not provide an easily deployable remote control client, this article demonstrates how an administrator can bundle the Configuration Manager remote control components together into a Windows Installer (MSI) application package.

To run the Configuration Manager Remote Control Viewer, three files must be installed on the workstation:

  • CmRcViewer.exe
  • RdpCoreSccm.dll
  • 00000409\CmRcViewerRes.dll

In addition, if the Remote Control Viewer is going to report back to the SCCM site server which client is being remote controlled and by whom, the registry value

Server=”<Site Server Name>”

must also be included.

While this sounds simple enough, the better approach is to create an application package that contains these components and deploy it to each workstation that needs it.

What are the challenges of doing this manually?

The biggest issues with simply copying the files manually to workstations are maintenance and compatibility.

First, these files will change multiple times a year, always with each major SCCM release and possibly with any SCCM hotfix release.

Second, if the components are copied manually, they may not be compatible if the full SCCM administrative console is deployed to that system in the future.

Building the SCCM Remote Control Viewer Standalone Package

What I’ve provided here is the complete solution to this problem. I wrote a PowerShell script that can be run to automatically create a new MSI with the standalone Remote Control Viewer components.

The PowerShell script requires one command line option pointing to the SCCM Primary Server and uses the exact same components that are used by the native installer. Since the same components are used, installing the SCCM administrative console in the future on the same workstation will not cause any issues.

A new version can be built at any time there is a requisite SCCM upgrade, and each new version will automatically upgrade the previous Remote Control Viewer version. If your organization uses digital signing, there is an additional command line option to digitally sign the code with your organization’s code signing certificate.

To get started, download the file: SCCM_RCV

Extract the contents to your working directory. All required support files are included in the zip file.

What is in the script? How does it work?

The PowerShell script and the supporting binaries were co-developed with Microsoft Premier Field Support. There are some basic functions for updating the MSI PackageCode and multiple MSI properties. The first script parameter defines the site server, and the second defines your digital certificate.

The update-PackageCode function will update the PackageCode which is unique to every build.

The update-MSIProperty function will update several MSI Property table entries, when called.
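The actual function ships in the download, but as a rough illustration, updating an MSI Property table entry is commonly done through the WindowsInstaller COM automation interface. The sketch below is a generic example of that technique, not the code from the package; the function and parameter names are illustrative:

```powershell
# Generic sketch: update a row in the MSI Property table via the
# WindowsInstaller COM object. $MsiPath, $Name, and $Value are illustrative.
function Update-MsiProperty {
    param ([string]$MsiPath, [string]$Name, [string]$Value)
    $installer = New-Object -ComObject WindowsInstaller.Installer
    # 1 = msiOpenDatabaseModeTransact (open the database for read/write)
    $db = $installer.GetType().InvokeMember("OpenDatabase", "InvokeMethod", $null, $installer, @($MsiPath, 1))
    $sql = "UPDATE Property SET Value = '$Value' WHERE Property = '$Name'"
    $view = $db.GetType().InvokeMember("OpenView", "InvokeMethod", $null, $db, @($sql))
    $view.GetType().InvokeMember("Execute", "InvokeMethod", $null, $view, $null)
    $view.GetType().InvokeMember("Close", "InvokeMethod", $null, $view, $null)
    # Commit writes the transacted changes back to the MSI on disk
    $db.GetType().InvokeMember("Commit", "InvokeMethod", $null, $db, $null)
}
```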

This part of the script will create the build folder and copy the necessary files to that folder. This includes all the required files for the Remote Control Viewer.

This part of the script will build the new RemoteControl.msi. There is a “rebuild check” in the code that ensures the existing version is not accidentally rebuilt.

The script uses msidb to execute table import functions. It also uses two vbScripts, one to update all the file properties and the MsiFileHash Table, and the other to compile the cab file and import the cab as an embedded stream.

This part of the script will copy the completed MSI to the build folder and begin file and folder cleanup.

This last part of the script is optional. It will digitally sign the MSI.

Running the script
To run the script, execute it from an elevated PowerShell command prompt.

NOTE: Like any other script, this one can be signed with your own digital signature if necessary

Create an MSI containing the ConfigMgr Remote Control Viewer

.\msi.ps1 -siteserver “<Site Server Name>”

Create an MSI containing the ConfigMgr Remote Control Viewer that is digitally signed

.\msi.ps1 -siteserver “<Site Server Name>” -codesigningcert “C:\certs\mycert.p12”

This completes the process. We hope this information helps you construct a Windows Installer (MSI) application package that includes the Configuration Manager remote control components.

Is your organization looking to upgrade ITSM efforts? Contact us for a 30-minute consultation to answer your questions.

Mike Doneson
Senior Consultant
Critical Design Associates


Automating Lab Builds with XenServer PowerShell – Part 3 Unlimited VM Creation

Part 1 – Understanding the Requirements
Part 2 – Creating a Custom ISO
>>Part 3 – Unlimited VM Creation
Part 4 – Roles, Features, and Other Components


Once you have all the necessary files and have created a custom ISO to complete an unattended installation of Windows, the next step is to automate the creation of VMs. Doing this manually via XenCenter can take some time, especially if you are creating a large number of VMs. With the VM Build portion of AXL, you can build all 10 VMs, all with the same configuration, from a single screen. You have options to make VMs from sysprepped templates, blank templates, or default templates.

NOTE: If you plan on creating VMs in a pool with more than one XenServer host, there are a few considerations before moving forward, which include the following:

  • Rename each host’s local storage repository to something more identifiable than the default name ‘Local Storage’
  • When connecting to the XenServer host to create the VMs, you will need to connect to the pool master

Creating VMs From Sysprepped Templates

Upon first launch of the VM creation form, you’ll notice everything is disabled except for a few areas. This is because a connection to a XenServer host is required for any of the information fields to be populated.

If the XenServer PowerShell module is not found in any subdirectories, you will get a prompt to input the location of the XenServer PowerShell module, as seen in Figure 1. The PowerShell module can also be downloaded from GitHub here.

Figure 1 – Module Location

The process for creating a sysprepped machine using AXL is quite refined compared to the other processes, especially using default templates. Starting at the top and working down, the Select Storage Location drop-down is where you select the storage repository on which the VM(s) will be created, which can be either local or network storage.

This information is pulled directly from the host/pool and does not include ISO repositories. In the Input VM Names text box, you can input each VM you want to create and click Add. Multiple machines can be added by separating each name either by a comma or semi-colon, but choose only one or the other type of delimiter, not both.
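Splitting that input on either delimiter amounts to a one-liner; the sketch below is illustrative (the variable names are not the actual AXL form fields):

```powershell
# Split a comma- or semicolon-separated list of VM names into individual
# entries, trimming whitespace and dropping empty items.
$InputText = "LABDC01, LABAPP01; LABSQL01"
$VMNames = $InputText -split '[,;]' | ForEach-Object { $_.Trim() } | Where-Object { $_ }
$VMNames  # -> LABDC01, LABAPP01, LABSQL01
```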

Names can also be removed by selecting them and clicking Remove.

On to the most important part: the Select a Template option. By default, only non-default templates will be populated in the drop-down. After selecting which template to use, that’s all that is needed to create VMs from sysprepped templates. After all the information is input, click the Validate button to ensure that the configuration is correct and, if it is, click Create to start building the VMs.

There is one caveat to using a sysprepped template though, and it kind of defeats the whole purpose of AXL. That is, you still have to go through the initial configuration of every VM created, whereas with a custom ISO, you don’t have to.

Creating VMs from a Blank Template

You may be asking what I mean by a blank template, and that simply means a template that has no operating system installed. Only the virtual hardware is configured such as the disk, CPU, RAM, and network. The benefit of using a blank template is that it will save you a little bit of time when running through the configuration.

Figure 2 – Sysprepped Buildout Example

Making VMs from a blank template is very similar to making them from sysprepped templates. There is one exception, though: you have to select the ISO repository where the ISO is located, and which ISO you want installed. You may notice the ISO repository drop-down is disabled in Figure 2; that is because I only had one ISO repository created for my pool.

However, if you had multiples created, the drop down would be enabled, and you could then choose the appropriate repository. To select an ISO, you first need to specify that you would actually like to insert one by selecting the Insert ISO checkbox. You can then select which ISO you want to be inserted into the created VMs and click validate to check your configuration. So long as everything checks out, you can create the configured VMs.

Figure 3 – Blank Buildout Example

NOTE: You can create multiple types of VMs, for instance Windows 10 and Server 2016, but you will need to create them separately. After you click Create, you will receive a prompt asking if you would like to create more VMs, which is when you would click Yes if you wanted to make VMs with different operating systems. Doing this will clear the form completely.

Creating VMs From Default Templates

Non-default templates, blank and sysprepped, are great, but you also have the option to create VMs from default templates. There is a little more configuration involved with default templates because you have to specify how much RAM, CPU, and disk you want to provide, as well as what network the VM should be on.

After you have filled out all required information from the blank template example, the only thing needed is for the Default Templates checkbox to be checked. This will populate the drop down with all default templates and will provide some additional drop downs to configure the resources you will allocate to the VMs.

Each drop-down will have information pulled from the host to accurately configure the VMs, so you cannot overcommit resources that are not there. RAM and Disk Size will be throttled back depending on how many VMs you have listed in the list box above.

Say, for instance, you listed 5 VMs in the list box and you have 32 GB of RAM on your host; taking 2 GB out for the host, you would have a maximum of 6 GB for each VM.
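That throttling works out to a simple calculation. A sketch, using the numbers from the example above (the variable names are illustrative, not the actual AXL form code):

```powershell
# Per-VM RAM cap: reserve some RAM for the host itself, then divide what is
# left evenly among the VMs listed in the list box.
$HostRAMGB     = 32   # total RAM on the host
$HostReserveGB = 2    # kept back for the host
$VMCount       = 5    # VMs listed to be created
$MaxRAMPerVMGB = [math]::Floor(($HostRAMGB - $HostReserveGB) / $VMCount)
$MaxRAMPerVMGB  # -> 6
```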

Figure 4 – Default Buildout Example

How it All Works

Knowing how to use the form is important, but so is knowing what is actually happening in the background. The code snippet below is the function that actually creates the VMs. Depending on what was chosen in the form, VMs will be created from either a blank, sysprepped, or default template.

Because of limitations with cloning VMs via the XenServer PowerShell module, the function will determine if the template chosen is on the same SR where the VMs were specified to be created; this would only affect you if you are using pools.

In the event the VMs being created and the template are on different SRs, a temporary template will be copied to the SR where the VMs are to be created, and the VMs are cloned from it from then on, if more than one VM is being created. The temporary template, if created, will be removed after all VMs have been created.

Function BuildVMs {
# Specify DropDown variables 
$VMNames = $NewVMHostnameListBox.Items
$SourceTemplateName = $DropDownTemplates.SelectedItem
$StorageRepositoryName = $DropDownStorage.SelectedItem
$SelectedNetwork = $DropDownNetwork.SelectedItem
    foreach($VMName in $VMNames){
    # Specify general properties 
    $GetSRProperties = Get-XenSR -Name $StorageRepositoryName
    $GetNetworkProperties = Get-XenNetwork $SelectedNetwork
    $TemplateSRLocation = (Get-XenVM -Name $SourceTemplateName | Select -ExpandProperty VBDs | Get-XenVBD | Select -ExpandProperty VDI | Get-XenVDI | Select -ExpandProperty SR | Get-XenSR).name_label
    $ObjSourceTemplate = Get-XenVM -Name $SourceTemplateName
        if($DefaultTemplateCheckbox.CheckState -eq "Checked") { 
        # Specify required VM properties
        $VMRAM = ($DropDownRAMAmmount.SelectedItem*1GB)
        $DiskSize = ($DropDownDiskSize.SelectedItem*1GB)
        $VMCPU = $DropDownCPUCount.SelectedItem
        # Create new VM from all specified properties
        New-XenVM -NameLabel $VMName -MemoryTarget $VMRAM -MemoryStaticMax $VMRAM -MemoryDynamicMax $VMRAM -MemoryDynamicMin $VMRAM -MemoryStaticMin $VMRAM -VCPUsMax $VMCPU -VCPUsAtStartup $VMCPU -HVMBootPolicy "BIOS order" -HVMBootParams @{ order = "dc" } -HVMShadowMultiplier 1 -UserVersion 1 -ActionsAfterReboot restart -ActionsAfterCrash restart -ReferenceLabel $ObjSourceTemplate.reference_label -HardwarePlatformVersion 2 -Platform @{ "cores-per-socket" = "$VMCPU"; hpet = "true"; pae = "true"; vga = "std"; nx = "true"; viridian_time_ref_count = "true"; apic = "true"; viridian_reference_tsc = "true"; viridian = "true"; acpi = "1" } -OtherConfig @{ base_template_name = $ObjSourceTemplate.reference_label }
        $GetVMProperties = Get-XenVM -Name $VMname
        WaitScript 1
        # Create a new Virtual Disk with the same name as the new VM
        New-XenVDI -NameLabel $VMName -VirtualSize $DiskSize -SR $GetSRProperties -Type user
        WaitScript 4
        # Specify VDI and Network locations
        $NewVDI = Get-XenVDI -Name $VMName
        $VIFDevice = (Get-XenVMProperty -VM $GetVMProperties -XenProperty AllowedVIFDevices)[0]
            if($GetVMProperties -and $NewVDI){
                # Create CD drive for the new VM
                New-XenVBD -VM $GetVMProperties -VDI $null -Type CD -mode RO -Userdevice 3 -Bootable $False -Unpluggable $True -Empty $True
                # Attach previously created hard drive into the new VM
                New-XenVBD -VM $GetVMProperties -VDI $NewVDI -Type Disk -mode RW -Userdevice 0 -Bootable $True -Unpluggable $True
                # Create network interface for the new VM
                New-XenVIF -VM $GetVMProperties -Network $GetNetworkProperties -Device $VIFDevice 
                # Mount previously created hard disk
                Get-XenVM -Name $VMName | Select -ExpandProperty VBDs | Get-XenVBD | where {$_.type -eq "Disk"} | Select -ExpandProperty VDI | Set-XenVDI -NameLabel $VMName
            }
        }
        if($DefaultTemplateCheckbox.CheckState -eq "Unchecked") {
            if($TemplateSRLocation -match $GetSRProperties.name_label) {
            # Create a clone of the template
            Invoke-XenVM -NewName $VMName -VM $ObjSourceTemplate -XenAction Clone
            # Provision the copy into a VM
            Invoke-XenVM -XenAction Provision -Name $VMName
            WaitScript 1
            # Rename the attached disk to the name of the VM
            Get-XenVM -Name $VMName | Select -ExpandProperty VBDs | Get-XenVBD | where {$_.type -eq "Disk"} | Select -ExpandProperty VDI | Set-XenVDI -NameLabel $VMName
            }
            else {
            # Copy the chosen template to the SR where the VMs are being created
            Invoke-XenVM -NewName "$SourceTemplateName - TEMP" -VM $ObjSourceTemplate  -SR $GetSRProperties -XenAction Copy
            # Specify old and new template names
            $SourceTemplateName = "$SourceTemplateName - TEMP"
            $ObjSourceTemplate = Get-XenVM -Name $SourceTemplateName
            # Clone the template that was just copied to create the first VM
            Invoke-XenVM -NewName $VMName -VM $ObjSourceTemplate -XenAction Clone
            # Provision the copy into a VM
            Invoke-XenVM -XenAction Provision -Name $VMName
            WaitScript 1
            # Rename the attached disk to the name of the VM
            Get-XenVM -Name $VMName | Select -ExpandProperty VBDs | Get-XenVBD | where {$_.type -eq "Disk"} | Select -ExpandProperty VDI | Set-XenVDI -NameLabel $VMName
            # Rename the temporary templates attached disk name
            $ObjSourceTemplate | Select -ExpandProperty VBDs | Get-XenVBD | where {$_.type -eq "Disk"} | Select -ExpandProperty VDI | Set-XenVDI -NameLabel $SourceTemplateName
            }
        }
        if($BlankTemplateCheckbox.CheckState -eq 'Checked' -and $DropDownISOs.SelectedItem) {
        $SelectedBootISO = $DropDownISOs.Text
        # Get the VM, select the CD drive for the VM and attach the ISO
        Get-XenVM -Name $VMName | Select -ExpandProperty VBDs | Get-XenVBD | where {$_.type -eq "CD"} | Invoke-XenVBD -XenAction Insert -VDI (Get-XenVDI -Name $SelectedBootISO).opaque_ref
        }
    # Start the created VM to begin installing the attached ISO
    $VM = Get-XenVM -Name $VMName
    Invoke-XenVM -VM $VM -XenAction Start -Async
    $Global:AllCreatedServers += $VMName
    }
    #If a temporary template was created, remove it and the associated disk
    if($SourceTemplateName -match "- TEMP") {
    WaitScript 1
    $ObjSourceTemplate | Select -ExpandProperty VBDs | Get-XenVBD | where {$_.type -eq "Disk"} | Select -ExpandProperty VDI | Remove-XenVDI
    WaitScript 1
    Remove-XenVM -Name $SourceTemplateName
    }
}



Figure 5 – VM Creation Process

Figure 5 shows a visual representation of the VM creation process after filling out the form and clicking create.


There is really no limit to creating VMs other than available hardware. With the VM build form, you can rapidly create an unlimited number of VMs in just a few minutes. This pairs especially well with, and was designed for, the custom ISO created in Part 2.

Now that you know all the ways VMs can be created with templates, it’s just a matter of doing it. The blank templates provide the most flexibility, as you can do less configuration than default templates while receiving the same results. The blank and default templates used in tandem with the custom ISO provide a seamless way to create VMs.

In Part 4 of this blog post I will be outlining “Roles, Features, and Other Components”.

And don’t forget to check out Part 2, where the ISO creation process is discussed in further detail.

Zach Thurmond
IT Consultant
Critical Design Associates

LinkedIn Profile

Creating Configured Deployment Packages with Ivanti Package Studio

Introduction to Ivanti Package Studio

Ivanti Package Studio is a customized version of the Liquit Setup Commander. This product is specifically designed to take the guesswork out of creating configured deployment packages by leveraging a collection of downloadable source software which has already been verified and reviewed by the software vendor.

This source software collection is referred to as the ‘Setup Store’. Package Studio can then download or create Import Wizards for all of these applications which enables the technician to quickly configure and create packages for most commercial off-the-shelf (COTS) applications.

The ‘Setup Store’ is a repository of Windows applications, similar to any other App Store. From the ‘Setup Store’ link within the application, queries can be sorted, searched and filtered based on Manufacturer Name, Product Name, Version Number, Setup Type, Category, Platform, Filename, Language or Date.

Readily available Windows applications and patches can then be downloaded to a pre-configured directory on a local drive or on a file share. After an installation has been downloaded, a package can easily be created using Package Studio to be configured for enterprise deployment.

Ivanti states that the ‘Setup Store’ has grown to more than 2500 entries. Every day the repository is updated with the latest versions and releases of a listed application.

Ivanti Package Studio also supports every vendor MSI. If Package Studio does not have the installation in its repository, the tool will quickly auto-generate a new Configuration Wizard.

Configuration Wizards provide options to remove all Desktop and/or Start Menu shortcuts, suppress reboots, disable auto-update mechanisms, include licensing information, include database settings, and configure many other deployment options. These options are stored in a transform file (MST) for the selected MSI.

Configuration Wizard files are automatically downloaded for each application. When selected, the configuration options will be unique for each application.

In this example, the ‘Google Chrome Configuration Wizard’ can configure a myriad of options for the Google Chrome deployment as follows:

After launching Ivanti Package Studio, in the lower pane, navigate to the vendor install that will be configured.

Right click on the Windows Installer for Chrome Enterprise and select “Generate Transform”

The Google Chrome Configuration Wizard will then launch.
On the Options tab, select any options that are required for the configuration.

On the “Homepage preferences” tab, enter the default homepage.

On the “Distribution preferences” tab, select any distribution options that are required for the configuration.

On the “Features” tab, make any required feature changes.
When all options have been selected, click “OK”

A “Save Transform File” dialog box will open, prompting the user to save the MST file to disk. This file, in combination with the corresponding MSI and optional (CAB), will constitute the completed package.
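At deployment time, the transform is applied to the MSI on the msiexec command line. The file names below are illustrative placeholders for the downloaded MSI and the saved MST:

```powershell
# Illustrative: install the MSI silently with the generated transform applied.
msiexec /i "GoogleChromeStandaloneEnterprise64.msi" TRANSFORMS="GoogleChrome.mst" /qn
```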

Ivanti Package Studio can be configured to directly connect to Ivanti Endpoint Manager (formerly LANDesk), Microsoft System Center Configuration Manager, Microsoft Deployment Toolkit, and other software distribution tools for automatic creation of a deployable package.

In summary, Ivanti Package Studio can be a significant time saver in deploying many commercially available applications. Thanks to the ‘Setup Store’, it may be the most complete source for creating Windows Installer Transform files.

Mike Doneson
Senior Consultant
Critical Design Associates

Deploying Office 365 using SCCM

The deployment of Office 365 applications (Word, Excel, PowerPoint, Outlook, etc.) just became much easier. Beginning with SCCM version 1702, the Office 365 client installation can be automated from the Office 365 Client Management dashboard.

From this console the following can be performed:
• Office 365 installation settings can be configured
• Files from Office Content Delivery Networks (CDNs) can be downloaded
• Office 365 can be deployed as an application

What is the Office 365 Client Installer?

The Office 365 client installer is the SCCM installation wizard for deploying the Office 365 client applications. This wizard automates the deployment of the Office 365 applications to client devices such as Windows 10, Windows 8.1, and Windows 7.

The Office 365 Client Installation Wizard

The Office 365 client installation wizard is started from within the SCCM console. Navigate to

\Software Library\Overview\Office 365 Client Management

and click on the title. The Office 365 dashboard will launch. Click on the “+ Office 365 Installer” to launch the Office 365 installation Wizard.

Application Settings: This is the initial window. The wizard will prompt for the Name of the deployment, a Description, and the Content Location.

The Office 365 client installation files will be downloaded to the location specified in the wizard if they do not already exist.

NOTE: In order to proceed, either SCCM must be connected to the Internet or the Office 365 installation must have already been downloaded offline and placed in the selected directory.

Import Client Settings: This window offers a choice to Manually specify the Office 365 client settings or Import Office 365 client settings from a configuration file.
Choosing the Import option will automatically configure all the settings for the Office applications.

A Sample configuration.xml file can be found here: Download

If you choose to manually specify the Office 365 client settings, continue to the Client Products window.

Client Products: In this window, the initial option is to select the Office Suite.

Primarily, there are two office suites available as part of the installation wizard.
• Office 365 ProPlus
• Office 365 Business

NOTE: Microsoft may offer pre-release versions such as Office Professional Plus 2019 in the dropdown. This may also become the standard method of deploying Office in future versions.

Below the Suite dropdown list, a frame is shown where you can select the Office 365 applications installed for this deployment. In the example above, “OneDrive (Groove)” is not selected to be installed since it is obsolete. All other standard applications are selected.

Additional Office Products: There are additional dropdowns for Visio and Project.

For this deployment, Visio Pro for Office 365 and None have been selected. The default options are:
Visio Pro for Office 365
Project Online Desktop Client

NOTE: For those two products, they are licensed based on the associated Office 365 licensing.

Specify Settings for Office 365 Clients

Client Settings: In this window, there are options to specify settings for the Office 365 Clients.

At the top, there is a radio button to select the Architecture which can be either 32-bit or 64-bit.

In the Channel selection dropdown, there are four update channels listed. Recently, these choices have changed.

Currently the choices are:
Monthly Channel (formerly Current Channel)
Monthly Channel (Targeted)
Semi-Annual Channel (formerly Deferred Channel)
Semi-Annual Targeted (formerly First Release for Deferred Channel)

Below this is the Version dropdown. This will populate with numerous choices for each channel. Currently, the latest build in the Semi-Annual channel is 1803 Build 9126.2282.

There is an “Add/Remove…” button that is used to select additional languages. The default is English (United States).

At the bottom are options to configure Properties.

The properties are:
Accept EULA
Pin Icons to the taskbar (Win 7/8.x only)
Shared computer activation

NOTE: Microsoft still recommends the 32-Bit version of Office. More information on why can be found here.

Deploying the Office 365 Client

Deployment: The next window is for deployment. It has a single question, “Do you want to deploy the application now?”

If you choose “Yes”, the standard SCCM Deployment scheduling options are built into the wizard. There are windows for General (select the collection), Content (Distribution Points), Deployment Settings (Install, Required, etc.), Scheduling, User Experience, and Alerts.

If you choose “No”, the next window presented will be the Summary.

Clicking next will bring up Progress and ultimately Completion. At this point a new Office 365 application is available and ready to be deployed, or will be deployed on the schedule created in Scheduling.

Office 365 Client Management

After the wizard completes, SCCM will return back to the Office 365 Client Management window. From here, there is a graphical display showing all of the installed versions across the environment.

There are now new options on the right side of the window which include: Create an ADR and Create Client Settings.

This area of SCCM functionality continues to be upgraded and improved with each new release.

In Conclusion

This walkthrough is only the beginning of Office 365 management utilizing SCCM.

Mike Doneson
Senior Consultant
Critical Design Associates

Securing an Existing ADFS Environment with Okta MFA

Since the introduction of Active Directory Federation Services (ADFS) in 2015, companies have been widely adopting the idea of using this technology to leverage claims-based authentication…

Checking System Readiness for the Bromium Platform

The Bromium Platform has several hardware and software requirements to fully function on an endpoint. Since the Bromium client itself does not check many of these requirements until after installation, it’s difficult to know ahead of time which machines require remediation prior to deployment.

To address this issue, I wrote PowerShell scripts to take an inventory of machines in your environment and compile a report using minimal infrastructure.


The solution is designed to be deployed without depending on an endpoint management or software delivery platform. It does however require a scheduled task to run the Endpoint_CDABromiumReadiness.PS1 on each endpoint and a centralized file share where the script can save the collected inventory data. To summarize, the following components are necessary for this solution to work:

  • File Shares – Location for collected data
  • Scheduled Task – Executes the BromiumReadiness script
  • BromiumReadiness PowerShell script – Collects inventory data from endpoint
  • Compiler script – Aggregates collected data into a readable report

File Shares

The Endpoint_CDABromiumReadiness.PS1 script collects inventory data from the endpoint. Although the data could be stored on the machine itself, that would require a significant amount of overhead to log into each machine and gather it. To facilitate a simpler method of data collection, the script is designed to write the inventory data to a centralized file share. This file share can be one that already exists in your environment or can be created for the purpose of this solution.
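For example, the end of such a collection script might write its results to the share as a per-machine file. This is a hedged sketch; the share path, file format, and property list are illustrative and not necessarily what Endpoint_CDABromiumReadiness.PS1 actually does:

```powershell
# Sketch: gather a few readiness-related facts and drop them on a central
# share as a per-machine CSV. $SharePath and the properties are illustrative.
$SharePath = "\\FileServer\TestShare"
$Inventory = [PSCustomObject]@{
    ComputerName = $env:COMPUTERNAME
    OSVersion    = (Get-CimInstance Win32_OperatingSystem).Version
    MemoryMB     = [math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1MB)
    CollectedOn  = Get-Date
}
$Inventory | Export-Csv -Path (Join-Path $SharePath "$($env:COMPUTERNAME).csv") -NoTypeInformation
```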

The example that I used to create the file share where the script will store inventory data has these properties:

  • Name of folder: TestShare
  • Name of share: TestShare
  • Share permissions: Allow: Change, Read
  • Folder permissions: Allow: Create files / write data, Create folders / append data

Figure 1 – Share Permissions: Allow: Change, Read

NOTE: The name TestShare is used as an example. A more descriptive name would be preferable.

Figure 2 – Folder Permissions: Allow: Create files / write data, Create folders / append data
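If you prefer to script the share setup, the permissions above can be approximated in PowerShell. This is a sketch only, assuming Windows Server’s SmbShare module; 'Everyone' is used for brevity, but scoping the ACL to Domain Computers is tighter since the endpoints write as SYSTEM:

```powershell
# Sketch: create the collection share with the permissions described above.
# Folder/share names and the 'Everyone' principal are examples only.
New-Item -Path 'C:\TestShare' -ItemType Directory -Force | Out-Null

# Share permissions: Allow Change (which includes Read)
New-SmbShare -Name 'TestShare' -Path 'C:\TestShare' -ChangeAccess 'Everyone'

# NTFS folder permissions: Create files / write data, Create folders / append data
$acl  = Get-Acl 'C:\TestShare'
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
    'Everyone',
    'CreateFiles, AppendData',
    'ContainerInherit, ObjectInherit',
    'None',
    'Allow')
$acl.AddAccessRule($rule)
Set-Acl 'C:\TestShare' $acl
```

Restricting NTFS rights to create/append (rather than full Modify) keeps endpoints from reading or deleting each other’s inventory files.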

The other file share will be a network location where the Endpoint_CDABromiumReadiness.PS1 PowerShell script can be stored for execution during the Scheduled Task. This file share can be a Read Only location as the script is only read from this location.

The example that I use for a file share location where I store this script is:


Scheduled Task

Since there is no requirement to use a software delivery platform to deploy Endpoint_CDABromiumReadiness.PS1, the simplest method for deployment and execution of the script is a Scheduled Task. Creating the scheduled task on each workstation by hand would be time-consuming and inefficient, so the better approach is to create the Scheduled Task through an Active Directory Computer Configuration GPO preference. An existing or new GPO can be used; it must be linked to the OU or OUs that contain the workstations in the environment.

To create a Scheduled Task as a GPO preference, open the GPO using the Group Policy Management Console (GPMC) and navigate to:

Computer Configuration > Preferences > Control Panel Settings > Scheduled Tasks

Figure 3 – GPO Preference – Scheduled Tasks

Right-Click “Scheduled Tasks” and choose New > Scheduled Task (Windows Vista and later)

A New Task (Windows Vista and later) Properties window should appear as follows:

Figure 4 – New Task (Windows Vista and later) Properties

Change the Action dropdown from Update to “Create”

Under the General tab, the following parameters should be entered:

  • Name: Bromium Readiness
  • User Account: NT AUTHORITY\System
  • Security Options: Run whether user is logged on or not
  • Security Options: Run with highest privileges
  • Hidden: Enabled

Figure 5 – General tab

Under the Actions tab, click “New” then in the New Action window, enter the following:

  • Action: Start a program
  • Program/script: powershell.exe
  • Add arguments (optional):

-ExecutionPolicy Bypass -Command "& '\\<>\Endpoint_CDABromiumReadiness.ps1' -CopyToLocation '\\dc01\testshare\'"

Figure 6 – New Action window

NOTE: The file server and share names are used as examples. Your UNC path would include the location of Endpoint_CDABromiumReadiness.PS1 in a central file share and the data collection file share created above. These UNC paths are not necessarily the same.

Under the Triggers tab, click “New” then in the New Trigger window define the parameters for when to execute the scheduled task:

Figure 7 – New Trigger window

The script should run at least once, but it is not advisable to run it continuously; the inventory data is only needed to assess each machine’s readiness for the Bromium Client deployment. It is not designed to be a recurring maintenance task.
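If you want to validate the task definition on a single machine before rolling it out via GPO, an equivalent one-time task can be registered locally with the built-in ScheduledTasks module. This is a sketch; the UNC paths are placeholders for your own script and collection shares:

```powershell
# Sketch: register the same task locally on one machine to validate it
# before deploying via GPO. Server and share names are placeholders.
$argument = '-ExecutionPolicy Bypass -Command "& ''\\server\scripts\Endpoint_CDABromiumReadiness.ps1'' -CopyToLocation ''\\server\testshare\''"'

$action    = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument $argument

# Run once, five minutes from now (mirrors the run-at-least-once advice above)
$trigger   = New-ScheduledTaskTrigger -Once -At (Get-Date).AddMinutes(5)

# SYSTEM, highest privileges, runs whether a user is logged on or not
$principal = New-ScheduledTaskPrincipal -UserId 'SYSTEM' `
    -LogonType ServiceAccount -RunLevel Highest

Register-ScheduledTask -TaskName 'Bromium Readiness' `
    -Action $action -Trigger $trigger -Principal $principal
```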

When the Scheduled Task executes, the Endpoint_CDABromiumReadiness.PS1 PowerShell script will gather the required information from the endpoint, generate a tsv file, and copy the file to the file share specified by the -CopyToLocation parameter.

BromiumReadiness Script

This PowerShell script collects the inventory data from the endpoints and is contained here:

Figure 8 – BromiumReadiness script
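The shipped script is the authoritative version; the general collection pattern it follows looks something like the sketch below. The properties gathered here are illustrative only, not the script’s actual output schema:

```powershell
# Sketch of the endpoint collection pattern; illustrative only.
param(
    [Parameter(Mandatory)][string]$CopyToLocation,
    [int]$ReadinessCheckRound = 1
)

# Gather a few readiness-relevant facts about this machine
$os  = Get-CimInstance Win32_OperatingSystem
$cs  = Get-CimInstance Win32_ComputerSystem
$cpu = Get-CimInstance Win32_Processor | Select-Object -First 1

$inventory = [pscustomobject]@{
    ComputerName = $env:COMPUTERNAME
    OSCaption    = $os.Caption
    OSVersion    = $os.Version
    MemoryGB     = [math]::Round($cs.TotalPhysicalMemory / 1GB, 1)
    VTxEnabled   = $cpu.VirtualizationFirmwareEnabled
}

# The round number prefixes the file name, e.g. 1_PC001.tsv
$file = Join-Path $env:TEMP "${ReadinessCheckRound}_${env:COMPUTERNAME}.tsv"
$inventory | Export-Csv -Path $file -Delimiter "`t" -NoTypeInformation
Copy-Item -Path $file -Destination $CopyToLocation -Force
```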

Compiler Script

This PowerShell script aggregates the inventory data located in the file share from all of the individual tsv files generated from each endpoint into a single file that can be reviewed in Excel.

The Compiler_CDABromiumReadiness.PS1 is contained within the zip file.

Figure 9 – Compiler script

It is preferable to keep the Compiler script in the same file share as the tsv files that are generated so that it can be run as needed.

Figure 10 – Compiler script stored in file share
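Conceptually, the compilation step amounts to importing every per-endpoint tsv file and re-exporting one combined file. A rough sketch of that aggregation (not the shipped script itself):

```powershell
# Sketch: merge all per-endpoint tsv files for a given round into one
# tab-separated report that opens cleanly in Excel.
param(
    [string]$Path = '.',
    [int]$ReadinessCheckRound = 1
)

# Only pick up files from the requested round, e.g. 1_PC001.tsv
$pattern = "${ReadinessCheckRound}_*.tsv"

Get-ChildItem -Path $Path -Filter $pattern |
    ForEach-Object { Import-Csv -Path $_.FullName -Delimiter "`t" } |
    Export-Csv -Path (Join-Path $Path 'CompiledReport.tsv') `
        -Delimiter "`t" -NoTypeInformation
```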

To execute the compiler script, open Windows PowerShell in the folder containing the tsv files and run:

.\Compiler_CDABromiumReadiness.ps1

A finished report will look like this:

Figure 11 – Finished report

Running Multiple Rounds of Readiness Checks (Optional)

If the Scheduled Task runs multiple times, it will overwrite the inventory data previously collected for each endpoint. To preserve earlier collections, run multiple rounds of readiness checks. This is also useful when you need to run the inventory more than once and expect different results.

To do this, simply add the -ReadinessCheckRound parameter to the end of the BromiumReadiness script’s command line with a number indicating the round. The parameter defaults to 1 and tags the tsv files. Notice in the image that the number 1 precedes the device name:

Figure 12 – Round number in tsv file name

And here is an example of the command line to use. Change the command line in the scheduled task created in the GPO so that the -ReadinessCheckRound parameter is passed inside the -Command string; placed outside the closing quote, it would be interpreted as an argument to powershell.exe rather than to the script:

-ExecutionPolicy Bypass -Command "& '\\dc01\ScriptShare\Endpoint_CDABromiumReadiness.ps1' -CopyToLocation '\\dc01\testshare\' -ReadinessCheckRound 2"

When the scheduled task runs again, the Endpoint_CDABromiumReadiness.PS1 script will generate tsv files with the round number preceding the name of the device:

Figure 13 – TSV files with multiple rounds

Add the -ReadinessCheckRound parameter when executing the Compiler script and the new report generated will show only data from that round.

.\Compiler_CDABromiumReadiness.ps1 -ReadinessCheckRound 2

Figure 14 – Compiled report from round


Aman Motazedian
Senior Consultant
Critical Design Associates

LinkedIn Profile

Automating Lab Builds with XenServer PowerShell – Part 1 Understanding the Requirements

>>Part 1 – Understanding the Requirements
Part 2 – Creating a Custom ISO
Part 3 – Unlimited VM Creation
Part 4 – Roles, Features, and Other Components


I was introduced to Citrix products in September of 2017 and have been working with them every day since. It seems that on a weekly basis I am exploring some new technology or testing an idea that requires a new lab component. I needed a way to automate tasks, and my first target was the creation of new virtual machines (VMs).

The XenServer hypervisor is my preferred platform. While not particularly difficult, creating new VMs involves a number of manual, time-consuming steps: machine setup, the initial Windows installation, and post-install configuration.

With all of the manual steps required to build a lab multiplied by the number of virtual machines being built repeatedly, I decided to create a tool, called Automated XenServer Labs (“AXL”), that takes the heavy lifting and user interaction involved in creating new VMs out of the equation to allow more time for actual lab work.

AXL leverages the XenServer PowerShell module and allows for a wide range of configurations. The module allows for the creation and manipulation of VMs, pools, storage, networks, and more. By utilizing it, you can achieve greater efficiency and automation when creating new environments.
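As a small taste of the module, here is a sketch of connecting to a host and starting an existing VM. The cmdlet names come from Citrix’s XenServer PowerShell module (XenServerPSModule), but exact parameters vary by version, so verify against your installed module; the host name, credentials, and VM name are placeholders:

```powershell
# Sketch: connect to a XenServer host and start an existing VM.
# Verify cmdlet parameters against your XenServerPSModule version.
Import-Module XenServerPSModule

Connect-XenServer -Url 'https://xenserver01' -UserName 'root' `
    -Password 'password' -NoWarnCertificates -SetDefaultSession

# Look up a VM by name and power it on
$vm = Get-XenVM -Name 'Lab-DC01'
Invoke-XenVM -VM $vm -XenAction Start

Disconnect-XenServer
```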

You may be wondering at this point how I plan to take away a large part of the user interaction needed to build a Windows VM, and to you I say: a custom-made ISO. An ISO, for those of you who may not know, is a disc image commonly used to install an operating system (OS).

The custom ISO has an unattended answer file in it, which I will talk about in further parts of this series, that allows for no user interaction during the installation process. AXL automates the creation of the Windows ISO to be used during the unattended OS installation and can also allow you to create an Active Directory domain, set IP addresses and names, and install specific server roles and features.

While AXL will automate the installation and configuration of the VMs, some initial user input is required to define how the VMs, ISOs, and Windows features should be configured. Even with that up-front configuration, the overall process will still be much quicker than manually building out all of the VMs.

The following sections will describe the components and infrastructure that are necessary for automating the build of a lab environment using AXL, the files and folders that are required to create the custom ISO, and using it to create a virtually unlimited number of VMs.

Components and Infrastructure
There are multiple items needed for AXL to work, the main items being the infrastructure. The first thing you will need is hardware with XenServer installed since AXL specifically uses the XenServer PowerShell module.

The hardware can be anything from a small form factor Intel NUC, to a custom-built or enterprise-grade server. XenServer can be downloaded for free from Citrix. You will need a My Citrix account to complete the process.

For development, I used the following items, though not all are necessary:

– (3x) Intel NUC Core i7 (XenServer Hosts)
– (1x) Ubiquiti EdgeRouter X
– (1x) HP J9028A ProCurve Switch 1800-24G
– (1x) Endpoint (Can be Windows Server or Windows Desktop OS)

As stated above, not all of these components are necessary since all you really need is a single machine to install XenServer, such as an Intel NUC (or some other server type device) and an endpoint to run AXL from.

NUCs have a very small form factor and are great for lab work; however, RAM and HD/SSD are sold separately, so you will have to account for that in the cost if you plan to use them. I use the switch and router to create different VLANs as needed.

A basic network topology of what I use is shown in Figure 1. I RDP to a VM on one of the NUCs and run AXL.

The most important thing to note is the endpoint where AXL is running must be able to communicate with the XenServer host(s). This means that proper routing and firewall configurations must be in place prior to using AXL. NOTE: Firewall and routing configurations are outside the scope of this post and will not be covered.

Files and Folder Structure
There are a number of files needed for AXL to function. The files used are posted on GitHub, with the most important being the PowerShell script.

The files needed for the ISO creation process are as follows:

– autounattend.xml
– oscdimg.exe
– Windows ISOs and associated licenses (MSDN, Visual Studio, etc)
– Expanded XenServer Tools (this is optional)

I won’t go in depth on any of these files in this part as they will be discussed in Part 2 of this series.

The only file required for the VM creation process is the PowerShell module for XenServer, which is actually a folder. Once you have all the required files and folders, put them either in a folder on the root of the C:\ drive or in a folder on the Desktop.

I would recommend creating a folder just for the PowerShell script and all the necessary files so they are all easily accessible.

The folder structure I use is as follows:


To recap, you will want to get all the required files and folders for AXL and make a suitable folder structure. With the exception of the Windows and XenServer ISOs, all of the files and folders can be downloaded from GitHub.

You will want at least one endpoint (workstation or server) running AXL and one XenServer to host the VMs. With an understanding of the components, files, and folders, you should now have a solid grasp of the overall requirements for using AXL.

And don’t forget to check out Part 2 where the ISO creation process will be discussed in further detail.

Zach Thurmond
IT Consultant
Critical Design Associates

LinkedIn Profile