Optimizing Windows 10 Upgrades with Ivanti Endpoint Manager (EPM)

Introduction

In a recent customer engagement, the client requested that Windows 10 workstations in their environment be upgraded using Ivanti Endpoint Manager (EPM).

Ivanti has a recommended method to upgrade Windows 10 workstations to newer versions through their service pack definitions.

The service pack definitions are found in the Patch and Compliance tool and can be used to determine whether an endpoint is eligible for the upgraded version of Windows. Each definition specifies an ISO for the deployment, which cannot be downloaded via the Patch and Compliance tool.

The ISO must be downloaded separately and renamed to match what is configured in the definition. There are both pros and cons to using the recommended method:

ISO Method

Pros:

  • Easy to deploy
  • Simple configuration

Cons:

  • Space requirements (2x ISO size)
  • Large performance impact
  • Poor end-user awareness

When deploying any patch or distribution package, it is important to do so consistently each time to achieve expected results.

For this reason, I developed a Software Distribution method that would offer versatility and consistency with any Windows 10 upgrade. There are pros and cons to this method as well:

Software Distribution Method

Pros:

  • Lower space requirements (1x ISO size)
  • Full end-user awareness
  • No performance impact

Cons:

  • More involved configuration
  • Leaves the machine unusable for the duration of the deployment

Deploying Windows 10 Upgrades via Patch and Compliance

Ivanti’s recommended method for upgrading Windows 10 is fairly straightforward to set up and deploy.

After the ISO is named according to what is configured in the definition file, all that is left to do is deploy it to targeted endpoints.

After the repair is scheduled and the task starts, the Patch and Compliance deployment proceeds as follows:

  1. Copy the ISO to the machine
  2. Mount the ISO and extract the contents
  3. Unmount the ISO and start the upgrade process with the now local files
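Conceptually, those three steps can be sketched in PowerShell as follows; the paths and ISO name are illustrative examples, and the actual work is performed by the EPM repair task itself:

```powershell
# Illustrative sketch only -- the EPM repair task performs these steps itself.
# The ISO path and extraction folder below are hypothetical examples.
$IsoPath     = 'C:\Temp\Windows10Upgrade.iso'
$ExtractPath = 'C:\Temp\Win10Media'

# Mount the ISO and determine which drive letter it received
Mount-DiskImage -ImagePath $IsoPath
$DriveLetter = (Get-DiskImage -ImagePath $IsoPath | Get-Volume).DriveLetter

# Extract the contents locally (this is why 2x the ISO size in disk space is needed)
Copy-Item -Path "$($DriveLetter):\*" -Destination $ExtractPath -Recurse -Force

# Unmount the ISO and start the upgrade from the now-local files
Dismount-DiskImage -ImagePath $IsoPath
Start-Process -FilePath "$ExtractPath\setup.exe" -ArgumentList '/Auto Upgrade'
```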

As previously mentioned, Ivanti’s recommended method for deployment has some cons.

First, the endpoint must have twice the ISO’s worth of free disk space to store both the ISO and its extracted contents; that can easily amount to 8GB or more.

Once the installation starts, a large performance impact will be seen as the upgrade consumes most of the machine’s resources.

Lastly, there is poor end-user awareness as to what is actually happening. EPM does have the capability to provide prompts to the end user with the correct agent settings; however, when using those settings there is still no indication of the progress of the deployment.

Deploying Windows 10 Upgrades via Software Distribution

Ivanti’s Windows 10 upgrade method using Patch and Compliance works, but in this case, the customer needed something that was more user friendly and did not have any impact on performance.

This is how the Software Distribution method came about. The Software Distribution method makes use of two custom batch files.

The first batch file used in the deployment, in this case named GetUserName.bat, simply gets the username of the currently logged-in user, if there is one; the username is written to a temporary text file called Username.txt.

By default, when creating a distribution package, it will run under the SYSTEM account.

This particular package, however, will run under the current user account; this is important for the next batch file in the process. The contents of the GetUserName.bat file can be seen below.

REM -- If C:\Temp doesn't exist, create it and output the current user to Username.txt
REM -- Since the task is running under the current users context, a file will only get
REM -- created if there is a user logged in

if not exist C:\Temp mkdir C:\Temp
REM -- Redirection-first form avoids writing a trailing space after the username
>C:\Temp\Username.txt echo %username%

The second batch file, which will be named Windows10Upgrade.bat, will use the Username.txt output from the previous batch file if it exists.

If the Username.txt file exists, a scheduled task will be created to execute the setup.exe that gets copied to the clients.

Setup.exe is the main executable in a Windows ISO that installs and configures the OS with the parameters you define.

The scheduled task will be created to run in the current user’s context with the highest privileges and will execute one minute from the time it is created.

Running the task with the highest privileges is a requirement; otherwise, the scheduled task will fail. The reason a scheduled task is created is to allow the user to see the GUI portion of the upgrade; if setup.exe were executed under the SYSTEM context, the currently logged-in user would not see anything.

If there is no Username.txt file, setup.exe will just run under the SYSTEM context as that is the default for the distribution package. The contents of the Windows10Upgrade.bat file can be seen below.

REM -- Set the 'name' variable to whatever is in the text file, if it exists
REM -- This text file only gets created if there is a user currently logged in

set /p name=<C:\Temp\Username.txt

REM -- Get the time in 24 hour format, add one minute, and assign it to the 'hhmm' variable

set hh=%time:~0,2%
set mm=%time:~3,2%
REM -- Strip a leading zero so set /A does not interpret 08 or 09 as octal
if "%mm:~0,1%" == "0" set mm=%mm:~1,1%
set /A mm=%mm%+1
if %mm% GTR 59 set /A mm=%mm%-60 && set /A hh=%hh%+1
set P=00%mm%
if %mm% LSS 10 set mm=%P:~-2%
if %hh% == 24 set hh=00
if "%hh:~0,1%" == " " set hh=0%hh:~1,1%
set hhmm=%hh%:%mm%

REM -- If the Username.txt exists, that means a user is logged in, so create a scheduled task
REM -- Set the scheduled task to run with the highest privileges and under the currently logged in user
REM -- This will ensure an update prompt is seen by the user during the upgrade
REM -- Otherwise, just run setup.exe as SYSTEM since no user is logged in and Username.txt does not exist

if exist C:\Temp\Username.txt (
schtasks /create /s %computername% /tn "Windows Upgrade" /sc once /tr "%cd%\Setup.exe /Auto Upgrade /Telemetry Disable /ShowOOBE none /DynamicUpdate disable" /st %hhmm% /rl highest /ru %userdomain%\%name%
del C:\Temp\Username.txt
) else (
Setup.exe /Auto Upgrade /Telemetry Disable /ShowOOBE none /DynamicUpdate disable
)

The batch files, along with the ISO itself, are the main components of this deployment method; below is the full list of items and configurations needed:

  • Windows 10 ISO (Extracted to a folder)
  • GetUserName.bat (In the same folder as the Extracted ISO)
  • Windows10Upgrade.bat (In the same folder as the Extracted ISO)
  • IIS MIME type for Default Website
    • Type: application/octet-stream
    • Extension: .
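If you prefer to script the IIS change rather than use the IIS Manager GUI, a sketch using the WebAdministration module might look like the following (the '.' extension covers the extensionless files in the extracted ISO; run this on the server hosting the share):

```powershell
# Sketch: add a MIME type to the Default Web Site so extensionless files
# from the extracted ISO can be served to clients
Import-Module WebAdministration

Add-WebConfigurationProperty -PSPath 'IIS:\Sites\Default Web Site' `
    -Filter 'system.webServer/staticContent' -Name '.' `
    -Value @{ fileExtension = '.'; mimeType = 'application/octet-stream' }
```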

This method allows for a seamless, quick, and efficient deployment that will provide the end-users with a good experience if logged in during the deployment.

If they are logged in, they will have full insight into what is happening. The general process for the entire deployment is as follows:

  • The task starts and either begins the download on the client or starts executing the batch files if already downloaded
    • GetUserName.bat runs and outputs a Username.txt file to C:\Temp that contains the username of the currently logged-in user if there is one. A file does not get created if there is no user logged in.
    • Next, Windows10Upgrade.bat will run and determine if there is a Username.txt file
      • If there is a Username.txt file, a scheduled task will be created for the current user, obtained from the Username.txt file
      • If there is no Username.txt file, setup.exe will run under the SYSTEM context as is the default for the package
    • The machine will transition to a blue screen showing the progress of the installation after about 30-45 seconds and will be unusable for approximately 45 minutes to 1.5 hours; the time can vary depending on hardware capabilities

As you can see, the process is fairly straightforward, and anything that gets created along the way, such as the Username.txt file and the scheduled task, is cleaned up afterward.

To make this process more user friendly, one can also pair this entire deployment with notification messages or deferment timers to provide more control to the end-user.

These are a few examples of the flexibility that EPM offers. Below is a short video demonstrating how the deployment works and how it is set up.

Ivanti Endpoint Manager (EPM) Demo & Deployment Video

In Conclusion

Thank you for reading and please feel free to reach out if you have questions, comments, or concerns about the information presented in this article.

Zach Thurmond
IT Consultant
Critical Design Associates

LinkedIn Profile

Automating Lab Buildouts with XenServer PowerShell – Part 4 Roles, Features, and Other Components

Part 1 – Understanding the Requirements
Part 2 – Creating a Custom ISO
Part 3 – Unlimited VM Creation
>>Part 4 – Roles, Features, and Other Components

Introduction

Creating an automated lab has its benefits, but what about the additional configuration of roles and features after all the servers are built? Building out all of these components can take some time, time that you may not have.

For this reason, AXL has the functionality to add a small subset of additional roles and features to any server that was created. The current roles and features that can be installed and configured with AXL include Active Directory Domain Services (AD DS), Active Directory Certificate Services (AD CS), and Distributed File System (DFS).

It’s important to note that you can only configure these additional roles and features if the custom ISO you created in Part 2 has XenServer Tools in it. If you did not choose to put XenServer Tools in the ISO, there will be no way to grab the server’s IP address after installation.

Upon completion of server creation, you will be prompted as to whether you want to configure additional roles and features. Upon selecting yes, you will get a prompt as shown in Figure 1. If you choose to install any of the additional roles and features, AD DS is the only requirement, as indicated by its being automatically checked and grayed out on the component selection form; everything else is optional.

Each of the other roles and features requires the server to be part of a domain, which is why AD DS is a requirement. The total additional time to completion will depend on the selected roles and features; each one takes a varying amount of time depending on how large the buildout is.

Figure 1 – Component

AD DS Buildout

Upon selecting to build out additional roles and features, you are required to configure AD DS. The complete configuration includes a mandatory AD DS configuration and an optional User, Group, and OU configuration. Note that if at any time during the configuration of any form you wish to go back and reconfigure something, you can do so by selecting the Previous button, if present.

The configuration for AD DS is a lot like the normal configuration you would go through if you were doing it directly on the server; however, there are some other settings on this form that you would normally configure prior to domain creation, notably the IP configuration, as seen in Figure 2. Starting at the top, you will need to configure the local administrator username and password (configured when making the custom ISO), the domain name, and the safe mode password.

In the next section, you will notice a large list box on the left with all the servers you created in the previous form. Each server will need to be configured with an IP address, default gateway, subnet mask, and DNS server(s), which can be done by selecting each server individually from the list box. The DNS server configuration is important when joining a server to the domain: you will want at least one domain controller IP in the DNS server list for proper functionality. As you fill in the text boxes for each server, an array is simultaneously populated with the input to allow complete control over the configuration.


Figure 2 – Domain Buildout

Below, you will see a code snippet on how the IP configurations are actually changed.

Function ChangeIPAddresses {
 
    foreach($XenVMServer in ($Global:AllCreatedServers | sort)) {
 
    #Define necessary parameters for IP configuration
    $ConnectionPassword = ConvertTo-SecureString -AsPlainText -Force -String $LocalPasswordTextBox.Text
    $ConnectionCreds = New-Object -typename System.Management.Automation.PSCredential -ArgumentList "$($Global:OldIPAddresses[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)])\$($LocalUsernameTextBox.Text)",$ConnectionPassword
    $NewIPAddress = $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)]
    $PrefixLength = Convert-IpAddressToMaskLength $Global:SubnetMasks[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)]
    $DefaultGateway = $Global:DefaultGateways[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)]
    $DNSServers = "$($Global:PrimaryDNSServers[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)]),$($Global:SecondaryDNSServers[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)])"
 
        Invoke-Command -ComputerName $Global:OldIPAddresses[($Global:AllCreatedServers | sort).IndexOf($XenVMServer)] -credential $ConnectionCreds -ScriptBlock {
 
            param ($NewIPAddress, $PrefixLength, $DefaultGateway, $DNSServers)
 
            #Define the original IP address
            $OriginalIPAddress = ((Get-NetIPConfiguration).IPv4Address).IPAddress
 
            #Set the DNS Servers
            Set-DnsClientServerAddress -InterfaceAlias (Get-NetIPConfiguration).InterfaceAlias -ServerAddresses $DNSServers
 
            #Disable IPv6
            Disable-NetAdapterBinding -InterfaceAlias (Get-NetIPConfiguration).InterfaceAlias -ComponentID ms_tcpip6
 
            #Set the new IP address with the IP, Subnet Mask, and Default Gateway
            New-NetIPAddress -IPAddress $NewIPAddress -InterfaceAlias (Get-NetIPConfiguration).InterfaceAlias -PrefixLength $PrefixLength -DefaultGateway $DefaultGateway
                
                #Remove the old IP configuration only if the new and old IPs don't match
                if((((Get-NetIPConfiguration).IPv4Address).IPAddress | where {$_ -match $OriginalIPAddress}) -and ($NewIPAddress -NotMatch $OriginalIPAddress)) {
 
                Remove-NetIPAddress -IPAddress (((Get-NetIPConfiguration).IPv4Address).IPAddress | where {$_ -match $OriginalIPAddress}) -InterfaceAlias (Get-NetIPConfiguration).InterfaceAlias -Confirm:$False
 
                }
 
        } -ArgumentList $NewIPAddress, $PrefixLength, $DefaultGateway, $DNSServers -AsJob
    
    WaitScript 2
 
    }
 
}

After all the aforementioned information is filled in, the next thing to configure is which servers you want to make Domain Controllers. There must be at least one domain controller; if multiple are selected, you can choose which one will be the primary Domain Controller. The first server selected automatically becomes the primary, but this can be changed if desired.

Once everything is configured to your liking, you need to validate the configuration by selecting the validate button. This will verify correct syntax for the domain name, safe mode password, IP schemas, and other minor configurations.
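As an illustration of the kind of syntax checks validation performs, a minimal sketch is shown below; the patterns and values are my own examples, not AXL's actual validation code:

```powershell
# Hypothetical examples of validate-button style checks (not AXL's actual code)

# A domain name needs at least two dot-separated labels (e.g. lab.local)
$DomainName  = 'lab.local'
$DomainValid = $DomainName -match '^[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$'

# IP addresses can be checked by letting .NET attempt to parse them
$IPAddress = '192.168.1.10'
$IPValid   = [System.Net.IPAddress]::TryParse($IPAddress, [ref]$null)

# A safe mode password can be checked for minimum complexity, e.g. length
$SafeModePassword = 'P@ssw0rd!'
$SafeModeValid    = $SafeModePassword.Length -ge 8
```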

Below is a snippet of code outlining the primary Domain Controller promotion process.

Function PromotePrimaryDomainController {
 
    foreach($DCServer in ($DomainControllersListBox.Items | where {$_ -match [regex]'\*'})) {
 
    #Define Domain specific parameters
    $DomainName = $DomainNameTextBox.Text
    $SafeModePassword = ConvertTo-SecureString $SafeModePasswordTextBox.Text -AsPlainText -force
    $ConnectionPassword = ConvertTo-SecureString -AsPlainText -Force -String $LocalPasswordTextBox.Text
    $ConnectionCreds = New-Object -typename System.Management.Automation.PSCredential -argumentlist "$($Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DCServer.Replace("*",''))])\$($LocalUsernameTextBox.Text)",$ConnectionPassword
 
        if($DFSCheckbox.CheckState -eq "Checked") {
    
            $VMStatusTextBox.AppendText("`r`nInstalling DFSR Components on $($DCServer.Replace("*"," ")) for DFS Buildout")
 
            $DFSComponents = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DCServer.Replace("*",""))] -credential $ConnectionCreds -ScriptBlock {
 
            #Install DFSR components if DFS was selected during component selection, this is necessary for DFS buildout functionality
            Install-WindowsFeature FS-DFS-Replication -IncludeManagementTools
 
            } -AsJob
 
            WaitJob $DFSComponents
    
        }
 
        $DCPromotion = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DCServer.Replace("*",""))] -credential $ConnectionCreds -ScriptBlock {
 
        param ($DomainName,$SafeModePassword)
 
        #Create the AD DS Forest with the parameters specified in the AD DS buildout form
        Install-ADDSForest -DomainName $DomainName -SafeModeAdministratorPassword $SafeModePassword -DomainNetBIOSName $DomainName.Remove($DomainName.IndexOf(".")).ToUpper() -SYSVOLPath "C:\Windows\SYSVOL" -LogPath "C:\Windows\NTDS" -DatabasePath "C:\Windows\NTDS" -InstallDNS -Force
 
        } -ArgumentList $DomainName,$SafeModePassword -AsJob
 
        WaitJob $DCPromotion
       
        #If the Domain Controller does not reboot automatically within 15 seconds, reboot the machine
        if(Test-Connection -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DCServer.Replace("*",""))] -Count 1 -ErrorAction SilentlyContinue) {
 
        Invoke-XenVM -Name $DCServer -XenAction CleanReboot 
 
        }
 
    }
 
} 

No matter what was chosen on the initial component selection screen, after selecting Next on the domain buildout form you will always get the User, Group, and OU buildout form in case you want to configure any users, groups, or OUs for your environment.

This form is entirely optional and does not require any of the fields to be filled out. If you do not want to configure any users, groups, or OUs, simply move on to the next form, if any.

However, if you do choose to fill it out, you will notice three different sections, each labeled with its intended purpose. Figure 3 depicts what a filled-out form might look like.

Figure 3 – User Group OU Buildout

Each OU added to the structure can be placed under any OU already created and can be as many levels deep as you wish, though I would not recommend any more than 10 levels for any Active Directory structure. For the Users and Groups, you can input the required information and select add, which will add it to the respective list box.

You will notice there is no validate button for this form; that is because the validation is done before any item is added to a list box. This configuration provides the flexibility to configure any combination of users, groups, and OUs, or none at all.
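Under the covers, a buildout like this maps onto the standard ActiveDirectory module cmdlets; a minimal sketch with example names follows (the OU, group, and user shown are hypothetical, not AXL defaults):

```powershell
# Minimal sketch using the ActiveDirectory module; all names are examples
Import-Module ActiveDirectory

# OUs nest by pointing -Path at the parent OU's distinguished name
New-ADOrganizationalUnit -Name 'Corp' -Path 'DC=lab,DC=local'
New-ADOrganizationalUnit -Name 'LabUsers' -Path 'OU=Corp,DC=lab,DC=local'

# Create a group and a user in the nested OU, then add the user to the group
New-ADGroup -Name 'LabAdmins' -GroupScope Global -Path 'OU=LabUsers,OU=Corp,DC=lab,DC=local'
New-ADUser -Name 'jdoe' -Path 'OU=LabUsers,OU=Corp,DC=lab,DC=local'
Add-ADGroupMember -Identity 'LabAdmins' -Members 'jdoe'
```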

AD Certificate Services Buildout

Figure 4 – AD CS Buildout

The next form, if this role was chosen from the form in Figure 1, is AD CS. With this form, seen in Figure 4, you can completely configure a normal AD CS buildout, as well as AD CS Web Enrollment and an OCSP Responder.

Each server added to the list box will need to be configured independently, which can be done by selecting each server from the list box and configuring the required fields.

Each field is entirely separate for each server, meaning you can apply a different configuration to each one depending on the CA type chosen. Each server in the list box can be either a root CA or a subordinate CA. If you choose to create a subordinate CA, you will have a more limited selection of fields available compared to a root CA configuration.

This is because the subordinate CA gets all of its configuration from the root CA. Below is a snippet of code that is used to promote the specified CAs.

Function InstallAllServices {
 
$NonSubordinates = @()
$Subordinates = @()
$AllCAServers = @()
    
    #Fill arrays with Specified Certificate Authorities
    foreach($CAServer in $CertificateAuthoritiesListBox.Items){
 
        if($Global:CATypes[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)] -notmatch "Subordinate") {
 
        $NonSubordinates += $CAServer
 
        }
 
        else {
        
        $Subordinates += $CAServer
        
        }
 
    }
 
    #Fill primary array starting with all non-subordinate CAs
    foreach($NonSubordinate in $NonSubordinates) {
    
    $AllCAServers += $NonSubordinate
    
    }
 
    #Next, fill primary array with all subordinate CAs
    foreach($Subordinate in $Subordinates) {
    
    $AllCAServers += $Subordinate
    
    }
 
    foreach($CAServer in $AllCAServers){
 
    #Define necessary connection parameters
    $DomainName = $DomainNameTextBox.Text
    $ConnectionPassword = convertto-securestring -AsPlainText -Force -String $LocalPasswordTextBox.Text
    $DomainAdminCreds = new-object -typename System.Management.Automation.PSCredential -argumentlist "$($DomainName.Remove($DomainName.IndexOf(".")).ToUpper())\Administrator",$ConnectionPassword
 
        #If the server is not a subordinate CA, define all parameters
        if($Global:CATypes[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)] -notmatch "Subordinate") {
 
            $RootCA = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($CAServer)] -credential $DomainAdminCreds -ScriptBlock {
 
            param ($CAType, $CAName, $HashAlgorithm, $KeyLength, $CryptoProvider, $ValidityPeriod, $ValidityPeriodUnits, $DomainAdminCreds, $DomainName)
 
            Install-AdcsCertificationAuthority -CAType $CAType -CACommonName $CAName -HashAlgorithmName $HashAlgorithm -KeyLength $KeyLength  -CryptoProviderName $CryptoProvider -ValidityPeriod $ValidityPeriod -ValidityPeriodUnits $ValidityPeriodUnits -Credential $DomainAdminCreds -Confirm:$False
            
            } -ArgumentList $Global:CATypes[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CANames[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CAHashAlgorithm[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CAKeyLength[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CACryptoProvider[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CAValidityPeriod[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CAValidityPeriodUnits[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $DomainAdminCreds, $DomainName -AsJob
 
            WaitJob $RootCA
 
        }
 
        #Else, only create a CA using the parent specified and a few other parameters
        else {
 
            $SubordinateCA = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($CAServer)] -credential $DomainAdminCreds -ScriptBlock {
 
            param ($CAType, $CAName, $ParentCAName, $ParentCA, $DomainAdminCreds, $DomainName)
 
            Install-AdcsCertificationAuthority -CAType $CAType -ParentCA "$ParentCA.$DomainName\$ParentCAName" -CACommonName $CAName -Credential $DomainAdminCreds -Confirm:$False
 
            } -ArgumentList $Global:CATypes[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CANames[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $Global:CANames[$CertificateAuthoritiesListBox.Items.IndexOf($Global:ParentCA[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)])], $Global:ParentCA[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)], $DomainAdminCreds, $DomainName -AsJob
 
            WaitJob $SubordinateCA
 
        } 
        
        #If the server was chosen as a web enrollment server, install the role
        if($Global:CAWebEnrollment[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)] -eq "Checked") {
    
            $EnrollmentPromotion = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($CAServer)] -credential $DomainAdminCreds -ScriptBlock {
 
            Install-AdcsWebEnrollment -Confirm:$False
 
            } -AsJob
 
            WaitJob $EnrollmentPromotion
 
        }
 
        #If the server was chosen as an online responder, install the role
        if($Global:CAResponder[$CertificateAuthoritiesListBox.Items.IndexOf($CAServer)] -eq "Checked") {
 
        $VMStatusTextBox.AppendText("`r`nPromoting $CAServer to an Online Responder")
    
            $ResponderPromotion = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($CAServer)] -credential $DomainAdminCreds -ScriptBlock {
 
            Install-AdcsOnlineResponder -Confirm:$False
 
            } -AsJob
 
            WaitJob $ResponderPromotion
    
        }
 
    WaitScript 15
 
    }
 
}

Distributed File System Build

The last form, if the component was chosen, is the DFS buildout form. This form allows full configuration of a complete DFS structure, including namespaces, replicated folders, and replication groups. DFS allows for replication of folders and folder contents across multiple servers; this configuration requires that at least two servers be chosen for proper replication to take place.

Once the DFS servers are chosen, you need to determine what namespaces you want to create, whether you want to have just one namespace, or split it up for a more complex architecture.

Each DFS folder created in the lower section of the form will need to be in a DFS namespace, specified as the DFS root in the form. Each server will get a DFSRoots folder created in the root of the C:\ drive; this will house all of the namespaces created.

Furthermore, each folder will be created in the DFS root specified; for instance, if you created a DFS root called Common and then created a folder named Backups with Common as its DFS root, the folder would be created at C:\DFSRoots\Common\Backups.

There is one optional parameter for a DFS folder: the target path. The target path specifies where the DFS folder will point; if one is not specified, the default location in DFSRoots is used. Using the example above, if you specified a target path of C:\SQL Backups, the DFS folder Backups would redirect to C:\SQL Backups instead of pointing to C:\DFSRoots\Common\Backups.

If you are unfamiliar with DFS, all of these folders live under \\<domain>\<namespace>. This structure allows for seamless, highly available, and redundant file and folder access, even if one or more servers are down, depending on the size of the infrastructure.
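For context, creating one of those namespace roots with the DFSN cmdlets looks roughly like the sketch below; the server name, domain, and namespace are examples, and AXL's own root-creation code may differ:

```powershell
# Sketch: share the DFSRoots folder and create a domain-based namespace root
# (server name FS01, domain lab.local, and namespace Common are examples)
New-Item -ItemType Directory -Path 'C:\DFSRoots\Common' -Force
New-SmbShare -Path 'C:\DFSRoots\Common' -Name 'Common'

New-DfsnRoot -Path '\\lab.local\Common' -TargetPath '\\FS01\Common' -Type DomainV2
```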

Below is a snippet of code used to create the DFS folders. You may notice there are nested Invoke-Commands used for the DFS buildout; this is because the DFSN and DFSR commands would not work when executed remotely directly on the selected servers.

Function CreateDFSFolders {
 
#Define necessary connection parameters 
$DomainName = $DomainNameTextBox.Text
$ConnectionPassword = convertto-securestring -AsPlainText -Force -String $LocalPasswordTextBox.Text
$DomainAdminCreds = new-object -typename System.Management.Automation.PSCredential -argumentlist "$($DomainName.Remove($DomainName.IndexOf(".")).ToUpper())\Administrator",$ConnectionPassword
 
#Define the primary domain controller to execute all the commands on
$PrimaryDC = ($DomainControllersListBox.Items | where { $_ -match [regex]"\*" }).ToString().Replace("*","")
 
    foreach($DFSFolder in $DFSFoldersListBox.Items){
        
        #If there was a DFS folder target specified, continue with creating that folder and the folder in C:\DFSRoots\<Namespace>
        if($Global:DFSFolderTarget[$Global:DFSFolders.IndexOf($DFSFolder)] -ne $Null) {
 
            foreach($DFSServer in $DFSServersListBox.Items) {
            
            $DFSPath = "\\$DomainName\$($Global:DFSFolderRoot[$Global:DFSFolders.IndexOf($DFSFolder)])\$DFSFolder"
 
                if($DFSServer -match [regex]'\*') {
            
                $DFSServer = $DFSServer.Replace("*","")
            
                }
 
                $DFSFolderCreation = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DFSServer)] -credential $DomainAdminCreds -ScriptBlock {
 
                param ($DFSFolder,$DFSRoot)
 
                #Create new DFS folder and share it
                New-Item -ItemType Directory -Path "C:\DFSRoots\$DFSRoot\" -Name "$DFSFolder" -Force
                New-SmbShare -Path "C:\DFSRoots\$DFSRoot\$DFSFolder" -Name "$DFSRoot\$DFSFolder"
                Grant-SmbShareAccess -Name "$DFSRoot\$DFSFolder" -AccountName "Everyone" -AccessRight Full -Force 
 
                } -ArgumentList $DFSFolder,$Global:DFSFolderRoot[$Global:DFSFolders.IndexOf($DFSFolder)] -AsJob
 
                WaitJob $DFSFolderCreation
 
                WaitScript 5
 
                $FolderTarget = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DFSServer)] -credential $DomainAdminCreds -ScriptBlock {
 
                param ($DFSPath,$DFSServer,$DFSFolder,$DomainAdminCreds,$PrimaryDC,$OriginalServer)
 
                    Invoke-Command -ComputerName $PrimaryDC -credential $DomainAdminCreds -ScriptBlock {
 
                    param ($DFSPath,$DFSServer,$DFSFolder,$OriginalServer)
 
                        #If this is the primary DFS server, use the DfsnFolder command, otherwise use DfsnFolderTarget
                        if($OriginalServer -match [regex]"\*") {
 
                        New-DfsnFolder -Path "$DFSPath" -TargetPath "\\$DFSServer\$DFSFolder"
                
                        }
 
                        else {
                
                        New-DfsnFolderTarget -Path "$DFSPath" -TargetPath "\\$DFSServer\$DFSFolder"
                
                        }
 
                    } -ArgumentList $DFSPath,$DFSServer,$DFSFolder,$OriginalServer
 
                } -ArgumentList $DFSPath,$DFSServer,$DFSFolder,$DomainAdminCreds,$PrimaryDC,($DFSServersListBox.Items | where {$_ -match $DFSServer}) -AsJob
 
                WaitJob $FolderTarget
        
            }
 
        }
 
        #Else, just make the new folder in C:\DFSRoots\<Namespace>
        else {
 
            foreach($DFSServer in $DFSServersListBox.Items) {
 
                if($DFSServer -match [regex]'\*') {
            
                $DFSServer = $DFSServer.Replace("*","")
            
                }
 
                $StandaloneFolder = Invoke-Command -ComputerName $Global:IPAddresses[($Global:AllCreatedServers | sort).IndexOf($DFSServer)] -credential $DomainAdminCreds -ScriptBlock {
 
                param ($DFSFolder,$DFSRoot)  
                
                #Create new DFS folder and share it
                New-Item -ItemType Directory -Path "C:\DFSRoots\$DFSRoot\" -Name $DFSFolder -Force
                New-SmbShare -Path "C:\DFSRoots\$DFSRoot\$DFSFolder" -Name "$DFSFolder"
                Grant-SmbShareAccess -Name "$DFSFolder" -AccountName "Everyone" -AccessRight Full -Force
 
                } -ArgumentList $DFSFolder,$Global:DFSFolderRoot[$Global:DFSFolders.IndexOf($DFSFolder)] -AsJob
 
                WaitJob $StandaloneFolder
            
            }
 
        }
 
    }
 
}

So, as a workaround, I got it working with a nested Invoke-Command to essentially run a remote command inside of a remote command. This allowed me to execute the DFSN commands on a domain controller from the selected DFS server; the command sequence is: My PC -> DFS Server -> DC. This was frustrating to figure out because the initial commands gave a very arbitrary and ambiguous error code, but after rigorous testing I was finally able to get it to work.

Figure 5 – DFS Buildout

Conclusion

This concludes all segments in this four-part series. We have discussed everything from the items needed to begin creating automated labs, to creating a custom ISO for an unattended installation, to rapidly and seamlessly creating VMs from default, custom, or blank templates, and finally configuring AD DS, AD CS, and DFS, all with the AXL tool.

There is a lot of functionality built into AXL, and all of it is available on GitHub. Make sure you download not only the PowerShell script but all of the supporting files as well; refer to Part 1 of the series for a better understanding of how the file structure should be set up.

I have really enjoyed creating AXL and hope everyone who uses it finds it to be a time-saving and useful tool.

Zach Thurmond
IT Consultant
Critical Design Associates

LinkedIn Profile

Automating Lab Builds with XenServer PowerShell – Part 3 Unlimited VM Creation

Part 1 – Understanding the Requirements
Part 2 – Creating a Custom ISO
>>Part 3 – Unlimited VM Creation
Part 4 – Roles, Features, and Other Components

Introduction

Once you have all the necessary files and have created a custom ISO for an unattended installation of Windows, the next step is to automate the creation of VMs. Doing this manually through XenCenter can take some time, especially if you are creating a large number of VMs. With the VM Build portion of AXL, you can build all 10 VMs with the same configuration from a single screen. You have the option to make VMs from sysprepped templates, blank templates, or default templates.

NOTE: If you plan on creating VMs in a pool with more than one XenServer host, there are a few considerations before moving forward, which include the following:

  • Rename each host’s local storage repository to something more identifiable than the default name ‘Local Storage’
  • When connecting to the XenServer host to create the VMs, you will need to connect to the pool master

Creating VMs From Sysprepped Templates

Upon first launch of the VM creation form, you’ll notice everything is disabled except for a few areas. This is because a connection to a XenServer host is required for any of the information fields to be populated.

If the XenServer PowerShell module is not found in any subdirectory, you will be prompted to input its location, as seen in Figure 1. The module can also be downloaded from GitHub here.

Figure 1 – Module Location

The process for creating a sysprepped machine using AXL is quite streamlined compared to the other processes, especially compared to using default templates. Starting at the top and working down, the Select Storage Location drop-down is where you select the storage repository, either local or network storage, on which the VM(s) will be created.

This information is pulled directly from the host/pool and does not include ISO repositories. In the Input VM Names text box, input each VM you want to create and click Add. Multiple machines can be added by separating the names with commas or semicolons, but choose one delimiter or the other, not both.

You can also remove names by selecting them and clicking Remove.
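The name parsing described above can be sketched roughly as follows; this is an illustrative snippet, not AXL's exact code:

```powershell
# Hypothetical sketch of parsing the Input VM Names text box.
# Splits on commas or semicolons and trims stray whitespace.
$InputText = "WEB01, WEB02, WEB03"

$VMNames = $InputText -split '[,;]' |
    ForEach-Object { $_.Trim() } |
    Where-Object { $_ }   # drop empty entries

$VMNames   # -> WEB01, WEB02, WEB03
```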

On to the most important part, the Select a Template option. By default, only non-default templates are populated in the drop-down. Selecting a template is all that is needed to create VMs from sysprepped templates. After all the information is input, click the Validate button to ensure the configuration is correct and, if it is, click Create to start building the VMs.

There is one caveat to using a sysprepped template, though, and it somewhat defeats the purpose of AXL: you still have to go through the initial configuration of every VM created, whereas with a custom ISO you don't.

Creating VMs from a Blank Template

You may be asking what I mean by a blank template; it is simply a template that has no operating system installed. Only the virtual hardware, such as the disk, CPU, RAM, and network, is configured. The benefit of using a blank template is that it saves you a little time when running through the configuration.

Figure 2 – Sysprepped Buildout Example

Making VMs from a blank template is very similar to making them from sysprepped templates, with one exception: you have to select the ISO repository where the ISO is located and which ISO you want installed. You may notice the ISO repository drop-down is disabled in Figure 2; that is because I only had one ISO repository created for my pool.

However, if multiple repositories exist, the drop-down is enabled and you can choose the appropriate one. To select an ISO, first indicate that you want to insert one by checking the Insert ISO checkbox. You can then select which ISO to insert into the created VMs and click Validate to check your configuration. So long as everything checks out, you can create the configured VMs.

Figure 3 – Blank Buildout Example

NOTE: You can create multiple types of VMs, for instance Windows 10 and Server 2016, but you will need to create them separately. After you click Create, you will receive a prompt asking if you would like to create more VMs; click Yes if you want to make VMs with different operating systems. Doing this will clear the form completely.

Creating VMs From Default Templates

Non-default templates, blank and sysprepped, are great, but you also have the option to create VMs from default templates. There is a little more configuration involved with default templates because you have to specify how much RAM, CPU, and disk to provide, as well as which network the VMs should be on.

After you have filled out all required information from the blank template example, the only thing needed is to check the Default Templates checkbox. This populates the drop-down with all default templates and provides some additional drop-downs to configure the resources you will allocate to the VMs.

Each drop-down is populated with information pulled from the host to accurately configure the VMs, so you cannot overcommit resources that are not there. The RAM and Disk Size options are throttled back depending on how many VMs you have listed in the list box above.

Say, for instance, you listed 5 VMs in the list box and your host has 32 GB of RAM; taking 2 GB out for the host, you would have a maximum of 6 GB for each VM.
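The throttling arithmetic for that example can be verified with a quick calculation; the 2 GB host reservation is the figure from the example, not a value AXL exposes:

```powershell
# Worked example of the per-VM RAM ceiling described above.
$HostRAMGB   = 32   # total RAM on the host
$HostReserve = 2    # RAM held back for the host itself
$VMCount     = 5    # VMs listed in the list box

$MaxRAMPerVM = [math]::Floor(($HostRAMGB - $HostReserve) / $VMCount)
$MaxRAMPerVM   # -> 6
```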

Figure 4 – Default Buildout Example

How it All Works

Knowing how to use the form is important, but so is knowing what is actually happening in the background. The code snippet below is the function that actually creates the VMs. Depending on what was chosen in the form, VMs will be created from either a blank, sysprepped, or default template.

Because of limitations with cloning VMs via the XenServer PowerShell module, the function will determine if the template chosen is on the same SR where the VMs were specified to be created; this would only affect you if you are using pools.

In the event the VMs being created and the template are on different SRs, a temporary template is copied to the SR where the VMs are to be created, and subsequent VMs are cloned from it if more than one VM is being created. The temporary template, if created, is removed after all VMs have been created.

Function BuildVMs {
 
# Specify DropDown variables 
$VMNames = $NewVMHostnameListBox.Items
$SourceTemplateName = $DropDownTemplates.SelectedItem
$StorageRepositoryName = $DropDownStorage.SelectedItem
$SelectedNetwork = $DropDownNetwork.SelectedItem
 
    foreach($VMName in $VMNames){
 
    # Specify general properties 
    $GetSRProperties = Get-XenSR -Name $StorageRepositoryName
    $GetNetworkProperties = Get-XenNetwork $SelectedNetwork
    $TemplateSRLocation = (Get-XenVM -Name $SourceTemplateName | Select -ExpandProperty VBDs | Get-XenVBD | Select -ExpandProperty VDI | Get-XenVDI | Select -ExpandProperty SR | Get-XenSR).name_label
    $ObjSourceTemplate = Get-XenVM -Name $SourceTemplateName
 
        if($DefaultTemplateCheckbox.CheckState -eq "Checked") { 
 
        # Specify required VM properties
        $VMRAM = ($DropDownRAMAmmount.SelectedItem*1GB)
        $DiskSize = ($DropDownDiskSize.SelectedItem*1GB)
        $VMCPU = $DropDownCPUCount.SelectedItem
        
        # Create new VM from all specified properties
        New-XenVM -NameLabel $VMName -MemoryTarget $VMRAM -MemoryStaticMax $VMRAM -MemoryDynamicMax $VMRAM -MemoryDynamicMin $VMRAM -MemoryStaticMin $VMRAM -VCPUsMax $VMCPU -VCPUsAtStartup $VMCPU -HVMBootPolicy "BIOS order" -HVMBootParams @{ order = "dc" } -HVMShadowMultiplier 1 -UserVersion 1 -ActionsAfterReboot restart -ActionsAfterCrash restart -ReferenceLabel $ObjSourceTemplate.reference_label -HardwarePlatformVersion 2 -Platform @{ "cores-per-socket" = "$VMCPU"; hpet = "true"; pae = "true"; vga = "std"; nx = "true"; viridian_time_ref_count = "true"; apic = "true"; viridian_reference_tsc = "true"; viridian = "true"; acpi = "1" } -OtherConfig @{ base_template_name = $ObjSourceTemplate.reference_label }
        
        $GetVMProperties = Get-XenVM -Name $VMname
        
        WaitScript 1
 
        # Create a new Virtual Disk with the same name as the new VM
        New-XenVDI -NameLabel $VMName -VirtualSize $DiskSize -SR $GetSRProperties -Type user
 
        WaitScript 4
 
        # Specify VDI and Network locations
        $NewVDI = Get-XenVDI -Name $VMName
        $VIFDevice = (Get-XenVMProperty -VM $GetVMProperties -XenProperty AllowedVIFDevices)[0]
 
            if($GetVMProperties -and $NewVDI){
                
                # Create CD drive for the new VM
                New-XenVBD -VM $GetVMProperties -VDI $null -Type CD -mode RO -Userdevice 3 -Bootable $False -Unpluggable $True -Empty $True
                
                # Attach previously created hard drive into the new VM
                New-XenVBD -VM $GetVMProperties -VDI $NewVDI -Type Disk -mode RW -Userdevice 0 -Bootable $True -Unpluggable $True
 
                # Create network interface for the new VM
                New-XenVIF -VM $GetVMProperties -Network $GetNetworkProperties -Device $VIFDevice 
 
                # Mount previously created hard disk
                Get-XenVM -Name $VMName | Select -ExpandProperty VBDs | Get-XenVBD | where {$_.type -eq "Disk"} | Select -ExpandProperty VDI | Set-XenVDI -NameLabel $VMName  
 
            }
            
        }
 
        if($DefaultTemplateCheckbox.CheckState -eq "Unchecked") {
 
            if($TemplateSRLocation -match $GetSRProperties.name_label) {
 
            # Create a clone of the template
            Invoke-XenVM -NewName $VMName -VM $ObjSourceTemplate -XenAction Clone
 
            # Provision the copy into a VM
            Invoke-XenVM -XenAction Provision -Name $VMName
 
            WaitScript 1
 
            # Rename the attached disk to the name of the VM
            Get-XenVM -Name $VMName | Select -ExpandProperty VBDs | Get-XenVBD | where {$_.type -eq "Disk"} | Select -ExpandProperty VDI | Set-XenVDI -NameLabel $VMName
 
            }
 
            else {
            
            # Copy the chosen template to the SR where the VMs are being created
            Invoke-XenVM -NewName "$SourceTemplateName - TEMP" -VM $ObjSourceTemplate  -SR $GetSRProperties -XenAction Copy
 
            # Specify old and new template names
            $SourceTemplateName = "$SourceTemplateName - TEMP"
            $ObjSourceTemplate = Get-XenVM -Name $SourceTemplateName
            
            # Clone the template that was just copied to create the first VM
            Invoke-XenVM -NewName $VMName -VM $ObjSourceTemplate -XenAction Clone
            
            # Provision the copy into a VM
            Invoke-XenVM -XenAction Provision -Name $VMName
 
            WaitScript 1
 
            # Rename the attached disk to the name of the VM
            Get-XenVM -Name $VMName | Select -ExpandProperty VBDs | Get-XenVBD | where {$_.type -eq "Disk"} | Select -ExpandProperty VDI | Set-XenVDI -NameLabel $VMName
 
            # Rename the temporary template's attached disk
            $ObjSourceTemplate | Select -ExpandProperty VBDs | Get-XenVBD | where {$_.type -eq "Disk"} | Select -ExpandProperty VDI | Set-XenVDI -NameLabel $SourceTemplateName
 
            }
        
        }
 
        if($BlankTemplateCheckbox.CheckState -eq 'Checked' -and $DropDownISOs.SelectedItem) {
 
        $SelectedBootISO = $DropDownISOs.Text
        
        # Get the VM, select the CD drive for the VM and attach the ISO
        Get-XenVM -Name $VMName | Select -ExpandProperty VBDs | Get-XenVBD | where {$_.type -eq "CD"} | Invoke-XenVBD -XenAction Insert -VDI (Get-XenVDI -Name $SelectedBootISO).opaque_ref
 
        }
 
    # Start the created VM to begin installing the attached ISO
    $VM = Get-XenVM -Name $VMName
    Invoke-XenVM -VM $VM -XenAction Start -Async
 
    $Global:AllCreatedServers += $VMName
 
    }
 
    #If a temporary template was created, remove it and the associated disk
    if($SourceTemplateName -match "- TEMP") {
    
    WaitScript 1
    
    $ObjSourceTemplate | Select -ExpandProperty VBDs | Get-XenVBD | where {$_.type -eq "Disk"} | Select -ExpandProperty VDI | Remove-XenVDI
 
    WaitScript 1
 
    Remove-XenVM -Name $SourceTemplateName
 
    }
 
}

 

 

Figure 5 – VM Creation Process

Figure 5 shows a visual representation of the VM creation process after filling out the form and clicking create.

Conclusion

There is really no limit to creating VMs other than the available hardware. With the VM Build form, you can rapidly create an unlimited number of VMs in just a few minutes. This pairs especially well with, and was designed for, the custom ISO created in Part 2.

Now that you know all the ways VMs can be created from templates, it's just a matter of doing it. Blank templates provide the most flexibility, as you can do less configuration than default templates while receiving the same results. Blank and default templates used in tandem with the custom ISO provide a seamless way to create VMs.

In Part 4 of this blog post I will be outlining “Roles, Features, and Other Components”.

And don’t forget to check out Part 2 where the ISO creation process will be discussed in further detail.

Zach Thurmond
IT Consultant
Critical Design Associates

LinkedIn Profile

Automating Lab Builds with XenServer PowerShell – Part 2 Creating a Custom ISO

Part 1 – Understanding the Requirements
>>Part 2 – Creating a Custom ISO
Part 3 – Unlimited VM Creation
Part 4 – Roles, Features, and Other Components

Introduction

After reviewing and staging the required files and folders in Part 1, it's time to start the custom ISO creation process. The custom ISO is just one part of the whole, but it is probably the most important; without it, you would just have a bunch of bootless VMs.

To build a custom ISO, you need to start with a full Windows operating system ISO which can be downloaded from an MSDN or Visual Studio subscription. Your Microsoft licensing agreement would also include the necessary product keys for the different OS versions.

This base operating system ISO is part of what is used to create the customized, unattended installation of an OS and will also be used to create the custom ISO. Each of the files and folders (see Part 1) serves its own purpose, which will be discussed in further detail in the following sections, along with the ISO creation form itself.

Where the Magic Happens

Upon first launch of AXL, you will be prompted with a choice (see Figure 1): either create custom ISO(s) or create VMs.

Figure 1 – First Launch Selection

After selecting to create custom ISO(s), the ISO creation GUI, as shown in Figure 2, will display and is used to define all the information needed to create a custom ISO.

Figure 2 – ISO Creation Form

Filling in each text box and drop-down correctly is important to the success of the ISO creation. Below is an explanation of all the fields in the form:

Input ISO Location: This specifies the path to the read-only base ISO (the Windows operating system) that will be used for the custom ISO creation.

Input Target Folder Location: This specifies the destination folder. Everything will be copied to the destination folder, including the base ISO content, autounattend.xml, and XenServer Tools.

Input Autounattend.xml File Location, Input Boot File Location, Input Path to ISO Creation Tool: These three inputs specify the full path to each file and are used in the ISO creation process.

Input Path to XenServer Tools contents: This specifies the parent folder that contains the expanded XenServer Tools contents.

Input Product Key, Admin Name, Admin Password, and Time Zone: All of these fields specify information that is required for, and replaced in, the autounattend.xml file.

New ISO File Name: This specifies the name of the ISO file you will be creating. NOTE: the file extension is not required in the name.

Select Edition: This drop-down specifies which Windows edition gets installed and should be used in conjunction with the selected base ISO. One of the check boxes, either Windows 10 or Server 2016, must be checked before this field is enabled.

After each field is populated, you’ll need to click the Validate button to ensure all the information is entered correctly.

If the validation is successful, the Create button is then enabled. Each field has its own validation, most of which use regular expressions (regex) to determine validity. In the example above, clicking Validate results in an error, as shown in Figure 3. It's important that each field is populated with the appropriate information.

Figure 3 – Validation Error

For the error shown in Figure 3, the product key text box is evaluated with the following regex:

([A-Za-z0-9\-]{6}){4}[A-Za-z0-9]{5}
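You can see how this pattern behaves with PowerShell's -match operator; the key below is a made-up placeholder in the standard five-group format, not a valid product key:

```powershell
# Demonstration of the product key validation regex from above.
$KeyRegex = '([A-Za-z0-9\-]{6}){4}[A-Za-z0-9]{5}'

'ABCDE-FGHIJ-KLMNO-PQRST-UVWXY' -match $KeyRegex   # -> True
'ABCDE-FGHIJ' -match $KeyRegex                     # -> False (too short)
```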

The ISO Construction form, when loaded, will pre-populate some fields depending on where the tool is launched from. If the oscdimg.exe and etfsboot.com files are located in subdirectories of the launch location, their paths will be populated automatically.

The same goes for the expanded XenServer Tools folder. After checking the box to have XenServer Tools installed, all subdirectories are searched and, if the agent is found, the folder path is automatically populated in the text box.

The Custom ISO

After understanding how to use the form and the purpose of the input fields, we can take a look behind the scenes at how the information is used.

The main file that allows for an unattended installation of a Windows operating system is called autounattend.xml. This file contains the instructions for how to install the operating system, including the acceptance of the EULA, the product key, and the username and password for the administrator account. Different Windows operating systems require different autounattend.xml files since each Windows platform is unique.

I have created sample autounattend.xml files for both Windows Server 2016 and Windows 10, which are posted on GitHub.

It’s important to note that the name of the file matters because, when loading the ISO, Windows searches specific locations for this file. Check out this Microsoft article for further information.

The next step is to get this information added to a custom ISO. ISOs are read-only, so you cannot simply copy a file into one. You can, however, copy the contents of an ISO out of it. But that leads to another problem:

How do I recreate the ISO after I have the contents expanded into a folder?

The answer is actually quite simple: oscdimg.exe. Oscdimg can be found in the Windows Assessment and Deployment Kit (ADK), which is available here, or you can simply download the files from the GitHub page provided earlier.

The only files from the ADK that are needed for this tool are oscdimg.exe and etfsboot.com. Oscdimg allows the creation of ISOs from the contents of a folder, which is exactly what is needed.

ISO Creation Process Explained

I’ve stepped you through what goes into the ISO and how to appropriately fill out the ISO creation form, but I haven’t really touched on the full process yet.

The ISO creation process is fairly straightforward and Figure 4 provides a visual representation.

Figure 4 – ISO Creation Flowchart

Starting at the top, you’ll need to fill out the ISO Construction form. After all the required fields are complete, the next step will be to click validate, which will then validate each populated field to ensure the ISO creation goes off without a hitch.

If a field is not filled in correctly, the tool will indicate which field is wrong and the probable cause. If everything checks out during the validation process, the Create button becomes enabled. Once you click Create, the process cannot be stopped, so double-check each field to ensure accuracy.

After the creation process has begun, the selected base Windows ISO is automatically mounted to the next available drive letter and its contents are copied into the target folder location. A progress bar tracks the progress of the file copies.

Once the copy of the Windows ISO contents is complete, the autounattend.xml file and XenServer Tools, if selected, will get copied to the target folder location as well. The code snippet for the copy process is shown below:

Function CopyFiles { 
 
# Specify the Selected ISO and the mount location
$SelectedISO = $ISOPathTextBox.Text 
$MountedImage =  Mount-DiskImage $SelectedISO -PassThru 
$MountLocation = "$(($MountedImage | Get-Volume).DriveLetter):\" 
$MountedFiles = Get-ChildItem $MountLocation -Recurse 
 
# Specify variables in coordination with the text boxes
$TargetFolder = $TargetFolderTextBox.Text 
$AutounattendXML = $AutounattendPathTextBox.Text 
$AdminPW = $AdminPasswordTextBox.Text 
$AdminAcct = $AdminNameTextBox.Text 
$ProductKey = $ProductKeyTextBox.Text 
$XenToolsPath = $XenToolsPathTextBox.Text 
$Server2016CheckBox = $Server2016CheckBox.CheckState 
$Windows10Checkbox = $Windows10Checkbox.CheckState 
$DropDownEditionSelection = $DropDownEditionSelection.SelectedItem 
$DropDownTimeZone = $DropDownTimeZones.SelectedItem 
 
     if($MountLocation) {
 
        Foreach($MountedFile in $MountedFiles) {
 
            # Copy all contents from the mounted ISO to the target location keeping the same file/folder structure 
            if ($MountedFile.PSIsContainer) {
 
            Copy-Item $MountedFile.FullName -Destination (Join-Path $TargetFolder $MountedFile.Parent.FullName.Substring($MountLocation.length))
        
            } 
 
            else {
                
            Copy-Item $MountedFile.FullName -Destination (Join-Path $TargetFolder $MountedFile.FullName.Substring($MountLocation.length)) 
 
            }
 
        }
        
    }
 
    # Copy Autounattend.xml file into the root of the ISO
    if($AutounattendXML -and $TargetFolder) {
 
    Copy-Item $AutounattendXML -Destination $TargetFolder"\Autounattend.xml"
 
    # Specify which Windows version is being installed
    if($Server2016CheckBox -eq 'Checked') {
 
    $WindowsEdition = "Windows Server 2016 $DropDownEditionSelection"
 
    }
 
    elseif($Windows10Checkbox -eq 'Checked') {
    
    $WindowsEdition = "Windows 10 $DropDownEditionSelection"
    
    }
 
    # Define the contents of the Autounattend.xml file for later modification
    $DefaultXML = Get-Content $TargetFolder"\Autounattend.xml"
 
        $DefaultXML | Foreach-Object {
 
            # Replace the contents of the Autounattend.xml file with the information provided
            $_ -replace '1AdminAccount', $AdminAcct `
            -replace '1AdminPassword', $AdminPW `
            -replace '1ProductKey', $ProductKey `
            -replace '1XenToolsPath', $XenToolsPath.Substring($XenToolsPath.LastIndexOf("\")+1) `
            -replace '1Edition', $WindowsEdition `
            -replace '1TimeZone', $DropDownTimeZone
 
        } | Set-Content $TargetFolder"\Autounattend.xml"
 
    }
 
    # If it was specified to install XenServer Tools, copy the parent folder into the target
    if($XenToolsPath) {
    
    Copy-Item $XenToolsPath -Destination $TargetFolder -Recurse
    
    }
 
    if($MountedImage) {
 
    Dismount-DiskImage $SelectedISO
 
    } 
 
}

When the copy is complete, the ISO will be automatically unmounted as it is no longer needed. After unmounting the ISO, the custom ISO creation process begins by integrating the contents of the target folder.

This process usually takes about 3-5 minutes to complete depending on how large the original ISO is. The code snippet for the ISO creation process is shown below:

Function BuildISO {
 
$SelectedISO = $ISOPathTextBox.Text
$TargetFolder = $TargetFolderTextBox.Text
$NewISOName = $NewISONameTextBox.Text
$BootFile = $BootFilePathTextBox.Text
$ISOTool = $ISOToolPathTextBox.Text
 
# List of arguments to pass to oscdimg.exe
$ArgumentList = "-b$BootFile -u2 -h -m $TargetFolder $($SelectedISO.Remove($SelectedISO.LastIndexOf("\")))\$NewISOName.iso"
 
# Display in the form what ISO is being created and where
$ISOCopyProgressLabel.Text = "Creating $NewISOName.iso at $($SelectedISO.Remove($SelectedISO.LastIndexOf("\")))\"
 
# Create Custom ISO file. This turns the folder that contains the ISO and unattend into a new ISO file
Start-Process -WindowStyle Hidden -FilePath $ISOTool -ArgumentList $ArgumentList -Wait
 
}

To read more about the command line switches used with Oscdimg.exe in the $ArgumentList variable, check out these resources online.
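As an illustration of how the $ArgumentList string from the snippet above is assembled, here is its expansion with hypothetical paths filled in (the file names and folders are placeholders):

```powershell
# Hypothetical expansion of the $ArgumentList variable from BuildISO.
$BootFile     = 'C:\AXL\etfsboot.com'
$TargetFolder = 'C:\AXL\ExtractedISO'
$SelectedISO  = 'C:\ISOs\Win2016.iso'
$NewISOName   = 'Win2016-Unattended'

# -b takes its path with no space; the new ISO lands beside the base ISO
$ArgumentList = "-b$BootFile -u2 -h -m $TargetFolder $($SelectedISO.Remove($SelectedISO.LastIndexOf("\")))\$NewISOName.iso"
$ArgumentList
# -> -bC:\AXL\etfsboot.com -u2 -h -m C:\AXL\ExtractedISO C:\ISOs\Win2016-Unattended.iso
```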

After the custom ISO creation process is complete, you have the option to delete the target folder if desired and you can repeat this process as many times as you like.

The custom ISO is created in the same place where the base Windows ISO was stored. If you have an ISO storage repository in XenServer, I would recommend choosing an ISO from there since the new ISO will automatically get created in that location.

Conclusion

With a full understanding of the ISO Construction form and custom ISO creation process, there shouldn’t be anything holding you back from creating your own ISOs; if you haven’t yet downloaded AXL and the associated files, get them from GitHub here.

This is a fairly seamless process and can be used to create ISOs in under 15 minutes. You should have no problems creating an endless number of ISOs to play around with and this automation allows you to repeatedly build lab environments much faster and easier.

As mentioned in the beginning of this series, check out Part 3 where I will be covering the VM creation process.

Zach Thurmond
IT Consultant
Critical Design Associates

LinkedIn Profile

Checking System Readiness for the Bromium Platform

The Bromium Platform has several hardware and software requirements to fully function on an endpoint. Since the Bromium Client itself does not check many of these requirements until after installation, it's difficult to know ahead of time which machines require remediation prior to deployment.

To address this issue, I wrote PowerShell scripts to take an inventory of machines in your environment and compile a report using minimal infrastructure.

Requirements

The solution is designed to be deployed without depending on an endpoint management or software delivery platform. It does however require a scheduled task to run the Endpoint_CDABromiumReadiness.PS1 on each endpoint and a centralized file share where the script can save the collected inventory data. To summarize, the following components are necessary for this solution to work:

  • File Shares – Location for collected data
  • Scheduled Task – Executes the BromiumReadiness script
  • BromiumReadiness PowerShell script – Collects inventory data from endpoint
  • Compiler script – Aggregates collected data into a readable report

File Shares

The Endpoint_CDABromiumReadiness.PS1 collects inventory data from the endpoint. Although the data could be stored on the machine itself, logging in to each machine to gather it would require a significant amount of overhead. To facilitate a simpler method of data collection, the script is designed to write the inventory data to a centralized file share. This file share can be one that already exists in your environment or one created for the purpose of this solution.

The example that I used to create the file share where the script will store inventory data has these properties:

  • Name of folder: TestShare
  • Name of share: TestShare
  • Share permissions: Allow: Change, Read
  • Folder permissions: Allow: Create files / write data, Create folders / append data

Figure 1 – Share Permissions: Allow: Change, Read

NOTE: The name TestShare is used as an example. A more descriptive name would be preferable.


Figure 2 – Folder Permissions: Allow: Create files / write data, Create folders / append data

The other file share will be a network location where the Endpoint_CDABromiumReadiness.PS1 PowerShell script can be stored for execution by the Scheduled Task. This file share can be read-only, as the script is only read from this location.

The example that I use for a file share location where I store this script is:

\\dc01\ScriptShare\Endpoint_CDABromiumReadiness.ps1

Scheduled Task

Since there is no requirement to use a software delivery platform to deploy the Endpoint_CDABromiumReadiness.PS1, the simplest method for deploying and executing the script is a Scheduled Task. Creating the scheduled task on each workstation would be time-consuming and inefficient, so the better approach is to create the Scheduled Task through an Active Directory Computer Configuration GPO preference. An existing or new GPO can be used; it needs to be linked to the OU or OUs that contain the workstations in the environment.

To create a Scheduled Task as a GPO preference, open the GPO using the Group Policy Management Console (GPMC) and navigate to:

Computer Configuration > Control Panel Settings > Scheduled Tasks


Figure 3 – GPO Preference – Scheduled Tasks

Right-Click “Scheduled Tasks” and choose New > Scheduled Task (Windows Vista and later)

A New Task (Windows Vista and later) Properties window should appear as follows:


Figure 4 – New Task (Windows Vista and later) Properties

Change the Action dropdown from Update to “Create”

Under the General tab, the following parameters should be entered:

  • Name: Bromium Readiness
  • User Account: NT AUTHORITY\System
  • Security Options: Run whether user is logged on or not
  • Security Options: Run with highest privileges
  • Hidden: Enabled

Figure 5 – General tab

Under the Actions tab, click “New” then in the New Action window, enter the following:

Program/Script:

C:\Windows\System32\WindowsPowerShell\v1.0\Powershell.exe

Add Arguments(optional):

-ExecutionPolicy Bypass -Command "& '\\<>\Endpoint_CDABromiumReadiness.ps1' -CopyToLocation '\\dc01\testshare\'"

Figure 6 – New Action window

NOTE: The name of the file server and shares are used as an example. Your UNC path would include the location of the Endpoint_CDABromiumReadiness.PS1 in a central file share and the data collection file share as created above. These UNC paths may not necessarily be the same.

Under the Triggers tab, click “New” then in the New Trigger window define the parameters for when to execute the scheduled task:


Figure 7 – New Trigger window

The script should be run at least once, but it is advisable not to run it continuously; the inventory data only needs to be collected to assess a machine's readiness for the Bromium Client. It is not designed to be a maintenance task.

When the Scheduled Task executes, the Endpoint_CDABromiumReadiness.PS1 PowerShell script gathers the required information from the endpoint, generates a tsv file, and copies the file to the file share specified in the -CopyToLocation parameter.

BromiumReadiness Script

This PowerShell script collects the inventory data from the endpoints and is contained here:


Figure 8 – BromiumReadiness script
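The script itself ships in the download, so its internals are not reproduced here. As a rough illustration of the pattern it follows (gather inventory properties, export a tsv named after the machine, copy the file to the collection share), a minimal sketch might look like this. The property list, default paths, and file handling below are illustrative assumptions, not the actual Endpoint_CDABromiumReadiness.PS1:

```powershell
# Illustrative sketch only -- NOT the shipped Endpoint_CDABromiumReadiness.ps1.
# Pattern: gather inventory properties, export a tsv named after the machine,
# then copy that tsv to the collection share (-CopyToLocation).
param(
    # In production this would be the UNC path of the collection share,
    # e.g. '\\dc01\testshare\'
    [string]$CopyToLocation = (Join-Path ([IO.Path]::GetTempPath()) 'ReadinessDemo')
)

if (-not (Test-Path $CopyToLocation)) {
    New-Item -ItemType Directory -Path $CopyToLocation | Out-Null
}

# A few example inventory properties (the real script collects more)
$inventory = [pscustomobject]@{
    ComputerName = [Environment]::MachineName
    OSVersion    = [Environment]::OSVersion.VersionString
    Is64BitOS    = [Environment]::Is64BitOperatingSystem
    CollectedAt  = (Get-Date).ToString('s')
}

# Export as tab-separated values, one file per machine
$localFile = Join-Path ([IO.Path]::GetTempPath()) "$([Environment]::MachineName).tsv"
$inventory | Export-Csv -Path $localFile -Delimiter "`t" -NoTypeInformation

# Deliver the file to the collection share
Copy-Item -Path $localFile -Destination $CopyToLocation -Force
```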


Compiler Script

This PowerShell script aggregates the inventory data located in the file share, combining the individual tsv files generated by each endpoint into a single file that can be reviewed in Excel.

The Compiler_CDABromiumReadiness.PS1 is contained within the zip file.


Figure 9 – Compiler script

It is preferable to keep the Compiler script in the same file share as the tsv files that are generated so that it can be run as needed.


Figure 10 – Compiler script stored in file share

To execute the compiler script, open Windows PowerShell and run:

.\Compiler_CDABromiumReadiness.ps1

A finished report will look like this:


Figure 11 – Finished report
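The aggregation step the compiler performs can be sketched as a simple merge of the per-endpoint tsv files. This is a hedged illustration, not the shipped Compiler_CDABromiumReadiness.PS1; the function name and the combined report file name are assumptions:

```powershell
# Illustrative sketch only -- NOT the shipped Compiler_CDABromiumReadiness.ps1.
# Reads every per-endpoint tsv in the share folder, concatenates the rows,
# and writes a single combined report that can be opened in Excel.
function Merge-ReadinessTsv {
    param(
        [string]$ShareFolder,
        [string]$OutFile = 'CombinedReport.tsv'
    )

    $rows = Get-ChildItem -Path $ShareFolder -Filter '*.tsv' |
        Where-Object { $_.Name -ne $OutFile } |      # skip a previous report
        ForEach-Object { Import-Csv -Path $_.FullName -Delimiter "`t" }

    $rows | Export-Csv -Path (Join-Path $ShareFolder $OutFile) -Delimiter "`t" -NoTypeInformation
}
```

Running the function against the share folder yields one row per endpoint, which is the shape of the finished report shown in Figure 11.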

Running Multiple Rounds of Readiness Checks (Optional)

If the Scheduled Task runs multiple times, it will overwrite the inventory data previously collected for each endpoint. To preserve earlier data collections, run multiple rounds of readiness checks. This is also useful when you need to run the inventory more than once and expect different results.

To do this, simply add the -ReadinessCheckRound parameter, with a number indicating the round, to the command line that executes the BromiumReadiness script. The parameter defaults to 1 and prefixes each tsv file name with the round number. Notice in the image that the number 1 precedes the device name:


Figure 12 – Round number in tsv file name
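Based on the file names in Figure 12, the round number appears to be prepended as "round-device.tsv". A small sketch of that naming logic follows; the helper function is hypothetical, not code from the shipped script:

```powershell
# Hypothetical helper illustrating the "<round>-<device>.tsv" naming
# pattern visible in Figure 12; not code from the shipped script.
function Get-ReadinessFileName {
    param(
        [string]$DeviceName,
        [int]$ReadinessCheckRound = 1   # same default as the script
    )
    "$ReadinessCheckRound-$DeviceName.tsv"
}

Get-ReadinessFileName -DeviceName 'WS01'                          # 1-WS01.tsv
Get-ReadinessFileName -DeviceName 'WS01' -ReadinessCheckRound 2   # 2-WS01.tsv
```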

Here is an example of the command line to use. Change the command line in the scheduled task created in the GPO to include the -ReadinessCheckRound parameter.

-ExecutionPolicy Bypass -Command "& '\\dc01\ScriptShare\Endpoint_CDABromiumReadiness.ps1' -CopyToLocation '\\dc01\testshare\' -ReadinessCheckRound 2"

When the scheduled task runs again, the Endpoint_CDABromiumReadiness.PS1 script will generate tsv files with the round number preceding the name of the device:


Figure 13 – TSV files with multiple rounds

Add the -ReadinessCheckRound parameter when executing the Compiler script, and the new report will include only the data from that round.

.\Compiler_CDABromiumReadiness.ps1 -ReadinessCheckRound 2

Figure 14 – Compiled report from round

Sincerely,

Aman Motazedian
Senior Consultant
Critical Design Associates

LinkedIn Profile

Automating Lab Builds with XenServer PowerShell – Part 1 Understanding the Requirements

>>Part 1 – Understanding the Requirements
Part 2 – Creating a Custom ISO
Part 3 – Unlimited VM Creation
Part 4 – Roles, Features, and Other Components

Introduction

I was introduced to Citrix products in September of 2017 and have been working with them every day since. It seems that on a weekly basis I am exploring some new technology or testing an idea that requires a new lab component. I needed a way to automate these tasks, and my first target was the creation of new Virtual Machines (VMs).

The XenServer hypervisor is my preferred platform. While not particularly difficult, creating new VMs involves manual, time-consuming steps: machine setup, the initial Windows installation, and post-install configuration.

Purpose
With all of the manual steps required to build a lab, multiplied by the number of virtual machines built repeatedly, I decided to create a tool called Automated XenServer Labs (“AXL”) that removes the heavy lifting and user interaction from creating new VMs, leaving more time for actual lab work.

AXL leverages the XenServer PowerShell module and allows for a wide range of configurations. The module allows for the creation and manipulation of VMs, pools, storage, networks, and more. By utilizing this module, you can achieve greater efficiency and automation when creating new environments.
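As a taste of what the module looks like, connecting to a host and listing its VMs takes only a few lines. This is a hedged sketch, not part of AXL: the host name and credentials are placeholders, and it assumes the XenServerPSModule from the XenServer SDK is installed on the endpoint:

```powershell
# Sketch: connect to a XenServer host and list its VMs.
# "xenserver01" and the credentials below are placeholders.
Import-Module XenServerPSModule

Connect-XenServer -Url 'https://xenserver01' -UserName 'root' -Password 'P@ssw0rd' -NoWarnCertificates -SetDefaultSession

# VM records also cover templates and snapshots, so filter those out
Get-XenVM |
    Where-Object { -not $_.is_a_template -and -not $_.is_a_snapshot } |
    Select-Object name_label, power_state
```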

You may be wondering at this point how I plan to remove a large part of the user interaction from building a Windows VM, and to you I say: a custom-made ISO. An ISO, for those of you who may not know, is a disc image commonly used to install an Operating System (OS).

The custom ISO contains an unattended answer file, which I will cover in later parts of this series, that eliminates user interaction during the installation process. AXL automates the creation of the Windows ISO used during the unattended OS installation and can also create an Active Directory domain, set IP addresses and names, and install specific server roles and features.
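To give a sense of what the answer file does, here is a heavily trimmed, illustrative fragment. It is not the full file AXL generates (Part 2 covers that); pre-accepting the EULA in the windowsPE pass is one of the pieces that lets setup proceed without prompts:

```xml
<!-- Illustrative fragment only; the full autounattend.xml used by AXL
     has many more components and passes (covered in Part 2). -->
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="windowsPE">
    <component name="Microsoft-Windows-Setup" processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35" language="neutral"
               versionScope="nonSxS">
      <UserData>
        <!-- Pre-accepting the EULA removes one of the interactive prompts -->
        <AcceptEula>true</AcceptEula>
      </UserData>
    </component>
  </settings>
</unattend>
```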

While AXL will automate the installation and configuration of the VMs, some initial user interaction is required to specify how the VMs, ISOs, and Windows features should be configured. Even with this up-front configuration, the overall process is still much quicker than manually building out all of the VMs.

The following sections will describe the components and infrastructure that are necessary for automating the build of a lab environment using AXL, the files and folders that are required to create the custom ISO, and using it to create a virtually unlimited number of VMs.

Components and Infrastructure
There are multiple items needed for AXL to work, the main items being the infrastructure. The first thing you will need is hardware with XenServer installed since AXL specifically uses the XenServer PowerShell module.

The hardware can be anything from a small form factor Intel NUC, to a custom-built or enterprise-grade server. XenServer can be downloaded for free from Citrix. You will need a My Citrix account to complete the process.

For development, I used the following items, though not all are necessary:

– (3x) Intel NUC Core i7 (XenServer Hosts)
– (1x) Ubiquiti EdgeRouter X
– (1x) HP J9028A ProCurve Switch 1800-24G
– (1x) Endpoint (Can be Windows Server or Windows Desktop OS)

As stated above, not all of these components are necessary since all you really need is a single machine to install XenServer, such as an Intel NUC (or some other server type device) and an endpoint to run AXL from.

NUCs have a very small form factor and are great for lab work; however, RAM and HD/SSD storage are sold separately, so you will have to account for that in the cost if you plan to use them. I use the switch and router to create different VLANs as needed.

A basic network topology of what I use is shown in Figure 1. I RDP to a VM on one of the NUCs and run AXL.

The most important thing to note is the endpoint where AXL is running must be able to communicate with the XenServer host(s). This means that proper routing and firewall configurations must be in place prior to using AXL. NOTE: Firewall and routing configurations are outside the scope of this post and will not be covered.

Files and Folder Structure
There are a number of files needed for AXL to function. The files used are posted on GitHub, with the most important being the PowerShell script.

The files needed for the ISO creation process are as follows:

– autounattend.xml
– etfsboot.com
– oscdimg.exe
– Windows ISOs and associated licenses (MSDN, Visual Studio, etc)
– Expanded XenServer Tools (this is optional)

I won’t go in depth on any of these files in this part as they will be discussed in Part 2 of this series.

The only additional item required for the VM creation process is the XenServer PowerShell module, which is actually a folder. Once you have all the required files and folders, put them either in a folder on the root of the C:\ drive or in a folder on the Desktop.

I would recommend creating a folder just for the PowerShell script and all the necessary files so they are all easily accessible.

The folder structure I use is as follows:

Conclusion

To recap, you will want to get all the required files and folders for AXL and make a suitable folder structure. With the exception of the Windows and XenServer ISOs, all of the files and folders can be downloaded from GitHub.

You will want at least one endpoint (workstation or server) running AXL and one XenServer to host the VMs. With an understanding of the components, files, and folders, you should now have a solid grasp of the overall requirements for using AXL.

And don’t forget to check out Part 2 where the ISO creation process will be discussed in further detail.

Zach Thurmond
IT Consultant
Critical Design Associates

LinkedIn Profile