Improve your Security, Compliance, and Performance

 

Stop wasting time and get easy access to information with strong Firewall Policy Management.

Let’s start off with some questions:

  • Is your firewall secure? Is it compliant? How do you know? Can you prove it?
  • Do you have quick and easy access to audit information and to data on the security impact of firewall policies?
  • Are your firewall policies effective?
  • Have your firewalls been optimized to eliminate unnecessary policies that slow down your network traffic?
  • How much time do you spend on change control associated with firewall policies?
  • How much time do you spend on false positives monthly?
  • Are automation and orchestration part of your current or future strategy?

To tackle the chaos associated with network security policies, with an emphasis on firewalls, CDA takes a holistic approach that addresses the people, processes, and platforms required to maintain a disciplined and structured program.

In this blog article, we share best practices around developing a cost efficient, highly secure, and sustainable “Firewall Policy Lifecycle Management” Program (“FPLM”).

What if you had this level of visibility and more?

Love-Hate Relationship

We love firewalls and we hate firewalls. We love the protection they provide but hate the level of scrutiny and administration required to keep them compliant. Firewalls are a critical part of any corporate IT environment.

Firewalls and the associated network security policies provide protection and visibility to and from the world outside of your corporate walls. While firewalls are vital to protection, they can also become your worst enemy when poorly managed or maintained.

Getting your arms around managing multiple firewall vendors, policies, administration, and the agility required by your business objectives can be a daunting task for any organization.

Solution

To address the challenges of achieving compliance, security, performance, and operations aligned with IT and security objectives, you need a program to manage the lifecycle. We call this a “Firewall Policy Lifecycle Management Program,” or “FPLM Program” for short.

The high-level goal of this plan is to develop a policy and implement it in a way that is visible, easily understood, and usable as part of the platform to govern and report on risk.

What you see in the image below is an example of how we help customers map their “zone to zone” communications and evaluate security risks.

Practical Application of Enterprise Security Policy

FPLM Best Practices

CDA’s best practices for developing a strong FPLM Program follow four repeatable steps:

  1. Analysis
  2. Design
  3. Build & Test
  4. Knowledge Transfer

In each step, we address people, process, and platform.

Analysis

The goal of the analysis phase is to understand the roles and responsibilities of the people supporting the environment, to review the requirements and use cases driving the current firewall policies, and to evaluate current processes and platforms.

This simple step can make a huge impact – streamlining change control procedures, reducing false-positive notifications, improving network performance and clearly communicating the requirements for accurate and auditable information needed to prove the security of the environment.

Deliverables should include:

  • RACI (Responsible, Accountable, Consulted, Informed) matrix identifying roles and responsibilities
  • Documentation of the security, compliance, and audit requirements and the use cases they address
  • Evaluation of existing processes and any automation and orchestration in use
  • Evaluation of the infrastructure
  • A detailed inventory list including business and technical contacts
  • Identified gaps, redundancies, and opportunities for reduction in processes and platforms

Design

CDA provides multiple firewall policy design services, from greenfield builds to redesigns, protecting the devices in your organization from unwanted network traffic that gets through the perimeter defense or originates from inside your network.

Our goal: improve the visibility and security of your firewall policies while clearly defining the risk and reward of each design recommendation.

  • Design processes based on roles and responsibilities to support use cases and requirements. Empower your employees to do meaningful work and provide the visibility and guidance your organization needs for good teaming.
  • Design a unified security policy that meets compliance and corporate objectives.
  • Design platforms required to support the separation of duties, use cases and requirements.
  • Prioritize design recommendations with risk, reward and cost analysis.

Build & Test

Build work should be conducted in line with the prioritized recommendations identified during design. Implement fast, easy, low-cost wins first where security and compliance are not factors. Offload the work your team doesn’t want to do.

  • Build the workflows to support consistent deployment
  • Build the core components for the platform
    • Firewall and Network Policy Management Platform
  • Build the components necessary to support the processes and platforms:
    • Change Orders, Ticketing/Support
    • Alerting, Monitoring, Reporting
  • Ingest and configure the devices to be monitored and managed
    • Firewalls
    • Syslog Configurations
    • Reporting/Alerting
    • Ticketing/Support
  • Test the workflows and processes to ensure all requirements are met and documented.
  • After testing, the overall program is ready for deployment.

Knowledge Transfer

It’s easy for IT teams to focus on the hardware, software, and technology needed to do the job while neglecting to build structured processes and assign roles and responsibilities to the most capable teams.

This neglect creates significant risk and audit issues for businesses governed by regulations and standards such as DISA STIG, GDPR, HIPAA, SOX, or NIST.

Not only do we transfer knowledge on the platform (hardware and software), but we also educate the team on the agreed-upon processes needed to maintain compliance above and beyond the actual firewall configuration tasks.

Knowledge transfer and communication across the IT team is important to success. Below is a table outlining the roles to target and the benefits they’ll see:

Join Us for Our Upcoming FPLM Virtual Lunch & Learn

This is an interactive workshop that we will tailor to the audience. Come prepared with questions! The agenda follows:

• Introductions
• Firewall Policy Lifecycle Management Best Practices
• Demonstration of Common Tools
• Ask the Experts

In the discussion and demonstrations, we will present the people, processes, and platforms required to orchestrate an FPLM program in your organization.

The platforms we will demonstrate include Cisco ASA, Palo Alto, Cisco Firepower, and Tufin technologies. We will demonstrate useful workflows that align with typical business needs for managing firewall policies.

Thursday, July 30, 2020
1:30PM – 2:30PM EST

Reserve My Seat

Thank you and I look forward to meeting you at our Lunch & Learn.

Sincerely,

Anthony DiDonato
Principal Architect
Critical Design Associates

LinkedIn Profile

Privileged Account Discovery Script: Reduce Privilege Escalation Attacks

Overview

Privileged accounts are accounts on computer systems with more access than standard user accounts. These accounts, for example, can execute processes in the system context, run system-wide services, or modify system configuration files.

Privileged accounts are often targets for privilege escalation attacks, where attackers are able to gain access to network-wide resources after making a beachhead on a system using a standard user account.

The Story of the Discovery Script

There are several great tools out there for discovering and managing privileged accounts, but I was determined to find a free one that would provide the level of detail I was looking for.

After conducting research, I could not find what I was looking for, so I decided to write a custom script.

Download Script: Privileged Account Scanner V1

This script focuses on six main types of Windows privileged accounts:

  1. Windows Local Administrator Accounts
  2. Windows Service Accounts
  3. Windows Scheduled Task Accounts
  4. Windows COM+ Application Service Accounts
  5. Windows DCOM Application Service Accounts
  6. Microsoft SQL Accounts

The script requires Windows PowerShell Remoting to be enabled on the target systems.

Furthermore, the account you execute the script with must have local administrator privileges on each target and must have been granted CONTROL SERVER on SQL servers.
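
If you want to verify those prerequisites before a scan, a quick check along these lines can help. This is a minimal sketch: the target name "DB01" and the account name are placeholders, and the T-SQL grant in the comment would be run by a DBA on each SQL server.

# Confirm PowerShell Remoting responds on the target
# (on the target itself, an administrator can enable it with: Enable-PSRemoting -Force)
Test-WSMan -ComputerName "DB01"

# Confirm the scanning account can open a remote session
Invoke-Command -ComputerName "DB01" -ScriptBlock { whoami }

# On SQL servers, a DBA can grant the required permission with this T-SQL statement:
# GRANT CONTROL SERVER TO [DOMAIN\ScanAccount];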

“I could not find what I was looking for so I decided to write a custom script”

Provide an array of computer names to the parameter ListOfTargets and the script will gather privileged account information on each of the target computers.

The result is a CSV file generated in the TEMP folder; that path can be modified with the ReportExportPath parameter, as seen in the command below.

.\PrivilegedAccountScanner.ps1 -ListOfTargets "DB01","ERPM01" -ReportExportPath "C:\users\SuperAman\desktop\"

Running this command produces a report that looks like this:

In this example report, you can see most of the types of accounts the script scans for. Below are the columns found in the report and a brief description of each:

  • ComputerName – The computer targeted for scanning.
  • Account – The name of the discovered privileged account.
  • Type – Shows which of the six types of account this account falls under.

The data in the Name and Note columns will change depending on the type of account. Below is an outline of how each account type populates those columns:

  • Local Admins
    The Name column shows “N/A” and the Note column shows the type of account discovered. In the example above, the discovered account is actually a group.
  • Service Accounts
    The Name column shows the service name and the Note column shows the service description.
  • Scheduled Tasks
    The Name column shows the name of the scheduled task and the Note column displays “N/A”.
  • COM+ and DCOM Application Accounts
    The Name column shows the application name and the Note column shows the application key.
  • SQL Accounts
    The Name column shows the associated SQL instance and the Note column shows a summary of the roles and explicit permissions assigned to the account.

Customizing Data

You can further customize the data you’re collecting by modifying array variables defined near the top of the script, as shown below.

Broaden or Focus Discovery Scan

The following are arrays that can be modified depending on your reporting needs.

  • $FilterArray is a list of accounts that are ignored during the discovery scan
  • $FilterSQLBuiltinAccounts is a list of built-in SQL accounts to ignore
  • $SQLPermissions is a list of SQL permissions to look for when scanning SQL
  • $SQLRoles is a list of SQL roles to look for when scanning; any SQL users that are members of these roles will be captured

“By adding or removing elements of these arrays, you can broaden or focus your discovery scan.”
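
To make that concrete, here is a hypothetical example of what those arrays might look like once tailored to an environment; the values below are purely illustrative and are not the script’s shipped defaults.

# Illustrative values only - adjust to your environment
$FilterArray = @("DWM-1", "LOCAL SERVICE", "NETWORK SERVICE")            # accounts ignored during discovery
$FilterSQLBuiltinAccounts = @("sa", "##MS_PolicyEventProcessingLogin##") # built-in SQL accounts to skip
$SQLPermissions = @("CONTROL SERVER", "ALTER ANY LOGIN")                 # explicit permissions to flag
$SQLRoles = @("sysadmin", "securityadmin")                               # roles whose members are captured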

Let’s Continue the Conversation

I set out to develop a flexible scanning script that can provide actionable data on privileged accounts in your environment. However, I am sure there are scenarios, configurations, and use cases that I missed.

I look forward to feedback and any requests for additional functionality. Do you have a suggestion? Please leave it in the comments below and we will continue the conversation.

Sincerely,

Aman Motazedian
Senior Consultant
Critical Design Associates

LinkedIn Profile

Optimizing Windows 10 Upgrades with Ivanti Endpoint Manager (EPM)

Introduction

In a recent customer engagement, the client requested to upgrade Windows 10 workstations within their environment using Ivanti Endpoint Manager (EPM).

Ivanti has a recommended method to upgrade Windows 10 workstations to newer versions through their service pack definitions.

The service pack definitions are found in the Patch and Compliance tool and can be used to determine whether an endpoint can receive the upgraded version of Windows. Each definition specifies an ISO for the deployment, which cannot be downloaded via the Patch and Compliance tool.

The ISO must be downloaded separately and renamed to match what is configured in the definition. There are both pros and cons to using the recommended method:

ISO Method

Pros:

  • Easy to deploy
  • Simple configuration

Cons:

  • Space requirements (2x ISO size)
  • Large performance impact
  • Poor end-user awareness

When deploying any patch or distribution package, it is important to do so consistently each time to achieve expected results.

For this reason, I developed a Software Distribution method that would offer versatility and consistency with any Windows 10 upgrade. There are pros and cons to this method as well:

Software Distribution Method

Pros:

  • Lower space requirements (1x ISO size)
  • Full end-user awareness
  • No performance impact

Cons:

  • More involved configuration
  • Leaves the machine unusable for the duration of the deployment

Deploying Windows 10 Upgrades via Patch and Compliance

Ivanti’s recommended method for upgrading Windows 10 is fairly straightforward to set up and deploy.

After the ISO is named according to what is configured in the definition file, all that is left to do is deploy it to targeted endpoints.

After scheduling the repair and starting the task, the Patch and Compliance deployment proceeds as follows (a rough PowerShell illustration of steps 2 and 3 appears after the list):

  1. Copy the ISO to the machine (download ISO here)
  2. Mount the ISO and extract the contents
  3. Unmount the ISO and start the upgrade process with the now local files
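
For readers who prefer to see it spelled out, the following is a rough, hypothetical PowerShell equivalent of steps 2 and 3. EPM performs these operations itself during the repair task; the paths shown are examples only.

# Example paths - EPM handles this internally during the repair task
$iso  = "C:\Temp\Win10_Upgrade.iso"
$dest = "C:\Temp\Win10_Extracted"

# Mount the ISO and find the drive letter it was assigned
$image  = Mount-DiskImage -ImagePath $iso -PassThru
$letter = ($image | Get-Volume).DriveLetter

# Extract the contents locally (this is where the ~2x disk space cost comes from)
New-Item -ItemType Directory -Path $dest -Force | Out-Null
Copy-Item -Path "$($letter):\*" -Destination $dest -Recurse

# Unmount the ISO and start the upgrade from the now-local files
Dismount-DiskImage -ImagePath $iso
Start-Process -FilePath "$dest\setup.exe" -ArgumentList "/Auto Upgrade"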

As previously mentioned, Ivanti’s recommended method for deployment has some cons.

First, the endpoint needs twice the ISO’s footprint in free disk space to store both the ISO and its extracted contents, which can easily amount to 8GB or more.

Once the installation starts, a large performance impact will be seen as the upgrade consumes most of the machine’s resources.

Lastly, there is poor end-user awareness as to what is actually happening. EPM does have the capability to provide prompts to the end user with the correct agent settings; however, when using those settings there is still no indication of the progress of the deployment.

Deploying Windows 10 Upgrades via Software Distribution

Ivanti’s Windows 10 upgrade method using Patch and Compliance works, but in this case, the customer needed something that was more user friendly and did not have any impact on performance.

This is how the Software Distribution method came about. The Software Distribution method makes use of two custom batch files.

The first batch file used in the deployment, named GetUserName.bat in this case, simply gets the username of the currently logged-in user if there is one; the username is written to a temporary text file called Username.txt.

By default, when creating a distribution package, it will run under the SYSTEM account.

This particular package, however, will run under the current user account; this is important for the next batch file in the process. The contents of the GetUserName.bat file can be seen below.

REM -- If C:\Temp doesn't exist, create it and output the current user to Username.txt
REM -- Since the task is running under the current users context, a file will only get
REM -- created if there is a user logged in

if not exist C:\Temp (
mkdir C:\Temp
echo %username% > C:\Temp\Username.txt
) else (
echo %username% > C:\Temp\Username.txt
)

The second batch file, which will be named Windows10Upgrade.bat, will use the Username.txt output from the previous batch file if it exists.

If the Username.txt file exists, a scheduled task will be created to execute the setup.exe that gets copied to the clients.

Setup.exe is the main executable in a Windows ISO that installs and configures the OS with the parameters you define.

The scheduled task will be created to run in the current user’s context with the highest privileges and will execute one minute from the time it is created.

Running the task with the highest privileges is a requirement; otherwise, the scheduled task will fail. The reason a scheduled task is created is to allow the user to see the GUI operation of the upgrade; if setup.exe were executed under the SYSTEM context, the currently logged-in user would not see anything.

If there is no Username.txt file, setup.exe will just run under the SYSTEM context as that is the default for the distribution package. The contents of the Windows10Upgrade.bat file can be seen below.

REM -- Set the 'name' variable to whatever is in the text file, if it exists
REM -- This text file only gets created if there is a user currently logged in

set /p name=<C:\Temp\Username.txt

REM -- Get the time in 24 hour format, add one minute, and assign it to the 'hhmm' variable

set hh=%time:~0,2%
set mm=%time:~3,2%
REM -- Prefix with 1 and subtract 100 so zero-padded minutes like "08" are not parsed as octal
set /A mm=1%mm% - 100 + 1
if %mm% GTR 59 set /A mm=%mm%-60 && set /A hh=%hh%+1
set P=00%mm%
if %mm% LSS 10 set mm=%P:~-2%
if %hh% == 24 set hh=00
if "%hh:~0,1%" == " " set hh=0%hh:~1,1%
set hhmm=%hh%:%mm%

REM -- If the Username.txt exists, that means a user is logged in, so create a scheduled task
REM -- Set the scheduled task to run with the highest privileges and under the currently logged in user
REM -- This will ensure an update prompt is seen by the user during the upgrade
REM -- Otherwise, just run setup.exe as SYSTEM since no user is logged in and Username.txt does not exist

if exist C:\Temp\Username.txt (
schtasks /create /s %computername% /tn "Windows Upgrade" /sc once /tr "%cd%\Setup.exe /Auto Upgrade /Telemetry Disable /ShowOOBE none /DynamicUpdate disable" /st %hhmm% /rl highest /ru %userdomain%\%name%
del C:\Temp\Username.txt
) else (
Setup.exe /Auto Upgrade /Telemetry Disable /ShowOOBE none /DynamicUpdate disable
)

While the batch files, along with the ISO itself, are the main components of this deployment method, below is the full list of items and configurations needed:

  • Windows 10 ISO (Extracted to a folder)
  • GetUserName.bat (In the same folder as the Extracted ISO)
  • Windows10Upgrade.bat (In the same folder as the Extracted ISO)
  • IIS MIME type for the Default Web Site (one way to add this is shown after the list)
    • Type: application/octet-stream
    • Extension: .
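
For reference, the following is one way to add that MIME type with PowerShell. This is a sketch that assumes the WebAdministration module (included with IIS’s management tools) is available on the core server.

# Run elevated on the server hosting the Default Web Site
Import-Module WebAdministration

# Map extensionless files (".") to application/octet-stream so clients can download them
Add-WebConfigurationProperty -PSPath 'IIS:\Sites\Default Web Site' -Filter 'system.webServer/staticContent' -Name 'Collection' -Value @{fileExtension='.'; mimeType='application/octet-stream'}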

This method allows for a seamless, quick, and efficient deployment that will provide the end-users with a good experience if logged in during the deployment.

If they are logged in, they will have full insight into what is happening. The general process for the entire deployment is as follows:

  • The task starts and either begins the download on the client or starts executing the batch files if already downloaded
    • GetUserName.bat runs and outputs a Username.txt file to C:\Temp that contains the username of the currently logged-in user if there is one. A file does not get created if there is no user logged in.
    • Next, Windows10Upgrade.bat will run and determine if there is a Username.txt file
      • If there is a Username.txt file, a scheduled task will be created for the current user, obtained from the Username.txt file
      • If there is no Username.txt file, setup.exe will run under the SYSTEM context as is the default for the package
    • The machine will transition to a blue screen showing the progress of the installation after about 30-45 seconds and will be unusable for approximately 45 minutes to 1.5 hours; the time varies depending on hardware capabilities

As you can see, the process is fairly straightforward, and anything that gets created, such as the Username.txt file and the scheduled task, is cleaned up.

To make this process more user friendly, one can also pair this entire deployment with notification messages or deferment timers to provide more control to the end-user.

These are a few examples of the flexibility that EPM offers. Below is a short video demonstrating how the deployment works and how it is set up.

Ivanti Endpoint Manager (EPM) Demo & Deployment Video

In Conclusion

Thank you for reading and please feel free to reach out if you have questions, comments, or concerns about the information presented in this article.

Zach Thurmond
IT Consultant
Critical Design Associates

LinkedIn Profile

Application Control for Enhanced Endpoint Security

[Video Transcription]
Hi, Trevor Prokop here with Critical Design Associates, and today I’d like to demonstrate how you can leverage Ivanti Application Control to improve your endpoint security. Application Control is the security product within the Ivanti User Workspace Manager suite, or UWM for short.

Application Control has many key features, but today I’d like to focus on trusted ownership, which is built on core Microsoft NTFS security permissions. Simply stated, trusted ownership is a feature that prohibits software from launching if it was not placed on the workstation by a trusted owner. By default, if the NTFS file owner is not one of the four trusted owners seen in the screenshot to the right, the application cannot run.

This list can also be populated with a service account for software delivery via Ivanti Endpoint Manager or Microsoft SCCM. Application Control uses secure filter drivers and Microsoft NTFS security policies to intercept all execution requests.

Execution requests go through the App Control hook, and any unwanted applications are blocked, as you can see in the screenshot. Applications installed by administrators are trusted, and therefore users are permitted to execute them.

In addition to executable files, Application Control also manages entitlement to PowerShell, batch, VBScript, and registry files, among a number of other items. You can see here in this screenshot of the configuration a rule for validated PowerShell scripts, which will deny powershell.exe or powershell_ise.exe if a .ps1 file is executed by a user; if the file is not owned by a trusted owner, it won’t be permitted to run.

The business cases I’d like to demonstrate are prohibiting users from installing and executing portable or third-party applications, prohibiting malware execution from malicious attachments, and prohibiting fileless malware execution via PowerShell. For each of these, I’ll show how you can be protected using Ivanti Application Control.

So let’s get into the demo. What we have here is two VMs, both Windows 10 64-bit; one has Application Control installed and one does not. This one has no Application Control, just the normal environment a user would access.

What I’d like to show is a user downloading the uTorrent portable application to see if they can click it and install it. As you can see, I already completed the installation, and I can execute uTorrent as a non-administrative user as well.

So we have uTorrent running. Switching over to an App Control managed system, you can see that uTorrent was downloaded, but when I go to execute it, it is denied; this user is not authorized. To see where trusted ownership comes into play, look at the file’s properties, view the Security tab, and then the Advanced tab, which informs you that the owner is a user. This particular user is not one of the four trusted owners, and therefore the application will be unable to execute.

To demo how trusted ownership permits applications to run, we can launch Word, and the user has no problem. The reason is that if we look in the Office folder at WINWORD.EXE, view the Security tab, and then the Advanced tab, you notice Word was placed there by the SYSTEM account. That verifies the owner, and therefore it’s able to run.

Another example to take a look at here is fileless malware. A lot of times nowadays, PowerShell is used to download code from the internet and execute it on workstations.

So you can see here in this instance I’m able to launch PowerShell. I have a PowerShell window waiting here, and it’s going to execute this line of code, a one-liner to invoke Mimikatz. If we had App Control loaded, the user would not be able to execute PowerShell at all. What also comes into play here is running the actual .ps1 file, since that file is not owned by a trusted owner.

Those are two examples. The last one I’d like to execute is a Word document we received via Outlook. We open it and simply enable macros.

After waiting a few minutes, you notice the file name has changed, and while we were waiting, more payloads could potentially have been downloaded. As you can see, we now have CryptoLocker loaded and our files are encrypted.

What I’d like to do now is demonstrate what this looks like on a workstation with Application Control enabled. We launch that same document, enable content just the same, and you see the file name did change, but this user is not authorized to execute CryptoLocker, so we can simply close out of the document. Application Control protected us. If we switch back to the unprotected machine, the files there are encrypted.

That concludes my demo. Thank you for your time.

Trevor Prokop
Architect & Director of Professional Services
LinkedIn Profile


Checking System Readiness for the Bromium Platform

The Bromium Platform has several hardware and software requirements to fully function on an endpoint. Since the Bromium Client itself does not check many of these requirements until after installation, it’s difficult to know ahead of time which machines require remediation prior to deployment.

To address this issue, I wrote PowerShell scripts to take an inventory of machines in your environment and compile a report using minimal infrastructure.

Requirements

The solution is designed to be deployed without depending on an endpoint management or software delivery platform. It does, however, require a scheduled task to run Endpoint_CDABromiumReadiness.PS1 on each endpoint and a centralized file share where the script can save the collected inventory data. To summarize, the following components are necessary for this solution to work:

  • File Shares – Location for collected data
  • Scheduled Task – Executes the BromiumReadiness script
  • BromiumReadiness PowerShell script – Collects inventory data from endpoint
  • Compiler script – Aggregates collected data into a readable report

File Shares

The Endpoint_CDABromiumReadiness.PS1 script collects inventory data from the endpoint, and although the data could be stored on the machine itself, it would require a significant amount of overhead to log into each machine and gather it. To facilitate a simpler method of data collection, the script is designed to write the inventory data to a centralized file share. This file share can be one that already exists in your environment or can be created for the purpose of this solution.

The example that I used to create the file share where the script will store inventory data has these properties:

  • Name of folder: TestShare
  • Name of share: TestShare
  • Share permissions: Allow: Change, Read
  • Folder permissions: Allow: Create files / write data, Create folders / append data

Figure 1 – Share Permissions: Allow: Change, Read

NOTE: The name TestShare is used as an example. A more descriptive name would be preferable


Figure 2 – Folder Permissions: Allow: Create files / write data, Create folders / append data
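
If you are creating this share from scratch, the following PowerShell sketch produces an equivalent setup. It uses 'Everyone' for brevity; in practice you would scope the permissions to your endpoints' computer accounts.

# Create the folder and share it with Change (which includes Read) share permissions
New-Item -Path 'C:\TestShare' -ItemType Directory -Force | Out-Null
New-SmbShare -Name 'TestShare' -Path 'C:\TestShare' -ChangeAccess 'Everyone'

# Grant the NTFS rights the endpoints need to drop their reports:
# Create files / write data and Create folders / append data
$acl  = Get-Acl 'C:\TestShare'
$rule = [System.Security.AccessControl.FileSystemAccessRule]::new('Everyone', 'CreateFiles, AppendData', 'ContainerInherit, ObjectInherit', 'None', 'Allow')
$acl.AddAccessRule($rule)
Set-Acl -Path 'C:\TestShare' -AclObject $acl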

The other file share is a network location where the Endpoint_CDABromiumReadiness.PS1 PowerShell script is stored for execution by the Scheduled Task. This file share can be a read-only location, as the script is only read from it.

The example that I use for a file share location where I store this script is:

\\dc01\ScriptShare\Endpoint_CDABromiumReadiness.ps1

Scheduled Task

Since there is no requirement to use a software delivery platform to deploy Endpoint_CDABromiumReadiness.PS1, the simplest method for deploying and executing the script is a Scheduled Task. Creating the scheduled task on each workstation would be time-consuming and inefficient, so the better approach is to create it through an Active Directory Computer Configuration GPO preference. An existing or a new GPO can be used; it needs to be linked to the OU or OUs that contain the workstations in the environment.

To create a Scheduled Task as a GPO preference, open the GPO using the Group Policy Management Console (GPMC) and navigate to:

Computer Configuration > Control Panel Settings > Scheduled Tasks


Figure 3 – GPO Preference – Scheduled Tasks

Right-Click “Scheduled Tasks” and choose New > Scheduled Task (Windows Vista and later)

A New Task (Windows Vista and later) Properties window should appear as follows:


Figure 4 – New Task (Windows Vista and later) Properties

Change the Action dropdown from Update to “Create”

Under the General tab, the following parameters should be entered:

  • Name: Bromium Readiness
  • User Account: NT AUTHORITY\System
  • Security Options: Run whether user is logged on or not
  • Security Options: Run with highest privileges
  • Hidden: Enabled

Figure 5 – General tab

Under the Actions tab, click “New” then in the New Action window, enter the following:

Program/Script:

C:\Windows\System32\WindowsPowerShell\v1.0\Powershell.exe

Add Arguments (optional):

-ExecutionPolicy Bypass -Command "& '\\<>\Endpoint_CDABromiumReadiness.ps1' -CopyToLocation '\\dc01\testshare\'"

Figure 6 – New Action window

NOTE: The name of the file server and shares are used as an example. Your UNC path would include the location of the Endpoint_CDABromiumReadiness.PS1 in a central file share and the data collection file share as created above. These UNC paths may not necessarily be the same.

Under the Triggers tab, click “New” then in the New Trigger window define the parameters for when to execute the scheduled task:


Figure 7 – New Trigger window

The script should be run at least once, but it is advisable not to run it continuously; the inventory data is only needed to assess each machine’s readiness for the Bromium Client deployment. It is not designed to be a maintenance task.

When the Scheduled Task executes, the Endpoint_CDABromiumReadiness.PS1 PowerShell script will gather the required information from the endpoint, generate a tsv file, and copy the file to the file share specified in the CopyToLocation parameter.
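
To give a sense of the pattern without reproducing the full script, the sketch below shows the general shape of what happens on each endpoint; the properties collected here are illustrative and are not the script’s actual field list.

# Illustrative only - the real checks live in Endpoint_CDABromiumReadiness.PS1
param([string]$CopyToLocation)

# Gather a few readiness-related facts about this machine
$os  = Get-CimInstance -ClassName Win32_OperatingSystem
$cs  = Get-CimInstance -ClassName Win32_ComputerSystem
$row = [pscustomobject]@{
    ComputerName = $env:COMPUTERNAME
    OSVersion    = $os.Version
    MemoryGB     = [math]::Round($cs.TotalPhysicalMemory / 1GB, 1)
}

# Write a per-machine tsv locally, then drop it on the central share
$file = "$env:TEMP\$($env:COMPUTERNAME).tsv"
$row | Export-Csv -Path $file -Delimiter "`t" -NoTypeInformation
Copy-Item -Path $file -Destination $CopyToLocation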

BromiumReadiness Script

This PowerShell script collects the inventory data from the endpoints and is contained here:


Figure 8 – BromiumReadiness script


Compiler Script

This PowerShell script aggregates the inventory data located in the file share, combining the individual tsv files generated by each endpoint into a single file that can be reviewed in Excel.
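
Conceptually, the compilation step is a simple merge; a minimal sketch of that pattern (not the shipped script) would be:

# Merge every per-endpoint tsv in the current folder into one report,
# skipping the merged output file itself
Get-ChildItem -Path . -Filter '*.tsv' |
    Where-Object { $_.Name -ne 'BromiumReadinessReport.tsv' } |
    ForEach-Object { Import-Csv -Path $_.FullName -Delimiter "`t" } |
    Export-Csv -Path '.\BromiumReadinessReport.tsv' -Delimiter "`t" -NoTypeInformation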

The Compiler_CDABromiumReadiness.PS1 is contained within the zip file.


Figure 9 – Compiler script

It is preferable to keep the Compiler script in the same file share as the tsv files that are generated so that it can be run as needed.


Figure 10 – Compiler script stored in file share

To execute the compiler script, open Windows PowerShell and run:

.\Compiler_CDABromiumReadiness.ps1

A finished report will look like this:


Figure 11 – Finished report

Running Multiple Rounds of Readiness Checks (Optional)

If the Scheduled Task runs multiple times, it will overwrite the inventory data previously collected for the endpoints. To keep previous data collections, you can run multiple rounds of readiness checks. This is also useful when you need to run the inventory more than once and expect different results.

To do this, simply add the -ReadinessCheckRound parameter to the end of the execution of the BromiumReadiness script with a number indicating the round. This parameter is set to 1 by default and tags the tsv files. Notice in the image that the number 1 precedes the device name:


Figure 12 – Round number in tsv file name

And here is an example of the command line to use. Change the command line in the scheduled task that was created in the GPO to include the -ReadinessCheckRound parameter.

-ExecutionPolicy Bypass -Command "& '\\dc01\ScriptShare\Endpoint_CDABromiumReadiness.ps1' -CopyToLocation '\\dc01\testshare\' -ReadinessCheckRound 2"

When the scheduled task runs again, the Endpoint_CDABromiumReadiness.PS1 script will generate tsv files with the round number preceding the name of the device:


Figure 13 – TSV files with multiple rounds

Add the -ReadinessCheckRound parameter when executing the Compiler script and the new report generated will show only data from that round.

.\Compiler_CDABromiumReadiness.ps1 -ReadinessCheckRound 2

Figure 14 – Compiled report from round

Sincerely,

Aman Motazedian
Senior Consultant
Critical Design Associates

LinkedIn Profile

SWIFT Wire Transfer Security for the Banking Industry

 

Banks and financial institutions across the world have lost millions of dollars since 2015 due to cybercrime, with one of the primary targets being the SWIFT wire transfer system.

To combat this crisis, SWIFT has presented new guidance for financial institutions to help them maintain proper security standards and ensure criminals cannot gain unauthorized access to financial systems.

Banks are commonly targeted by cybercriminals. In addition to constant phishing threats, banks deal with security issues regarding legacy browsers, operating systems, and applications. The dependence on legacy technology often allows attackers to easily gain access to valuable assets.

Cyber-attackers have used these legacy applications, phishing, and other techniques to gain access to multiple banks across the world, allowing them to steal hundreds of millions of dollars. The attackers were extremely careful and patient, monitoring the individual banks’ environments for months, which allowed them to harvest multiple credentials, record regular SWIFT transfers, and plan virtually untraceable transactions.

How can banks protect themselves against these types of attacks?

Critical Design Associates has had success assisting top banks in the United States in building secure delivery platforms, with a particular emphasis on SWIFT and wire transfer systems. Delivering a secure platform requires many layers of security and a clear understanding of the wire transfer security requirements.

What is our solution? Critical Design recommends a multi-layered approach built on people, process, and platform. These solutions often focus on the following areas:

People: Reviewing the existing system user authentication, authorization, and auditing roles within the organization and verifying that access is effectively limited and monitored by design.

Process: Reviewing, building or modifying the existing processes to achieve a more secure and auditable system with as many security barriers and sensors as possible to thwart fraudulent behavior. We also work with our customers to ensure effective response processes are in place based on the “assume you are breached” concept.

Platform: Building secure enclaves, thereby limiting access to “locked down” workstations, servers, data, and networks where wire transfers are performed. This is achieved by:

  • Applying industry standard security configurations
  • Implementing Multi-factor Authentication (“MFA”) and requiring multiple forms of authentication and authorization, above and beyond the typical two-factor systems where only a username, password, and token code are required (e.g., Vasco and RSA). The design model we deliver adds additional layers of authentication, authorization, and auditing by adding group-level, device-level, and application-level restrictions. This minimizes the attack surface significantly.
  • Deploying Next Generation Anti-Virus (“NGAV”): NGAV not only protects against commodity malware but is also able to prevent novel attacks by evaluating behavior and context. This provides visibility to identify all vital information in the “kill chain” to remediate the attack.
  • Implementing Application Controls/Whitelisting: While this approach is not “fool proof” it does provide for additional security and another opportunity to trigger on abnormal or malicious activity. Application Control provides tools to determine which users, systems, and applications can communicate at a very granular level.
  • Deploying Non-Persistent Virtualization: A typical attack often requires persistence to be established by the attacker. With non-persistent virtualized computing environments we are able to protect against many forms of attacks where persistence is required.
  • Deploying Micro-Segmentation solutions for desktops and servers: This allows for greater control, sensor placement, and containment of threats. Granularly restricted network access protects against an attacker’s ability to establish command and control (“C2”). Command and control is a typical tactic employed by criminals after persistence is established. Micro-segmentation allows security policies to be defined explicitly.
  • Employing granular logon and session timeout controls: Time-based session controls add security by determining when and how long a user can have specific applications open before requiring login credentials to be entered again. It also allows for greater monitoring of when computers and applications are being accessed; abnormal access times indicate suspicious behavior and trigger alerts.

Can you confidently say your environment is truly secure? Are you prepared for a cyber-attack?
Let’s make sure your wire transfer system is not vulnerable to these attacks.

Contact us below to continue the conversation