Operationalizing Micro-segmentation – NSX Securing “Anywhere” – Part III

Welcome to part 3 of the Micro-Segmentation Defined – NSX Securing “Anywhere” blog series. This installment covers how to operationalize NSX Micro-Segmentation. Be sure to check out Part 1 on the definition of micro-segmentation and Part 2 on securing physical workloads with NSX.

This blog covers the following topics:

  1. Micro-segmentation design patterns
  2. Determining appropriate security groups and policies
  3. Deploying micro-segmentation
  4. Application lifecycle management with vRealize Automation and NSX
  5. Day 2 operations for micro-segmentation

Micro-segmentation design patterns

Micro-segmentation can be implemented based on various design patterns reflecting specific requirements. The NSX Distributed Firewall (DFW) can be used to provide controlled communication between workloads independent of their network connectivity; these workloads can, for example, all connect to a single VLAN. Distributed logical switches and routers can be leveraged to provide isolation or segmentation between different environments or application tiers, regardless of the underlying physical network, along with many other benefits. Furthermore, the NSX Edge Services Gateway (ESG) can provide additional functionality such as NAT or load balancing, and the NSX Service Insertion framework enables partner services such as L7 firewalling, agent-less anti-virus, or IPS/IDS to be applied to workloads that need additional security controls.

Figure 1: Leveraging the DFW to provide granular control within a single network segment.

Choosing an appropriate design pattern is an important decision when preparing to operationalize micro-segmentation. The benefits of using overlay-based virtualized networking and the potential need for additional security controls should be considered to make the appropriate design choice.

Figure 2: Distributed Logical Routing, firewalling and partner service insertion

Determining appropriate Security Groups

Figure 3: Security Groups and Policies

While Security Policies determine how something should be secured, Security Groups determine what is secured. Security groups can be defined based on many different types of criteria: network constructs such as IP addresses, infrastructure constructs such as logical switches, or application constructs such as virtual machines. Members can be added to a security group statically or dynamically, for example based on the presence of a particular security tag.

Security tags are a way to label workloads. An administrator can tag a workload manually, for example to identify it as being part of a PCI environment. Third-party NSX partner services such as anti-virus or vulnerability management can also tag a particular VM based on a certain condition, such as a vulnerability found on the workload.

Figure 4: Example of Dynamic Security Group membership based on multiple criteria

A virtual machine can be a member of multiple security groups, which allows multiple levels of segmentation to be applied to all applications. One security group can specify whether an application is deployed in a production or development environment, while another security group applied to the same workload determines whether it is connected to a web-tier, application-tier, or DB-tier logical switch, and a third security group can specify the application or application instance the workload virtual machine is part of. The security policies applied to each of these security groups are combined and applied to every workload that is a member of them. When a new application is on-boarded, its workloads can then be added to the appropriate security groups. Instead of using this layered approach to security groups, it is also possible to create security groups specific to an individual application's tiers.
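The layered-group idea can be modeled as a union of rules. The sketch below is an illustration only, not an NSX construct; the group and rule names are hypothetical.

```python
# Sketch: how policies from multiple security group memberships combine
# on a single workload. Group and rule names are hypothetical.

def effective_rules(vm_groups, policies_by_group):
    """Union of all rules from every security group the VM belongs to."""
    rules = []
    for group in vm_groups:
        for rule in policies_by_group.get(group, []):
            if rule not in rules:  # de-duplicate while keeping order
                rules.append(rule)
    return rules

policies_by_group = {
    "SG-Production":  ["block-nonprod-to-prod"],
    "SG-Web-Tier":    ["allow-https-in", "allow-app-tier-out"],
    "SG-App-Finance": ["allow-finance-db"],
}

# One VM, three layered group memberships -> one combined rule set.
vm_rules = effective_rules(
    ["SG-Production", "SG-Web-Tier", "SG-App-Finance"], policies_by_group)
```

Because the effective policy is the combination of all memberships, adding a workload to an environment group, a tier group, and an application group yields the full layered policy with no per-application rule duplication.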

Determining appropriate Policies

Determining the appropriate security groups and firewall policies for numerous complex applications in an organization can be challenging. Applications, whether custom or off-the-shelf, may not be well documented, making it hard to determine which communication paths (and corresponding firewall rules) need to be opened for the application to function, while ensuring all other ports are closed to adhere to a least-privilege strategy with a micro-segmentation architecture.

Gathering information about the application and its connectivity requirements by investigating its documentation or working with the application team is one way to perform the necessary application discovery. However, several practices and tools exist to make this application discovery process easier.

One option is to investigate connection logs. This process consists of creating a catch-all firewall policy with a logging action, applying that policy to the application being on-boarded, and then investigating the firewall connection logs to create the granular rules required for the application, finishing with a default deny rule applied to it.

Figure 5: Using the Log Insight Field Table for application discovery



vRealize Log Insight can be leveraged for application discovery through connection log investigation, after enabling the logging action in the Distributed Firewall. Scripting against the firewall logs also makes it possible to clean up, de-duplicate, and parse the logs and automatically generate recommended firewall policies based on the observed connections.
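A minimal sketch of that scripting step, assuming a simplified stand-in for the real DFW log format (the actual syslog fields differ; consult the NSX logging documentation):

```python
# Sketch: de-duplicating DFW connection log lines into candidate firewall
# rules. The log format below is a simplified stand-in for real DFW syslog
# output, which should be parsed per the NSX logging documentation.
import re

LOG_LINE = re.compile(
    r"PASS .* (?P<proto>TCP|UDP) (?P<src>[\d.]+)/\d+->(?P<dst>[\d.]+)/(?P<dport>\d+)")

def recommend_rules(log_lines):
    """Collapse observed connections into unique (src, dst, port, proto) rules."""
    seen = set()
    rules = []
    for line in log_lines:
        m = LOG_LINE.search(line)
        if not m:
            continue
        key = (m["src"], m["dst"], m["dport"], m["proto"])
        if key not in seen:  # de-duplicate repeated flows
            seen.add(key)
            rules.append({"source": key[0], "destination": key[1],
                          "port": key[2], "protocol": key[3]})
    return rules

logs = [
    "PASS domain-c7 INET TCP 10.0.1.10/52311->10.0.2.20/1433",
    "PASS domain-c7 INET TCP 10.0.1.10/52399->10.0.2.20/1433",  # same flow, new ephemeral port
    "PASS domain-c7 INET TCP 10.0.0.5/44122->10.0.1.10/443",
]
rules = recommend_rules(logs)
```

Ignoring the ephemeral source port when building the de-duplication key is what collapses thousands of repeated connections into a handful of candidate rules for review.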

Another option for determining appropriate security groups and firewall policies is using the Arkin micro-segmentation platform. This solution collects IPFIX data from Distributed Virtual Switches in the datacenter and provides network flow assessment and analytics. The network flow analytics helps determine the right security groups and firewall rules to achieve a zero-trust architecture.

Figure 7: Arkin Flow Analysis

The Arkin micro-segmentation planner organizes virtual machines into logical groups based on compute and network visibility, and provides a blueprint to put security groups and firewall rules in place. The analysis, modeling, and visualization provided by Arkin make the process of operationalizing micro-segmentation with the right security groups and firewall rules straightforward.

Deploying micro-segmentation

After deciding which design pattern fits the requirements for our environment, we can start with the actual NSX installation. The installation process is covered in detail in the NSX installation guide. Once NSX Manager is installed, clusters can be prepared with NSX; once hosts are prepared, a default Distributed Firewall policy is applied to all the prepared clusters.

NSX can be deployed in a net-new datacenter (also called a greenfield) or a brownfield datacenter where applications have previously been deployed. The main difference between deploying micro-segmentation in a greenfield environment versus a brownfield environment is that in a brownfield environment we need to ensure existing application connectivity and availability are not compromised when micro-segmentation policies are put in place. That is why, upon deployment of NSX, the Distributed Firewall is configured with a default-allow policy. The next step in deploying micro-segmentation is creating a granular firewall policy and applying it to existing applications, or to applications as they are on-boarded in the case of a greenfield environment. At the same time, network overlays can be implemented to provide distributed virtual routing, and partner services can be deployed to provide additional security controls.

Application Lifecycle Management with vRealize Automation and NSX

Traditional ticket-based IT can no longer support the increased agility required by lines of business and the dynamic nature of the cloud. New applications need to be on-boarded quickly, and in an automated self-service way, freeing up time for innovation rather than manual implementation. While this self-service model is well understood for the provisioning of workloads, configuration of the appropriate networking and security has often been a more manual process.

NSX is fully integrated with vRealize Automation and can be integrated with other Cloud Management Platforms through the NSX RESTful API. With vRealize Automation, the provisioning of network and security services can be done in lockstep with application on-boarding. Security controls are deployed as part of the automated delivery of an application. The benefits of automation include:

  • Faster application delivery through a standardized and repeatable process
  • Greater reliability and consistency
  • Reduced Opex by eliminating manual configuration tasks

Figure 8: vRealize Automation and NSX

With vRealize Automation and NSX, the administrator can define vRealize Automation application blueprints that specify NSX security policies for each application and application tier. These security policies include native Distributed Firewall rules, but also partner integration services such as L7 firewalling or agent-less Anti-Virus.
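For Cloud Management Platforms other than vRealize Automation, the integration point is the NSX RESTful API mentioned above. The sketch below builds such an API request using only the standard library; the endpoint path and XML body are simplified assumptions, and the exact schema should be taken from the NSX API guide. The HTTP POST itself is shown only as a comment.

```python
# Sketch of driving NSX through its RESTful API from a custom CMP. The
# endpoint path and XML body are simplified assumptions; consult the NSX
# API guide for the exact schema. The request is built but not sent.
import xml.etree.ElementTree as ET

def security_group_request(name, description=""):
    """Build (method, path, xml_body) for creating an NSX security group."""
    sg = ET.Element("securitygroup")
    ET.SubElement(sg, "name").text = name
    ET.SubElement(sg, "description").text = description
    body = ET.tostring(sg, encoding="unicode")
    # An HTTP client would POST this with basic auth, e.g.:
    # requests.post(f"https://{nsx_mgr}{path}", data=body, auth=(user, pw),
    #               headers={"Content-Type": "application/xml"})
    return "POST", "/api/2.0/services/securitygroup/bulk/globalroot-0", body

method, path, body = security_group_request("SG-App01-Web", "Web tier of App01")
```

A CMP would issue requests like this for each security group, policy, and membership change in lockstep with its own provisioning workflow.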

Different options exist for automating application delivery and micro-segmentation using vRealize Automation. One method is to use security groups and policies representing application tiers that have been pre-configured in NSX. The pre-created policies should only allow inter-tier communication for specific services (for example, allow MSSQL between the App and DB tiers). When creating a vRA blueprint, you can then attach your application's workloads to their respective tiers. With this approach, you ensure only controlled communication between application tiers is allowed when new applications are deployed from the blueprint.
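The pre-created tier-policy approach boils down to an allow-list with a default deny, which can be sketched as follows; the tier names and service identifiers are hypothetical, with MSSQL mirroring the example above.

```python
# Sketch: evaluating whether a flow between pre-created tier security
# groups is allowed. Tier names and service identifiers are hypothetical;
# the MSSQL entry mirrors the App-to-DB example in the text.

INTER_TIER_POLICY = {
    ("SG-Web", "SG-App"): {"HTTP-8080"},
    ("SG-App", "SG-DB"):  {"MSSQL-1433"},  # only MSSQL between App and DB
}

def flow_allowed(src_group, dst_group, service):
    """Default deny: a flow passes only if its service is explicitly listed."""
    return service in INTER_TIER_POLICY.get((src_group, dst_group), set())
```

Because the lookup defaults to an empty set, anything not explicitly permitted between a pair of tiers is denied, which is the least-privilege behavior the pre-created policies are meant to enforce.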

Another option is to use the App Isolation feature inside of a vRA multi-machine blueprint. This is a simple checkbox that, once checked, ensures security groups and policies are automatically created for every instance of the application that gets deployed, completely isolating the application from any other applications or application instances.

Figure 9: vRealize Automation App Isolation checkbox

Finally, when creating a blueprint, we can choose to use on-demand security groups and rules for each application instance. In this approach, you define security policies in NSX but don't assign them to any particular security group yet. When you define a multi-machine blueprint in vRA, you can then attach on-demand security groups to your application tiers and select the relevant security policy. Every time an application is deployed from this blueprint, unique security groups are created, isolating each application instance from any other instance, while at the same time micro-segmenting each application instance by use of the pre-configured policy.
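The per-instance behavior can be sketched as below; the naming scheme and policy name are illustrative assumptions, not vRA's actual convention.

```python
# Sketch: generating unique per-instance security groups at deployment
# time, mirroring how on-demand security groups isolate each application
# instance while reusing one pre-configured policy. Names are hypothetical.
import uuid

def on_demand_groups(app_name, tiers, policy="micro-seg-standard"):
    """Create uniquely named security groups for one application instance."""
    instance_id = uuid.uuid4().hex[:8]  # unique suffix per deployment
    return {tier: {"group": f"SG-{app_name}-{instance_id}-{tier}",
                   "policy": policy}
            for tier in tiers}

inst1 = on_demand_groups("App01", ["web", "app", "db"])
inst2 = on_demand_groups("App01", ["web", "app", "db"])
# Each deployment gets its own groups, so instances are isolated from
# each other while sharing the same pre-configured policy.
```

Two deployments of the same blueprint produce disjoint group names, which is what keeps instance-to-instance traffic blocked without writing any instance-specific firewall rules.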

Day 2 operations for micro-segmentation

Once we have on-boarded our applications and applied the appropriate networking and security controls, we may be required to verify that the correct level of protection has indeed been applied. The vSphere Web Client provides visibility into all the firewall rules, as well as the third-party services, that have been applied to every workload.

Furthermore, log analytics tools such as vRealize Log Insight with the NSX content pack or the Splunk App for NSX can be used to collect logs on allowed and dropped flows and provide visibility into inter- and intra-application flows.

Another option for day 2 micro-segmentation operations is the use of the Arkin platform. Arkin provides monitoring, tracking, and auditing of security group memberships and effective firewall rules, enabling rapid troubleshooting and compliance. It can generate alerts when inconsistencies occur, ensuring the actual implementation remains compliant with the design.

Figure 10: Arkin Events Widget

Arkin also provides a timeline feature which can, for example, be used to investigate security group membership or the effective firewall policies applied to an application at any point in time. This enables the operations team to quickly identify the cause of issues related to application functionality or compliance, such as an application that is no longer functioning or blocked flows between development and test environments.


Operationalizing NSX starts with determining the appropriate design pattern based on network and security requirements. These design patterns can leverage the Distributed Firewall to control communication as well as overlay-based logical switches, virtual routers and partner service insertion. Determining the appropriate security groups and policies to implement a zero-trust model through micro-segmentation while not impacting the application functionality is crucial. Several practices and solutions exist to make that process easier. vRealize Automation and other Cloud Management Platforms can integrate with NSX to automate application delivery including the appropriate security and networking.


The post Operationalizing Micro-segmentation – NSX Securing “Anywhere” – Part III appeared first on The Network Virtualization Blog.

NSX-V Multi-site Options and Cross-VC NSX Design Guide

The goal of this design guide is to outline several NSX solutions available for multi-site data center connectivity before digging deeper into the details of the Cross-VC NSX multi-site solution.

Learn how Cross-VC NSX enables logical networking and security across multiple vCenter domains/sites and how it provides enhanced solutions for specific use cases.

No longer is logical networking and security constrained to a single vCenter domain. Cross-VC NSX use cases, architecture, functionality, deployment models, design, and failure/recovery scenarios are discussed in detail.

This document is targeted toward virtualization and network architects interested in deploying the VMware NSX network virtualization solution in a vSphere environment.

The design guide addresses the following topics:

  • Why Multi-site?
  • Traditional Multi-site Challenges
  • Why VMware NSX for Multi-site Data Center Solutions?
  • NSX Multi-site Solution

It also covers the Cross-VC NSX:

  • Use Cases
  • Architecture and Functionality
  • Deployment Models
  • Design Guidance
  • Failure/Recovery scenarios

pfSense 2.3.2-RELEASE Now Available!

We are happy to announce the release of pfSense® software version 2.3.2!

This is a maintenance release in the 2.3.x series, bringing a number of bug fixes. The full list of changes is on the 2.3.2 New Features and Changes page.

This release includes fixes for 60 bugs, along with 8 features and 2 todo items completed.

If you haven’t yet caught up on the changes in 2.3.x, check out the Features and Highlights video. Past blog posts have covered some of the changes, such as the performance improvements from tryforward, and the webGUI update.

Upgrade Considerations

As always, you can upgrade from any prior version directly to 2.3.2. The Upgrade Guide covers everything you’ll need to know for upgrading in general.  There are a few areas where additional caution should be exercised with this upgrade if upgrading from 2.2.x or an earlier release, all noted in the 2.3 Upgrade Guide.

For those upgrading from a 2.3 beta or RC version who have not yet upgraded to 2.3-RELEASE, please see this post.

Known Regressions

While nearly all of the common regressions between 2.2.6 and 2.3-RELEASE have been fixed in subsequent releases, the following still exist:

  • IPsec IPComp does not work. This is disabled by default, and as of 2.3.1 it is automatically left disabled to avoid encountering this problem. Bug 6167
  • IGMP Proxy does not work with VLAN interfaces, and possibly other edge cases. Bug 6099. This is a little-used component. If you’re not sure what it is, you’re not using it.
  • Those using IPsec and OpenBGPD may have non-functional IPsec unless OpenBGPD is removed. Bug 6223


Compared to pfSense 2.2.x, the list of available packages in pfSense 2.3.x has been significantly trimmed.  We have removed packages that have been deprecated upstream, no longer have an active maintainer, or were never stable. A few have yet to be converted for Bootstrap and may return if converted. See the 2.3 Removed Packages list for details.  pfSense 2.3.2 does bring back ntopng, and the vnstat (traffic totals) package is new.

pfSense software is Open Source

For those who wish to review the source code in full detail, the changes are all publicly available in three repositories on GitHub. 2.3.2-RELEASE is built from the RELENG_2_3_2 branch of each repository.

  • Main repository – the web GUI, back-end configuration code, and build tools.
  • FreeBSD source – the FreeBSD 10.3 base source code, with patches.
  • FreeBSD ports – the FreeBSD ports used.


Downloads are available on the mirrors as usual.

Downloads for New Installs and Upgrades to Existing Systems – note it’s usually easier to just use the auto-update functionality, in which case you don’t need to download anything from here. Check the Firmware Updates page for details.

Supporting the Project

Our efforts are made possible by the support of our customers and the community. You can support our efforts via one or more of the following.

  • pfSense Store – official hardware, apparel and pre-loaded USB sticks direct from the source. Our pre-installed appliances are the fast, easy way to get up and running with a fully-optimized system. All are now shipping with the 2.3 release installed.
  • Gold subscription – immediate access to past hangout recordings as well as the latest version of the book after logging in to the members area.
  • Commercial Support – Purchasing support from us provides you with direct access to the pfSense team.
  • Professional Services – For more involved and complex projects outside the scope of support, our most senior engineers are available under professional services.

Outlook for Mac and public folder access

Exchange Server 2013 introduced modern public folders, and also a shift in the way clients access public folders. Ever since, the Outlook for Mac client has had limited or no support for public folders.

This article provides an update on how Outlook 2016 for Mac clients can access public folders in various topologies.

Current Scenario

Outlook for Mac clients could not access public folders in the following scenarios:

Co-existence with legacy public folders

  • Legacy public folders deployed on Exchange Server 2010 SP3 and the user mailbox on Exchange Server 2013/Exchange Server 2016 in the same organization.

Modern public folder access in Hybrid topology

  • Exchange Server 2013/Exchange Server 2016 in hybrid mode with an Office 365 tenant.
    • Scenario 1 – Modern PFs deployed on-premises: on-premises users, with mailboxes on Exchange Online, accessing modern public folders deployed in Exchange on-premises.
    • Scenario 2 – Modern PFs deployed in Office 365: on-premises users, with mailboxes on Exchange on-premises as well, accessing modern public folders deployed in an Office 365 tenant.


The April 2016 update of Outlook 2016 for Mac, along with changes in a Cumulative Update for Exchange Server, makes public folders in the above scenarios work for Outlook 2016 for Mac.

The following table summarizes access state for Outlook 2016 for Mac (post April 2016 update) access to public folder deployments, as well as the minimum Exchange CU required to enable access:


| Public folder deployed on | User mailbox on E2010 SP3+ | User mailbox on E2013 | User mailbox on E2016 | User mailbox on Office 365 (EXO tenant) |
|---|---|---|---|---|
| Exchange Server 2010 SP3+ | Yes | Yes | Yes | Not supported |
| Exchange Server 2013 CU13 | Not supported | Yes | Yes | Yes |
| Exchange Server 2016 CU2 | Not supported | Yes | Yes | Yes |
| Office 365 tenant | Not supported | Yes | Yes | Yes |

We’ll take a look at one of the common scenarios and explain the steps required. TechNet has detailed steps on how to configure public folder access in hybrid as well as for co-existence scenarios.


EXO users accessing modern public folders hosted in the on-premises organization.


Make sure all of the following pre-requisites are met before making any changes to your configuration:

Client side:

Make sure the Outlook 2016 for Mac client is installed with the April 2016 update, at the minimum. It’s recommended to install the latest available update.

Server side:

1) The Exchange organization is configured in hybrid mode with an Office 365 tenant, DirSync is working, and the mail-enabled public folder synchronization script has been run.

2) The on-premises Exchange servers hosting public folder mailboxes must be on or above one of the following:

  1. Exchange Server 2013 CU13
  2. Exchange Server 2016 CU2

3) Make sure the user that will be accessing public folders has a user account on-premises and a mailbox in EXO.

You should be able to see the user listed in Get-RemoteMailbox:




Pure EXO user mailboxes cannot access public folders hosted on-premises.

4) Make sure the PF mailbox from on-premises shows as a Mail User:



Office 365 tenant:



1) Configure public folder access settings at the EXO tenant:

Set-OrganizationConfig -PublicFoldersEnabled Remote -RemotePublicFolderMailboxes <Mail User representing on-premises PF>

For example:

Set-OrganizationConfig -PublicFoldersEnabled Remote -RemotePublicFolderMailboxes OP1

2) The EXO user mailboxes are automatically assigned a DefaultPublicFolderMailbox:


That’s it! Outlook 2016 for Mac can now subscribe to and access public folders.

Related articles

Detailed configuration steps for each scenario are in the following articles:

Co-existence with legacy public folders

User mailboxes are on Exchange 2013/Exchange 2016 servers accessing legacy public folders on Exchange Server 2010


Hybrid topology

Scenario 1 – modern PFs deployed on-premises

On-premises users, with mailboxes on Exchange Online, accessing modern public folders deployed in Exchange on-premises:


Scenario 2 – modern PFs deployed in Office 365

On-premises users, with mailboxes on Exchange on-premises as well, accessing modern public folders deployed in an Office 365 tenant:


Public Folder Team