New 3PAR 9450 offers higher performance & scale for the mid-range

A few weeks ago, leading up to HPE Discover, HPE introduced the brand-new 9450 all-flash model of its 3PAR storage arrays, and the company showed off the new arrays during the event.  The 9450 joins the current 8000 and 20000 series arrays and is the first in a new 9000 series.  Like the 8000 series, the new model is targeted at the mid-range, with up to 4 controllers in an active mesh configuration.  HPE is quick to say that the 9450 is not a replacement for the 8450 all-flash model, positioning the new array instead for customers who want larger scale, higher performance, and the ability to utilize NVMe.

The new 9450 is an all-flash offering, and HPE says it will deliver double the scale and double the horsepower of the 8450.  The 9450 is based on the same HPE Gen5 ASICs that power the 8000 series arrays but offers greater processor scale and more DRAM for operation.  In terms of resiliency, HPE says the new array boasts six-nines (99.9999%) availability.

Digging into the announcement, what are the highlights?

  • Scale to 18 PB of usable all-flash capacity on 6 PB of raw capacity (assuming the standard 3:1 efficiency ratio)
  • 1.8 million IOPS at sub-millisecond latency (a 70% increase)
  • Up to 34 GB/s of bandwidth
  • Storage Class Memory and NVMe ready
  • 80 host ports, more than 3x the 8450's count

Details on the processors are not yet available, so a reasonable assumption is that the array will use the new Intel processors set to release in the coming weeks.

Density Differences

The 3PAR StoreServ 9450 utilizes the same controller design as the 20000 series arrays.  Its 8U chassis for the 4 controllers lacks any SSD drive bays, meaning that the 8000 series models, with controllers integrated into a drive shelf, can accommodate more SSDs in fewer rack units.  The trade-off is that the 9450's controllers have space to accommodate NVMe storage, which the 8000 series controllers do not – so in the 9450, you can get some storage in the controllers themselves.

The new 9000 series and 20000 series controllers are 2U in size, meaning more room for processors, memory, and half-height PCI cards.  All of this equates to the ability to push the high-end limits of the mid-range 3PAR higher than ever.

Fave Products: Anker’s PowerCore Fusion 5000 2-in-1 is the best of both worlds

Anker makes a lot of my favorite accessories. I recently picked up an Anker PowerCore Fusion 5000 2-in-1 charger.  It is both a dual-port USB wall charger and a portable battery.

Ever since I lost the power cable to my first Anker, a dual-port charger (rendering it a useless paperweight), I have found myself favoring models that use standard cables.  The integrated AC plug makes this one even better, since there is nothing to lose.  I don't need a separate charger or USB port to charge the battery; if I do need to charge it from my laptop, there is a micro-USB input, so you have your choice.

This month, I have a lot of travel planned, and the PowerCore Fusion is the perfect travel accessory.  It is often hard to find available AC plugs in hotels.  There's usually one next to the bed, but I have more and more devices to charge (iPhone, iPad, Apple Watch, a laptop or two), plus even more when I'm traveling with family.  I've traveled with a Belkin mini surge protector for the same reason, but with this new Anker battery, many times I don't need it anymore.  And for days of heavier-than-usual iPhone use, or when less-than-great cellular coverage drains the battery faster, the PowerCore keeps me topped up.

In terms of using the battery, I find the weight strikes a really good balance – powerful, but not too heavy.  It can provide a couple of full charges for my iPhone and a really great boost for the iPad.

Speaking of double duty, I also picked up an Anker 6-port USB charger for the office to give me ample high-powered ports to keep any and all of my devices (and my co-workers') charged up.  It has a two-prong power cord, and with 6′ of length it can easily route down to the nearest wall plug.

This works exceptionally well at my work desk – charging my iPhone, occasionally an iPad, my AirPods, and anything else that needs USB power.  I personally hate plugging my devices into my work laptop's USB ports to charge, since a phone could bridge networks and cause issues for me.  I also keep my laptop up high on a shelf, which would leave those USB lines hanging down.

Deep Dive: How Hybrid Authentication Really Works

A hybrid deployment offers organizations the ability to extend the feature-rich experience and administrative control they have with their existing on-premises Microsoft Exchange organization to the cloud. A hybrid deployment provides the seamless look and feel of a single Exchange organization between an on-premises Exchange organization and Exchange Online in Microsoft Office 365. In addition, a hybrid deployment can serve as an intermediate step to moving completely to an Exchange Online organization.

But one of the challenges some customers are concerned about is that this type of deployment requires that some communication take place between Exchange Online and Exchange on-premises. This communication takes place over the Internet and so this traffic must pass through the on-premises company firewall to reach Exchange on-premises.

The aim of this post is to explain in more detail how this server to server communication works, and to help the reader understand what risks this poses, how these connections are secured and authenticated, and what network controls can be used to restrict or monitor this traffic.

The first thing to do is to get some basic terminology clear. With the help of TechNet and other resources, here are some basic definitions;

  • Azure Authentication Service – The Azure Active Directory (AD) authentication service is a free cloud-based service that acts as the trust broker between your on-premises Exchange organization and the Exchange Online organization. On-premises organizations configuring a hybrid deployment must have a federation trust with the Azure AD authentication service. You may have heard this referred to previously as the Microsoft Federation Gateway, and while the two are, technically speaking, quite different implementations, they are essentially the same thing. So, to avoid confusion, we shall refer to both as the Azure Active Directory (AD) authentication service, or Azure Auth Service for short.
  • Federation trust – Both the on-premises and Office 365 service organizations need to have a federation trust established with the Azure AD authentication service. A federation trust is a one-to-one relationship with the Azure AD authentication service that defines parameters and authentication statements applicable to your Exchange organization.
  • Organization relationships – Organization relationships are needed for both the on-premises and Exchange Online organization and are configured automatically by the Hybrid Configuration Wizard. An organization relationship defines features and settings that are available to the relationship, such as whether free/busy sharing is allowed.
  • Delegated Auth (DAuth) – Delegated authentication occurs when a network service accepts a request from a user and can obtain a token to act on behalf of that user to initiate a new connection to a second network service.
  • Open Authorization (OAuth) – OAuth is an authorization protocol – or in other words, a set of rules – that allows a third-party website or application to access a user’s data without the user needing to share login credentials.

A History Lesson

Exchange has had a few different solutions for enabling inter-organization connectivity, which is essentially what a hybrid deployment is: two physically different Exchange orgs (on-premises and Exchange Online) appearing to work as one logical org to the users.

One of the most common uses of this connectivity is to provide users the ability to share free/busy information, so that's going to be the focus of the descriptions used here. Of course, hybrid also allows users to send secure email to each other, but that rarely seems to come up as a concern, as every org lets SMTP flow in and out without much heartache, so we won't be digging into that here. There are other features you get with hybrid, such as MailTips, but these use the same underlying protocol flow as Free/Busy, so if you know how Free/Busy works, you know how they work too.

So, one of the first cross-premises features we released was cross-forest availability. If the two forests did not have a trust relationship then each admin created a service account, gave that service account permissions to objects in their own forest (calendars in this case), and then gave those credentials to the other organization’s admins. If the forests were trusted each admin would instead give permissions to the Client Access Servers from the remote forest to read Free/Busy in their own forest.

Each org admin would then add an Availability Address Space object to their own Exchange org with the SMTP domain details for the other forest and, in the untrusted case, the pre-determined creds for that forest. The admins also had to sync directories between the orgs (or import contacts for users in the remote forest), which was a hassle. But once they did that, lookups for users who had a contact object in the forest triggered Exchange to consult the cross-forest availability config, and then use the previously obtained credentials or server permissions to make a call to the remote forest to request free/busy information.
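For flavor, here is roughly what that configuration looked like from the shell; the forest name and service account below are placeholders:

# Untrusted forest: use a service-account credential supplied by the other
# org's admin to fetch free/busy from their forest.
Add-AvailabilityAddressSpace -ForestName fabrikam.com -AccessMethod OrgWideFB -Credentials (Get-Credential fabrikam\fbservice)

# Trusted forest variant: rely on cross-forest permissions instead of stored creds.
Add-AvailabilityAddressSpace -ForestName fabrikam.com -AccessMethod PerUserFB -UseServiceAccount $true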

The diagram below shows this at a high level for the untrusted forest version of this configuration.

hybridauth1

Clearly there were some shortcomings with this approach. Directory sync was a big requirement for most organizations, and credentials had to be exchanged and managed. Connections went directly from server to server, and AutoDiscover had to be set up and working, as it was used to find the correct EWS endpoints in the remote org. One thing some customers liked, though, was that these connections could be pre-authenticated with an application-layer firewall (TMG, back in the day, was very popular), as creds were used in a Basic handshake, encrypted by SSL.

These shortcomings led us to design a new approach, one that allowed two servers to talk to each other securely without having to exchange credentials or perform a full directory sync.

DAuth

Exchange 2010 and later versions were built to use this thing called the Azure Auth Service, an identity service that runs in the cloud, as a trust broker for federating organizations, enabling them to share information with other organizations.

Exchange organizations wanting to use federation establish a one-time federation trust with the Azure Auth Service, allowing it to become a federation partner to the Exchange organization. This trust allows servers, on behalf of users authenticated by Active Directory (the identity provider for on-premises users), to be issued Security Assertion Markup Language (SAML) On-Behalf-Of Access Tokens by the Azure Auth Service. These On-Behalf-Of Access Tokens allow users from one federated organization to be trusted by another federated organization. The Organization Relationship or sharing policy that must also be set up governs the level of access partner users have to the organization's resources.

With the Azure Auth Service acting as the trust broker, organizations aren’t required to establish multiple individual trust relationships with other organizations and can instead do the one-time trust, or Federation configuration, and then establish Organization Relationships with each partner organization.

The trust is established by submitting the organization’s public key certificate (this certificate is created automatically by the cmdlet used to create the trust) to the Azure Auth Service and downloading the Azure Auth Service’s public key. A unique application identifier (ApplicationUri) is automatically generated for the new Exchange organization and provided in the output of the New Federation Trust wizard or the New-FederationTrust cmdlet. The ApplicationUri is used by the Azure Auth Service to identify your Exchange organization.
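As a rough sketch, the shell version of that one-time setup looks something like this (the thumbprint, trust name, and domain are placeholders; the New Federation Trust wizard handles the certificate creation for you):

# Create the trust using the federation certificate's thumbprint
New-FederationTrust -Name "Microsoft Federation Gateway" -Thumbprint AC00F35CBA8359953F4126E0984B5CCAFA2F4F17

# Get the TXT record proof to publish in DNS, then federate the domain
Get-FederatedDomainProof -DomainName contoso.com
Set-FederatedOrganizationIdentifier -AccountNamespace contoso.com -DelegationFederationTrust "Microsoft Federation Gateway" -Enabled $true

# The ApplicationUri generated for the org shows up here
Get-FederationTrust | Format-List ApplicationUri,TokenIssuerUri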

This configuration allows an Exchange Server to request an On-Behalf-Of Access Token for a user for the purposes of making an authenticated request to an Exchange Server in a different organization (a partner, or perhaps an Exchange Server hosted in Office 365 in the case of hybrid), by referencing their ApplicationUri.

When the on-premises admin then adds an organization relationship for a partner org, Exchange reaches across to the remote Exchange organization anonymously, calling the "GetFederationInformation" method on the /AutoDiscover/AutoDiscover.svc endpoint to read back relevant information such as the federated domains list, the ApplicationUri, etc.
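In shell terms, that anonymous discovery call and the relationship creation look roughly like this (contoso.com stands in for the partner's domain):

# Anonymous call to the partner's AutoDiscover GetFederationInformation method
Get-FederationInformation -DomainName contoso.com

# Pipe the results straight into the new relationship
Get-FederationInformation -DomainName contoso.com | New-OrganizationRelationship -Name "Contoso" -FreeBusyAccessEnabled $true -FreeBusyAccessLevel LimitedDetails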

Here's an example of the entry in the cloud for Contoso's hybrid Exchange deployment – the Get-OrganizationRelationship output in the tenant. You can see we know the AutoDiscover endpoint in the on-premises Exchange organization based on this, and what can be done with this agreement.

DomainNames : {contoso.com}
FreeBusyAccessEnabled : True
FreeBusyAccessLevel : LimitedDetails
FreeBusyAccessScope :
MailboxMoveEnabled : False
MailboxMoveDirection : None
DeliveryReportEnabled : True
MailTipsAccessEnabled : True
MailTipsAccessLevel : All
MailTipsAccessScope :
PhotosEnabled : False
TargetApplicationUri : FYDIBOHF25SPDLT.contoso.com
TargetSharingEpr :
TargetOwaURL : https://mail.contoso.com/owa
TargetAutodiscoverEpr : https://autodiscover.contoso.com/autodiscover/autodiscover.svc/WSSecurity

And the same command when run on-premises results in pretty much the same information with the notable differences seen here:

TargetApplicationUri : outlook.com
TargetOwaURL : http://outlook.com/owa/contoso.onmicrosoft.com
TargetAutodiscoverEpr : https://autodiscover-s.outlook.com/autodiscover/autodiscover.svc/WSSecurity

Now, when a user (Mary in our picture below) in Contoso's on-premises Exchange environment requests free/busy for a user (Joe) in Contoso's online tenant (the flow works the same for a partner org with an organization relationship), here's what happens.

hybridauth2

  1. The on-premises contoso.com Exchange Server determines that the target user is external and looks up the Organization Relationship details to find where to send the request.
  2. The on-premises contoso.com Exchange Server submits a token request to the Azure Auth Service for an On-Behalf-Of Access Token for contoso.onmicrosoft.com, referencing contoso.onmicrosoft.com's ApplicationUri (which of course it knows because of the creation of the Org Relationship), the SMTP address of the requesting user, and the purpose/intent of the request (Free/Busy in this case). This request is encrypted using the Azure Auth Service's public key and signed using the on-premises organization's private key, thereby proving where the request is coming from.
  3. The Azure Auth Service returns an On-Behalf-Of Access Token to the server in contoso.com, signed with its own private key (to prove where it came from); the On-Behalf-Of Access Token in the payload is encrypted using the public key of contoso.onmicrosoft.com (which the Azure Auth Service has because contoso.onmicrosoft.com provided it when setting up its own Federation Trust).
  4. The on-premises contoso.com Exchange Server then submits that token in a SOAP request to contoso.onmicrosoft.com's /AutoDiscover/AutoDiscover.svc/WSSecurity endpoint (which it had stored in its Org Relationship config for the partner). The connection is anonymous at the HTTP/network layer but conforms to WS-Security norms (see References at the end of this document for details on WS-Security). Note: this step is skipped if TargetSharingEpr is set on the Org Relationship object, as that explicitly specifies the EWS endpoint for the target org.
  5. The contoso.onmicrosoft.com Exchange Server validates the signed and encrypted request. This is done at the Windows layer using the Windows Communication Foundation (WCF) – Exchange just passes the request to the WCF layer, telling it about the keys and issuer information it has based on the setup of the federation trust. Assuming the request passes the WCF sniff test, contoso.onmicrosoft.com's Exchange Server returns the EWS URL to which the Free/Busy request should be submitted. (Don't forget that only the Exchange Servers in contoso.onmicrosoft.com have the private key necessary to decrypt the auth token.)
  6. The request and auth token are then submitted directly from Exchange in contoso.com to the EWS endpoint of Exchange in contoso.onmicrosoft.com.
  7. The same validation of the signed and encrypted request is done again, as it's now hitting a different endpoint on Exchange in contoso.onmicrosoft.com; once that's done, the server sees that this is a free/busy request from contoso.com (again based on the ApplicationUri contained within the token).
  8. The Exchange Server in contoso.onmicrosoft.com extracts the e-mail address of the requesting user, splits the user part from the domain part, and checks the latter against its domain authorization table (based on the Org Relationships configured in the org) to see whether this domain can receive the requested free/busy information. These requests are allowed or denied on a per-domain basis only – if the domain of the requesting user is contained in the Org Relationship, then it's OK to return Free/Busy, and only Default calendar permissions are evaluated.
  9. The server in contoso.onmicrosoft.com responds by providing the free/busy data – or not, if it wasn't authorized to do so.
  10. The on-premises contoso.com server returns the result to the requesting client.

What do you need to allow in through the firewall for this to work, then? You need to allow inbound TCP 443 connections to /autodiscover/autodiscover.svc/* and to /ews/* for the actual requests.

This is key – only the receiving Exchange server has the cert required to decrypt the On-Behalf-Of Access Token, so while you might be OK to unpack the TLS for the connection itself on a load balancer or firewall, the token within it is still encrypted, protecting it from man-in-the-middle attacks. If you were to install the private key and some smarts on a firewall device, you could open it, but all you'd see is a token with values that only make sense to Exchange (the values agreed upon during creation of the Federation Trust). So if you want to verify this token really did come from the Azure Auth Service, all you really need to do is verify the digital signature to ensure it was signed by the Azure Auth Service. When a message is signed, it is nearly impossible to tamper with, though message signing alone does not protect the message content from being seen. Using the signature, the receiver of the SOAP message can know that the signed elements have not changed en route. Anything more than that, such as decrypting the inner token, would require an awful lot of Exchange-specific information, which might lead you to conclude the best place to do this is Exchange.
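If you want to check this machinery from the on-premises side, there is a purpose-built cmdlet that requests tokens from the Azure Auth Service and validates them step by step for a given user (the mailbox below is a placeholder):

# Requests delegation tokens from the Azure Auth Service and validates them
# for the specified on-premises user.
Test-FederationTrust -UserIdentity mary@contoso.com -Verbose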

Now onto OAuth

So firstly, why did we move away from DAuth and switch to using OAuth?

Essentially, we made some architectural changes in the Azure Auth Service, and WCF was falling out of favor – it was no longer the direction Microsoft was taking as the framework for service-oriented applications. We had built something quite custom, and wanted to move to a more open, standards-based model. OAuth is that.

So how does OAuth work at a high level?

At a high level, OAuth uses the same trust broker concept as DAuth: each Exchange organization trusts the Azure Auth Service, and tokens from that service are used to authorize requests, proving their authenticity.

There are several noteworthy differences between DAuth and OAuth.

The first is that OAuth provides the ability for a server holding a requested resource to redirect the client (or server) requesting the data to the trusted issuer of access tokens. It does this when the calling server or client sends an anonymous call with an empty Bearer value in the HTTP Authorization header – this is what tells the receiving server that the client supports OAuth, triggering the redirection response that sends the client to the server that can issue access tokens.

The second thing to note is that we call the Exchange implementation of OAuth for server-to-server auth S2S OAuth 2.0, and we have documented it in detail here. That document explains a lot about what is contained in the token, so if you're interested, it's the document to snuggle up with. As you'll see, we don't use the redirection mentioned above for our server-to-server hybrid traffic, but it's good to know it's there, as it helps in understanding OAuth more broadly.

Here’s an extract directly from the protocol specification (linked to later in this document) which provides a great example of OAuth in practice. In this example, this is the response received when one server tries to access a resource on another server in the same hybrid org.

HTTP/1.1 401 Unauthorized
Server: Fabrikam/7.5
request-id: 443ce338-377a-4c16-b6bc-c169a75f7b00
X-FEServer: DUXYI01CA101
WWW-Authenticate: Bearer client_id="00000002-0000-0ff1-ce00-000000000000", trusted_issuers="00000001-0000-0000-c000-000000000000@*"
WWW-Authenticate: Basic Realm=""
X-Powered-By: ASP.NET
Date: Thu, 19 Apr 2012 17:04:16 GMT
Content-Length: 0

Following this response, the requesting server sends its credentials to the token issuer indicated in the response above (trusted_issuers="00000001-0000-0000-c000-000000000000@*"), an endpoint it knows about because it too has an AuthServer object with that same ID. That token broker authenticates the client and issues access and refresh tokens to the requestor. The requestor then uses the access token to access the resource it originally requested on the server.

Below is an example of this, from the same specification document. Here, the requestor went to the Trusted Issuer referred to above, and that issuer authenticated the requestor and issued an access token allowing it to request the data. The requestor would then use this token to access the resource it originally requested on the remote server.

This is an example of a JWT (JSON Web Token) actor token issued by an STS. For more information about the claim values contained in this security token, see section 2.2 of the specification document.

actor:
{
"typ":"JWT",
"alg":"RS256",
"x5t":"XqrnFEfsS55_vMBpHvF0pTnqeaM"
}.{
"aud":"00000002-0000-0ff1-ce00-000000000000/contoso.com@b84c5afe-7ced-4ce8-aa0b-df0e2869d3c8",
"iss":"00000001-0000-0000-c000-000000000000@b84c5afe-7ced-4ce8-aa0b-df0e2869d3c8",
"nbf":"1323380070",
"exp":"1323383670",
"nameid":"00000002-0000-0ff1-ce00-000000000000@b84c5afe-7ced-4ce8-aa0b-df0e2869d3c8",
"identityprovider":"00000001-0000-0000-c000-000000000000@b84c5afe-7ced-4ce8-aa0b-df0e2869d3c8"
}

Back to the differences between DAuth and OAuth: a notable one is that OAuth tokens are not encrypted. The token is also passed as header information, not as part of the body. There is therefore a reliance upon SSL/TLS (hereafter just referred to as TLS) to protect the traffic in transport.

And the last thing to note is that we only use this flow for on-premises to Exchange Online (and vice-versa) relationships; this isn’t something we use for partner to partner relationships. So if you are hybrid with Exchange Online and have Partner to Partner Org Relationships too, you are using both DAuth and OAuth.

So how does OAuth work in the context of Exchange hybrid? Let's start with what's needed to set up the relationship to support this flow. The steps are documented at https://technet.microsoft.com/en-us/library/dn594521(v=exchg.150).aspx, but all of this is now performed automatically by the newest versions of the Hybrid Configuration Wizard (HCW) – and even though the wizard is the only right way to do this, we're going to walk through what it does so we understand what is really going on.

The HCW first adds a new AuthServer object to the on-premises AD/Exchange org, specifying the Azure OAuth Service endpoint to use. The AuthServer object is the OAuth equivalent of the Federation Trust object; it stores such things as the thumbprint of the Azure Auth Service's signing cert, the token issuing endpoint, the AuthMetadataUrl (which is where all this information comes from anyway – a bit of a circular reference, isn't it?) and so on.

The HCW process creates a self-signed authorization certificate, the public key of which is passed to the Azure Auth Service and used by it to verify that token requests from the org are authentic. This, the on-premises AppID, and other relevant information are stored in the AuthConfig object – the OAuth equivalent of the FederationTrust object we had in DAuth.
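You can inspect both objects on-premises; a quick sketch:

# The trust broker definition the HCW created (token endpoint, metadata URL, etc.)
Get-AuthServer | Format-List Name,IssuerIdentifier,TokenIssuingEndpoint,AuthMetadataUrl,Enabled

# The org's own OAuth identity: AppID (ServiceName) and current signing cert
Get-AuthConfig | Format-List ServiceName,CurrentCertificateThumbprint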

The HCW registers the well-known AppID for Exchange on-premises, the certificate details, and all the on-premises URLs Exchange Online might use for the connection as Service Principal Names in the Azure Auth Service. This simply tells the Azure Auth Service that Exchange Online may request a token for those URLs and that AppID, which prevents tokens being requested for arbitrary URLs. Exchange Online's URLs are managed automatically with the Azure Auth Service, so there's no need for the admin to add any URLs for Exchange Online. Having both Exchange Online and on-premises use the same AppID is part of the magic of why, from an auth point of view, there is no difference between the two environments for the Exchange Servers within them.
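From the tenant side, you can see what got registered using the MSOnline module; a sketch, where the AppID is the well-known Exchange AppID seen in the token examples above:

# List the URLs registered on the well-known Exchange service principal;
# your on-premises namespaces should appear here after the HCW has run.
Connect-MsolService
(Get-MsolServicePrincipal -AppPrincipalId 00000002-0000-0ff1-ce00-000000000000).ServicePrincipalNames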

Then the HCW creates the IntraOrganizationConnector object, specifying the domains in the other organization and the DiscoveryEndpoint (AutoDiscover) URL used to reach them.

Note the name of this object – Intra… – it is for the connection between on-premises Exchange and Exchange Online for the same customer, not for partner-to-partner communication.
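Created by hand, it would look something like this (the domains and endpoint shown are illustrative; the HCW fills in the real values for your tenant):

New-IntraOrganizationConnector -Name "HybridIOC" `
    -TargetAddressDomains "contoso.onmicrosoft.com","contoso.mail.onmicrosoft.com" `
    -DiscoveryEndpoint "https://autodiscover-s.outlook.com/autodiscover/autodiscover.svc" `
    -Enabled $true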

So, we’re set up – how does it work when someone wants to go look at the free/busy of someone on the other side of that hybrid relationship?

hybridauth3

  1. Mary on-premises makes a Free/Busy request for Joe, a user in the contoso.onmicrosoft.com tenant.
  2. The on-premises Exchange Server determines that the target user is external and looks up an IntraOrganizationConnector to get the AutoDiscover endpoint for the external contoso.onmicrosoft.com organization (matching on SMTP domain).
  3. The on-premises Exchange Server makes an anonymous request to that AutoDiscover endpoint and the server responds with a 401 challenge, containing the ID for the trusted issuer from which it will accept tokens.
  4. The on-premises Exchange Server requests an Application Token from the Azure Auth Service (the trusted issuer). Key: this token is for Exchange@Contoso.com and can be cached – if another on-premises user does a Free/Busy request for the same external organization, there is no round trip to Azure AD; the cached token is used.
    1. It does this by sending a self-issued JSON (JWT) security token, asserting its identity and signed with its private key. The security token request contains the aud, iss, nameid, nbf, exp claims. The request also includes a resource parameter and a realm parameter. The value of the resource parameter is the Uniform Resource Identifier (URI) of the server.
    2. Azure Auth Service validates this request using the public key of the security token provided by the client.
    3. Azure Auth Service then responds to the client with a server-to-server security token that is signed with Azure Auth Service's private key. The security token contains the aud, iss, nameid, nbf, exp, and identityprovider claims.
  5. The on-premises Exchange Server then performs an AutoDiscover request using this token and retrieves the EWS endpoint for the target organization.
  6. The on-premises server then repeats step 4 to request a token for the new audience URI, the EWS endpoint (unless this happens to be one and the same, which it will never be for Exchange Online users, but might be for on-premises users).
  7. The on-premises server then submits that new token to the EWS endpoint, requesting the Free/Busy.
  8. Exchange Online authenticates the Access Token by lookup of the Application Identity and validates the server-to-server security token by checking the values of the aud, iss, and exp claims and the signature of the token using the public key of the Azure Auth Service.
  9. Exchange Online verifies that Mary is allowed to see Joe's Free/Busy. Unlike DAuth, OAuth allows granular calendar permissions, as the identity of the requesting user – not just the domain – is available to Exchange, and so all permissions are evaluated.
  10. Free/Busy info is returned to the client.

What do you need to allow in through the firewall for this flow to work, then? You need to allow inbound TCP 443 connections to /autodiscover/autodiscover.svc/* for AutoDiscover to work correctly and to /ews/* for the actual requests.
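To validate the whole OAuth flow from on-premises, there is also a purpose-built cmdlet (the mailbox below is a placeholder):

# Requests an app token from the Azure Auth Service and exercises it against
# the Exchange Online EWS endpoint on behalf of the given user.
Test-OAuthConnectivity -Service EWS -TargetUri https://outlook.office365.com/ews/exchange.asmx -Mailbox mary@contoso.com -Verbose | Format-List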

Tokens are signed, so they cannot be modified – i.e., the audience URI cannot be changed by some man-in-the-middle without invalidating the signature. But as the tokens exist in the clear in the packet header, they could be copied and used by someone else against the same endpoint if they have access to them, which is why end-to-end TLS is key, and why only trusted devices should be able to perform TLS decryption/re-encryption.

So just as with DAuth, if you want to put a device between Exchange on-premises and Exchange Online, you have some things to consider. You can do TLS termination if you want to, and if you want to verify the signing of the tokens to confirm they came from the Azure Auth Service, you can do that too, but there's not much else you can do to the traffic without breaking it (and you need to be careful to protect it, as the token could be re-used – though only against the original audience URI; changing that parameter or any of the content would invalidate the digital signature). You can still restrict source IP address ranges at the network layer if you want to, but given that tokens are signed by a private key only the Azure Auth Service holds, you are safe to assume that a properly signed token came from only one place. So, manage the security of the certificates on your Exchange Servers and trust that Exchange won't do anything with a modified or incorrectly signed token other than reject it.

What about mailbox moves?

Another type of traffic that can take place between Exchange Online and Exchange on-premises is a mailbox move – and that’s the one type of traffic that does not follow the flows described above.

The Mailbox Replication Service (MRS) is used for the migration of mailboxes between on-premises and Exchange Online. When the admin creates the migration endpoint required to enable this feature, they must provide credentials of a user with permission to invoke MRS moves – those credentials are used in the connection to on-premises, which is TLS-secured and uses NTLM auth. So you can use pre-auth for that connection to /ews/mrsproxy.svc, and because NTLM is used, the credentials never go over the wire.
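Creating that endpoint from the Exchange Online side looks roughly like this (the server name and account are placeholders):

# The supplied credential is what MRS uses for the NTLM-authenticated,
# TLS-secured connection to /ews/mrsproxy.svc on-premises.
New-MigrationEndpoint -ExchangeRemoteMove -Name "Hybrid Endpoint" -RemoteServer mail.contoso.com -Credentials (Get-Credential contoso\migrationadmin)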

Hopefully that has cleared up quite a few of the questions we usually get, but just in case that's all a bit TL;DR, here's the short(er) version:

How do we know the traffic is from Exchange Online? Can it be spoofed?

It can only be spoofed if the certificates used to sign (and in the case of DAuth, encrypt) the traffic are compromised. So, that’s why it’s vital to secure your servers and admin accounts using well documented processes. If your servers or admins are compromised, the doors are wide open to all kinds of things.

Again, to reiterate: in DAuth the access tokens are encrypted as well as signed, so the token itself can't be read without the correct private key. With OAuth the token can be read, but if the signature is valid, then we know where the traffic came from.

Can I scope the traffic so only users from my tenant can use this communication path?

Users from your tenant aren't using this server-to-server communication – it's Exchange Online and Exchange on-premises using it, performing actions on behalf of the users. So, can you scope it to just those servers? We do document the namespaces and IP address ranges these requests will be coming from here, but given what we've covered in this article, we now know Exchange can tell whether traffic is authentic and won't do anything with traffic it can't trust. (We put our money where our mouth is on this: imagine how many Exchange Servers we have in Exchange Online, with no source IP scoping possible, and how many connections we handle every minute of every day – that's why we have to write and rely on secure code to protect us, and that same code exists in Exchange on-premises, assuming you keep it up to date.)

Can I pre-authenticate the traffic? Can I check the token's validity against some endpoint?

You can't pre-authenticate the traffic using HTTP headers as you would for Outlook or ActiveSync, as the auth isn't done that way. Authentication is provided by proving the authenticity of the request's signing. If we think about authentication as proving who someone is, the digital signature itself proves who is making the request – only the possessor of the private key used to sign the traffic can sign the requests. So we validate, and thereby authenticate, the requests received from your on-premises servers coming in to Exchange Online, because we know (and trust) only you have the private key used to sign them. The Azure Auth Service looks after the private key it uses to sign our requests (very carefully, as you might expect). Can you verify the signing? To directly quote this terrific blog post: "signature verification key and issuer ID value are often available as part of some advertising mechanism supported by the authority, such as metadata & discovery documents. In practice, that means that you often don't need to specify both values – as long as your validation software know how to get to the metadata of your authority, it will have access to the key and issuer ID values." So, you can verify the signing is good, and you could potentially also choose to additionally validate:

  1. That the token is a valid JWT
  2. That the iss claim (in the signed actor token) is correct – this is a well-known GUID @ tenant ID
  3. Checking the actor is Exchange (AppId claim) – this is also a well-known appID value @ tenant ID
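As a minimal sketch of those checks, here is one way to crack open a captured token and eyeball the claims from PowerShell. This assumes $jwt holds the raw token string, and it inspects claim values only – it does not verify the signature:

# A JWT is three Base64Url segments: header.payload.signature
# $jwt = "<captured bearer token>"
function ConvertFrom-Base64Url([string]$s) {
    $s = $s.Replace('-','+').Replace('_','/')
    switch ($s.Length % 4) { 2 { $s += '==' } 3 { $s += '=' } }
    [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($s))
}
$header, $payload = $jwt.Split('.')[0..1] | ForEach-Object { ConvertFrom-Base64Url $_ }
$claims = $payload | ConvertFrom-Json
$claims.iss     # item 2 above: the well-known issuer GUID @ tenant ID
$claims.nameid  # item 3 above: the actor's AppID @ tenant ID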

Can I use Multi-Factor Auth (MFA) to secure this traffic? My security policy says I must enforce MFA on anything coming in from the Internet

Let's first agree upon the definition of MFA, as that's a term people throw around a lot, often incorrectly. MFA is a security mechanism or system that requires the caller to provide more than one form of authentication, from different providers, to verify their identity – for example, credentials and a certificate, or a certificate and a fingerprint. Another way to describe MFA is with a set of three attributes: something you know, something you have, and something you are. Something you know – a password; something you have – a certificate; something you are – a fingerprint.

So, now that we know MFA is a general term used to describe how one party authenticates to another, and isn't an actual 'thing' you can configure, let's look at the hybrid traffic with it in mind.

In both DAuth and OAuth the digital signing addresses the something you have aspect, the signing can only have been done by Azure Auth Service, as it’s the only possessor of the private key used for the signing.

The something you are attribute isn't something the flow can provide – Azure Auth isn't a person with fingers or DNA – but the something you know is arguably what Exchange Online puts in the request: the claims in an OAuth token, or the key values and attributes within a DAuth token. So one could make a case that this traffic already uses MFA. This might not be the kind of MFA your security guy can buy as an off-the-shelf solution with a token keyfob, but if you get back to what MFA is, rather than how it compares to a solution for client-to-server traffic, you'll have a more meaningful conversation.

Can I SSL terminate the connection and inspect it and then re-encrypt it?

Yes, you can terminate the SSL/TLS, but 'inspecting it' is potentially a can of worms if 'inspecting' results in 'modifying'. You can't inspect a DAuth token without decrypting it – and what exactly are we inspecting it for? To check that the issuer, the audience, and so on are correct? OK, let's do that – but if the signing is intact, then they must be correct. All you need to do is verify the signature matches that of the Azure Auth Service; if you can do that, you don't need to inspect the content, as it will be valid. Whatever happens, you don't want to tinker with the headers, or you'll invalidate the signature, and then Exchange (or more precisely, Windows) will reject it.

Are these connections anonymous? Authenticated? Authorized?

As previously explained, the traffic does not carry authentication headers as such but is instead authenticated using digital signing of the requests, and authorization is done by the code on the server receiving the request. Bob is asking to see Mary's free/busy – can he? Yes or no. That's authorization.

Are any of these connections or requests insecure or untrustworthy?

Microsoft does not consider any of the flows discussed in this article to be insecure. We were very diligent when designing and implementing them to secure the traffic and the tokens using all available means, and we're documenting this in detail here to clear up any doubts and to fully explain why it's secure and trustworthy to configure Exchange hybrid.

How do we prevent token replay? Token modification?

Token replay is potentially possible with any token-based authentication and authorization system, as the token is used in place of credentials at the time of accessing a resource. DAuth has an advantage in this space, as its tokens are encrypted, but the general principle for any such authentication scheme is to protect all tokens from interception and misuse – and that's where TLS comes in, along with only allowing termination of TLS on devices you trust, and not enabling man-in-the-middle attacks by configuring computers, or teaching users, to ignore certificate warnings.

How do I know if I’m using DAuth or OAuth and can I choose which to use?

Exchange will always try OAuth first, by looking for an enabled IntraOrganizationConnector matching the domain name of the target user. Only if no such connector exists, or if there is one but it is disabled, do we then look for the domain name in an OrganizationRelationship. And if there isn't one of those either, we then look for the domain name in the Availability Address Space configuration.
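A quick way to see which path applies for a given target domain is to list each config in that same order and look for the domain:

# Checked in this order: IntraOrganizationConnector (OAuth), then
# OrganizationRelationship (DAuth), then Availability Address Space.
Get-IntraOrganizationConnector | Format-List Name,TargetAddressDomains,Enabled
Get-OrganizationRelationship | Format-List Name,DomainNames,Enabled
Get-AvailabilityAddressSpace | Format-List ForestName,AccessMethod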

Remember OAuth is only for on-premises <-> Exchange Online users, so you might very well end up with both being used if you are both hybrid with Exchange Online and have partner relationships with other organizations.

Know this, though: the HCW will always try to enable OAuth in your org if it can, because we want our customers to use OAuth for the reasons previously explained. If you disable the IntraOrganizationConnector and then re-run the HCW, it will be re-enabled if your topology can support it.

Well done for making it this far. I hope you found this useful – if not today, then at some point when you have to explain to some security guy why it's OK to go hybrid.

Please do provide comments or ask questions if you want to, and if you want to read more, here's a list of articles I found helpful while putting this together.

References

Particular thanks for helping with this article go to Matthias Leibmann and Timothy Heeney for making sure it was technically accurate, and to numerous others who helped it make sense, with mostly correct grammar.

Greg Taylor
Principal PM Manager
Office 365 Customer Experience

Announcing Original Folder Item Recovery

Cumulative Update 6 (CU6) for Exchange Server 2016 will be released soon™, but before that happens, I wanted to make you aware of a behavior change in item recovery that is shipping in CU6.  Hopefully this information will aid you in your planning, testing, and deployment of CU6.

Item Recovery

Prior to Exchange 2010, we had the Dumpster 1.0, which was essentially a view stored per folder. Items in the dumpster stayed in the folder where they were soft-deleted (shift-delete or delete from Deleted Items) and were stamped with the ptagDeletedOnFlag flag. These items were special-cased in the store to be excluded from normal Outlook views and quotas. This design also meant that when a user wanted to recover the item, it was restored to its original folder.

With Exchange 2010, we moved away from Dumpster 1.0 and replaced it with the Recoverable Items folder. I discussed the details of that architectural shift in the article, Single Item Recovery in Exchange 2010. The Recoverable Items architecture created several benefits: deleted items moved with the mailbox, deleted items were indexable and discoverable, and the design facilitated both short-term and long-term data preservation scenarios.

As a reminder, the following actions can be performed by a user:

  • A user can perform a soft-delete operation where the item is deleted from an Inbox folder and moved to the Deleted Items folder. The Deleted Items folder can be emptied either manually by the user, or automatically via a Retention Policy. When data is removed from the Deleted Items folder, it is placed in the Recoverable Items\Deletions folder.
  • A user can perform a hard-delete operation where the item is deleted from an Inbox folder and moved to the Recoverable Items\Deletions folder, bypassing the Deleted Items folder entirely.
  • A user can recover items stored in the Recoverable Items\Deletions folder via recovery options in Outlook for Windows and Outlook on the web.

However, this architecture has a drawback – items cannot be recovered to their original folders.

Many of you have voiced your concerns around this limitation in the Recoverable Items architecture, through various feedback mechanisms, like at Ignite 2015 in Chicago where we had a panel that included the Mailbox Intelligence team (those who own backup, HA, DR, search, etc.). Due to your overwhelming feedback, I am pleased to announce that beginning with Exchange 2016 CU6, items can be recovered to their original folders!

How does it work?

  1. When an item is deleted (soft-delete or hard-delete), it is stamped with the LastActiveParentEntryID (LAPEID) MAPI property (property ID 0x348A). Because this uses the folder ID, it does not matter if the folder is later moved within the mailbox's hierarchy or renamed.
  2. When the user attempts a recovery action, the LAPEID is used as the destination of the move.

The LAPEID stamping mechanism has been in place since Exchange 2016 Cumulative Update 1. This means that as soon as you install CU6, your users can recover items to their original folders!
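If you're curious, here is a rough sketch of peeking at the LAPEID using the EWS Managed API from PowerShell. The DLL path and mailbox address are assumptions for illustration; this simply lists items in Recoverable Items\Deletions and prints any LAPEID values it finds:

# Load the EWS Managed API (path varies by install) and bind to the mailbox
Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\2.2\Microsoft.Exchange.WebServices.dll"
$svc = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService
$svc.UseDefaultCredentials = $true
$svc.AutodiscoverUrl("mary@contoso.com", { $true })

# LAPEID is extended MAPI property 0x348A (binary)
$lapeid = New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x348A, [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::Binary)

# Look at the first few items in Recoverable Items\Deletions
$folder = [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::RecoverableItemsDeletions
$view = New-Object Microsoft.Exchange.WebServices.Data.ItemView(5)
$view.PropertySet = New-Object Microsoft.Exchange.WebServices.Data.PropertySet([Microsoft.Exchange.WebServices.Data.BasePropertySet]::FirstClassProperties, $lapeid)
foreach ($item in $svc.FindItems($folder, $view)) {
    $value = $null
    if ($item.TryGetProperty($lapeid, [ref]$value)) {
        "{0} -> LAPEID {1}" -f $item.Subject, [System.BitConverter]::ToString($value)
    }
}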

Soft-Deletion:

ItemRecovery

Hard-Deletion:

ItemHardRecovery

Are there limitations?

Yes, there are limitations.

First, to use this functionality, the user's mailbox must be on a Mailbox server that has CU6 installed. The user must also use Outlook on the web to recover to the original folder; neither Outlook for Windows nor Outlook for Mac supports this functionality today.

If an item does not have an LAPEID stamped, the item will be recovered to its folder-type origin – Inbox for mail items, Calendar for calendar items, Contacts for contact items, and Tasks for task items. How could an item not have an LAPEID? Well, if the item was deleted before CU1 was installed, it won't have one.

And lastly, this feature does not recover deleted folders; it only recovers items to folders that still exist within the user's mailbox hierarchy. Once a folder is deleted, recovery will be to the folder-type origin for that item.

Summary

We hope you can take advantage of this long sought-after feature. We continue to look at ways we can improve user recovery actions and minimize the need for third-party backup solutions. If you have questions, please let us know.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

.NET Framework 4.7 and Exchange Server

Update 6/15/2017: Added a clarification that .NET Framework 4.7 has shipped and that we are still validating this release with Exchange Server.

We wanted to post a quick note to call out that our friends in .NET have released the .NET Framework 4.7 to Windows Update for client and server operating systems it supports.

We are in the process of validating Exchange Server on the .NET Framework 4.7, but the work is not yet complete. We will be sure to release additional information and update the Exchange supportability matrix when .NET Framework 4.7 is supported with Exchange Server. We are working with the .NET team to ensure that Exchange customers have a smooth transition to .NET Framework 4.7, but in the meantime, please delay this particular .NET update on your Exchange servers. Information on how to accomplish this block can be found in article 4024204, How to temporarily block the installation of the .NET Framework 4.7.

It’s too late, I installed it. What do I do now?

If .NET Framework 4.7 was already installed, we recommend you roll back to .NET Framework 4.6.2. Here are the steps:

Note: These instructions assume that, prior to the upgrade to .NET Framework 4.7, you were running the latest Exchange 2016 or Exchange 2013 Cumulative Update (as of the time this article was drafted) with .NET Framework 4.6.2. If you were running a different version of the .NET Framework or an older version of Exchange before the upgrade, please refer to the Exchange Supportability Matrix to validate which version of the .NET Framework you need to roll back to, and adjust the steps below accordingly. This may mean using different offline/web installers, or looking for different names in Windows Update, if you are rolling back to something other than .NET Framework 4.6.2.

1. If the server has already updated to .NET Framework 4.7 and has not rebooted yet, then reboot now to allow the installation to complete.

2. Stop all running services related to Exchange.  You can run the following cmdlet from Exchange Management Shell to accomplish this:

# Stop every running service that Test-ServiceHealth reports for Exchange
(Test-ServiceHealth).ServicesRunning | %{Stop-Service $_ -Force}

3. Depending on your operating system you may be looking for slightly different package names to uninstall .NET Framework 4.7.  Uninstall the appropriate update.  Reboot when prompted.

  • On Windows 7 SP1 / Windows Server 2008 R2 SP1, you will see the Microsoft .NET Framework 4.7 as an installed product under Programs and Features in Control Panel.
  • On Windows Server 2012 you can find this as Update for Microsoft Windows (KB3186505) under Installed Updates in Control Panel.
  • On Windows 8.1 / Windows Server 2012 R2 you can find this as Update for Microsoft Windows (KB3186539) under Installed Updates in Control Panel.
  • On Windows 10 Anniversary Update and Windows Server 2016 you can find this as Update for Microsoft Windows (KB3186568) under Installed Updates in Control Panel.

4. After rebooting, check the version of the .NET Framework and verify that it is again showing version 4.6.2.  You may use this method to determine what version of the .NET Framework is installed on a machine. If it shows a version prior to 4.6.2, go to Windows Update, check for updates, and install .NET Framework 4.6.2.  If .NET Framework 4.6.2 is no longer being offered via Windows Update, then you may need to use the Offline Installer or the Web Installer. Reboot when prompted.  If the machine does show .NET Framework 4.6.2, proceed to step 5.
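One quick way to check from PowerShell is the Release value in the registry (394802/394806 correspond to 4.6.2, and 460798/460805 to 4.7):

# .NET Framework 4.x version is advertised by this registry value
$release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release
if ($release -ge 460798) { ".NET Framework 4.7 or later ($release)" }
elseif ($release -ge 394802) { ".NET Framework 4.6.2 ($release)" }
else { "Older than 4.6.2 ($release)" }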

5. After confirming .NET Framework 4.6.2 is again installed, stop Exchange services using the command from step 2.  Then, run a repair of .NET 4.6.2 by downloading the offline installer, running setup, and choosing the repair option.  Reboot when setup is complete.

6. Apply any security updates specifically for .NET 4.6.2 by going to Windows update, checking for updates, and installing any security updates found.  Reboot after installation.

7. After reboot verify that the .NET Framework version is 4.6.2 and that all security updates are installed.

8. Follow the steps in article 4024204, referenced above, to block future automatic installations of .NET Framework 4.7.

The Exchange Team

HPE intros new ProLiant MicroServer Gen10, steps backwards

HPE unveiled the ProLiant MicroServer Gen10 this week at HPE Discover, and while a refresh was long overdue, the MicroServer has returned to its G7 roots with the new model.  HPE never released a Gen9 model of the MicroServer, and this Gen10 feels more Gen9 than new and shiny.  The model notably returns to AMD processors, switches to a SoC motherboard, and removes the iLO from the platform.  The MicroServer was once touted for remote-office/branch-office (ROBO) use cases, but the absence of IPMI, critical in ROBO, makes me wonder where this is targeted.

What’s Different and New?

The new MicroServer Gen10 features AMD Opteron X3000 series processors, with 2-core or 4-core options and turbo clock speeds up to 3.4GHz.  It scales up to 32GB of memory, double the previous Gen8 model.  The motherboard switches from a socketed Intel design to a System-on-a-Chip design from AMD, meaning the processor cannot be field-upgraded.  The new motherboard design offers two PCIe slots, up from one in the previous Gen8 and G7 models.

For storage, the Gen10 retains 4 large-form-factor SATA drive slots, with the same limit of 4 x 4TB SATA drives in those slots.  With SSD advancements, the new model adds the ability to replace the DVD drive with an SSD in a 5th slot near the top of the cube – though the absence of virtual media through iLO complicates OS installations.

For video, the new model includes two DisplayPort outputs, possibly signaling the intended use case for the system: video applications, perhaps video walls and the like.  Upgraded GPU capabilities also accompany the new model, but both of these trade-offs mean the MicroServer is less of a general-use server and more a machine for uses where graphics are needed – not home-lab users or general ROBO virtualization.

HP released the MicroServer Gen8 back in 2013/2014 with a strange cubical form factor, and the new Gen10 model retains this design – in fact, a Gen10 can stack on a Gen8, or vice versa.  The bezel now comes in a black finish instead of the Gen8's default silver look.

HPE is also talking about ClearOS on the MicroServer Gen10.  ClearOS is a Linux distribution based on CentOS and Red Hat, with a marketplace of application solutions across 6 categories – Cloud, Gateway, Server, Networking, System and Reports – according to the ClearOS website.  HPE is touting it as a cost-effective alternative in the SMB market, and showed it off on the MicroServer Gen10 during HPE Discover.

Where does it miss?

With the introduction of Gen10, HPE is touting the security of its ProLiant servers – billing them as the 'most secure industry standard servers.'  With the MicroServer, the absence of iLO means those security features do not extend to it.  The 'silicon root of trust' utilizes the iLO 5 silicon to establish NIST-compliant security, so even the inclusion of an iLO 4 would not have enabled this capability.

The iLO inconsistency is not a first for HPE – a couple of the lower-end Gen8 models included iLO 3 chips rather than the new iLO 4 that shipped with Gen8, and some of the Gen9 models are missing an iLO altogether.  The compromise is certainly targeted at bringing costs down, but it's a trade-off that I'm sad to see.  Even the original MicroServer G7 had the option of a remote console card in the PCI slot – a waste of a PCI slot in my opinion, but at least the option was there for anyone who found it important.

If HPE wants to follow a SoC motherboard model for this line, I hope to see a Xeon-D variant in the future.  A socketed model would be better, but with the wide support for Intel's SoC models, I hope we at least get that option down the road.

I used a ProLiant MicroServer Gen8 in my home lab for years.  I sold it during my move, but it is truly one of the best home-lab models I have found, mostly because I prefer a strong and capable IPMI.  That's why I'm taking the absence of iLO in this new model so hard.  The community waited a long time for this refresh, while the home-lab options from SuperMicro are abundant, even with a sub-par IPMI interface.