Saturday, September 9, 2017

Update: Securing Citrix NetScaler VPX to score A+ rating on SSL Labs

Those who have used my previous blog post:

Securing Citrix NetScaler VPX to score A+ rating on SSL Labs
http://terenceluk.blogspot.com/2016/06/securing-citrix-netscaler-vpx-to-score.html

… to score an A+ on Qualys SSL Labs (https://www.ssllabs.com/ssltest/) may have noticed that they are now scoring an A- due to some minor changes to the criteria. 

There is no support for secure renegotiation. Grade reduced to A-.

The server does not support Forward Secrecy with the reference browsers. Grade reduced to A-.

image

The required changes to the configuration are minimal, so this blog post demonstrates the tweaks required to bring the score back to an A+.

The version of the NetScaler VPX I’ll be using for this demonstration is:

NS11.1: Build 49.16.nc

image

Step #1 – Confirm that the SSL certificate uses a SHA2/SHA256 signature

Ensure that the SSL certificate used to secure the site uses a SHA2/SHA256 signature, and that the root and intermediate certificates do as well.

image

Step #2 – Confirm that SSLv3 is disabled and TLSv1.2 is enabled

With the appropriate certificate assigned, begin by ensuring that SSLv3 is disabled and TLSv1.2 is enabled in the SSL Parameters of the virtual server:

image
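For those who prefer the CLI, the equivalent change can be made with a command along these lines, where the virtual server name is a placeholder for your own:

set ssl vserver <vServerName> -ssl3 DISABLED -tls1 ENABLED -tls11 ENABLED -tls12 ENABLED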

Step #3 – Update Custom Ciphers

The cipher list in my previous post is outdated, so either remove the existing configuration and append the new ciphers, or create a new cipher group with the following ciphers:

TLS1.2-ECDHE-RSA-AES256-GCM-SHA384
TLS1.2-ECDHE-RSA-AES128-GCM-SHA256
TLS1.2-ECDHE-RSA-AES-256-SHA384
TLS1.2-ECDHE-RSA-AES-128-SHA256
TLS1-ECDHE-RSA-AES256-SHA
TLS1-ECDHE-RSA-AES128-SHA
TLS1.2-DHE-RSA-AES256-GCM-SHA384
TLS1.2-DHE-RSA-AES128-GCM-SHA256
TLS1-DHE-RSA-AES-256-CBC-SHA
TLS1-DHE-RSA-AES-128-CBC-SHA
TLS1-AES-256-CBC-SHA
TLS1-AES-128-CBC-SHA
SSL3-DES-CBC3-SHA

The following commands can be used to create a new custom cipher group containing the required ciphers:

add ssl cipher Custom-VPX-Cipher

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-ECDHE-RSA-AES256-GCM-SHA384

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-ECDHE-RSA-AES128-GCM-SHA256

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-ECDHE-RSA-AES-256-SHA384

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-ECDHE-RSA-AES-128-SHA256

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-ECDHE-RSA-AES256-SHA

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-ECDHE-RSA-AES128-SHA

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-DHE-RSA-AES256-GCM-SHA384

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1.2-DHE-RSA-AES128-GCM-SHA256

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-DHE-RSA-AES-256-CBC-SHA

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-DHE-RSA-AES-128-CBC-SHA

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-AES-256-CBC-SHA

bind ssl cipher Custom-VPX-Cipher -cipherName TLS1-AES-128-CBC-SHA

bind ssl cipher Custom-VPX-Cipher -cipherName SSL3-DES-CBC3-SHA

With the custom cipher group created, ensure that the virtual server is configured to use it:

image
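The same binding can be done from the CLI; you may also want to unbind the DEFAULT cipher group first. The virtual server name below is a placeholder:

unbind ssl vserver <vServerName> -cipherName DEFAULT

bind ssl vserver <vServerName> -cipherName Custom-VPX-Cipher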

Step #4 – Configure Deny SSL Renegotiation to FRONTEND_CLIENT

Navigate to Traffic Management > SSL > Change advanced SSL settings:

image

Change the Deny SSL Renegotiation setting from ALL to FRONTEND_CLIENT:

image

image

Alternatively, the following command can be executed to change the configuration:

set ssl parameter -denySSLReneg FRONTEND_CLIENT

image
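The change can then be verified with the following command, which should list Deny SSL Renegotiation as FRONTEND_CLIENT:

show ssl parameter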

-------------------------------------------------------------------------------------------------------------------------

With the adjustments above in place, you should now score an A+:

image

Remember to save the configuration!
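From the CLI, the configuration can be saved with:

save ns config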

Thursday, August 24, 2017

Attempting to mount Exchange Server 2016 DAG database with 1 of 2 nodes down throws the error: “Error: An Active Manager operation failed. Error: An Active Manager operation encountered an error. To perform this operation, the server must be a member of a database availability group, and the database availability group must have quorum. Error: Automount consensus not reached (Reason: ConcensusUnanimity does not allow auto mount. (IsAllNodesUp: False)).”

Problem

You have two Exchange 2016 mailbox servers configured as a DAG and a third server acting as the file share witness.  One of the mailbox servers experiences an issue and goes down, so the remaining mailbox server continues to service mailbox requests with the databases mounted.  The remaining operational server is then restarted, and you immediately notice that the databases are not mounted after the restart: 

image

Attempting to mount the databases with the Mount-Database command throws the following error:

[PS] C:\Windows\system32>Mount-Database contoso-edb16-01
Failed to mount database "contoso-edb16-01". Error: An Active Manager operation failed. Error: An Active Manager operation encountered an error. To perform this operation, the server must be a member of a database availability group, and the database availability group must have quorum. Error: Automount consensus not reached (Reason: ConcensusUnanimity does not allow auto mount. (IsAllNodesUp: False)). [Server: contoso-MBX16-01.contoso.NET]
    + CategoryInfo          : InvalidOperation: (contoso-EDB16-01:ADObjectId) [Mount-Database], InvalidOperationException
    + FullyQualifiedErrorId : [Server=contoso-MBX16-01,RequestId=ae45aaee-8113-4908-a0fd-34e3d4a032a2,TimeStamp=17/08/2017 12:20:16] [FailureCategory=Cmdlet-InvalidOperationException] A5CACA44,Microsoft.Exchange.Management.SystemConfigurationTasks.MountDatabase
    + PSComputerName        : contoso-mbx16-01.contoso.net

[PS] C:\Windows\system32>Get-DatabaseAvailabilityGroup

image

Executing the Get-DatabaseAvailabilityGroup cmdlet displays the following message:

Warning: Unable to get Primary Active Manager information due to an Active Manager call failure. Error: An Active Manager operation failed. Error: An Active Manager operation encountered an error. To perform this operation, the server must be a member of a database availability group, and the database availability group must have quorum. Error: Automount consensus not reached (Reason: ConcensusUnanimity does not allow auto mount. (IsAllNodesUp: False)).

image

Executing the Get-MailboxDatabaseCopyStatus * cmdlet indicates the status of the mailbox databases in the DAG as Unknown:

image

Solution

The reason the databases would not automount, and manually mounting them failed, is that the DAG has Datacenter Activation Coordination (DAC) mode enabled, which forces starting DAG members to acquire permission before mounting any mailbox databases.  In the example above, the DAG is unable to achieve quorum with the second node down, so the DAG is not started and the databases cannot be mounted.  If you are sure that the second node is down, as in the example above, you can manually start the DAG with the cmdlet:

Start-DatabaseAvailabilityGroup -Identity <DAG NAME> -MailboxServer <MailboxServerName>

image

Once the DAG has been started, the status of the mailbox databases should change from Unknown to Dismounted, and issuing the Mount-Database cmdlet will now successfully mount the databases:

image
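Putting it together, a minimal sketch of the recovery sequence looks like the following; the DAG name DAG01 is hypothetical, while the server and database names are taken from the example above:

# Confirm DAC mode is enabled on the DAG (DAG01 is a hypothetical name)
Get-DatabaseAvailabilityGroup DAG01 | Format-List Name,DatacenterActivationMode

# Manually start the DAG using the surviving member
Start-DatabaseAvailabilityGroup -Identity DAG01 -MailboxServer contoso-MBX16-01

# The copy status should change from Unknown to Dismounted; then mount the database
Get-MailboxDatabaseCopyStatus *
Mount-Database contoso-edb16-01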

The following TechNet blog posts provide a more in-depth explanation of DAG and DAC:

Part 1: My databases do not automatically mount after I enabled Datacenter Activation Coordination
https://blogs.technet.microsoft.com/timmcmic/2012/05/21/part-1-my-databases-do-not-automatically-mount-after-i-enabled-datacenter-activation-coordination/


Part 5: Datacenter Activation Coordination: How do I force automount consensus?
https://blogs.technet.microsoft.com/timmcmic/2013/01/27/part-5-datacenter-activation-coordination-how-do-i-force-automount-consensus/

Wednesday, August 23, 2017

Attempting to export an Exchange Server mailbox to PST throws the error: “Couldn’t locate a database suitable for storing this request.”

Many of my colleagues and clients have asked me about the following error thrown when attempting to export an Exchange Server mailbox to PST, so I thought it would be a good idea to quickly write a post about it.

Problem

You attempt to export a mailbox to PST via the Exchange Admin Center but receive the following error:

Couldn’t locate a database suitable for storing this request.

image

Using the New-MailboxExportRequest cmdlet displays a similar error:

[PS] C:\Windows\system32>New-MailboxExportRequest -Mailbox mbraithwaite -FilePath "\\tmrfp09\archive$\Outlook Archive\mbraithwaite.pst"
Couldn't locate a database suitable for storing this request.
    + CategoryInfo          : InvalidArgument: (mbraithwaite:MailboxOrMailUserIdParameter) [New-MailboxExportRequest], MailboxDatabase...manentException
    + FullyQualifiedErrorId : [Server=contBMEXMB01,RequestId=c7446094-7d17-4e06-90c4-07be8ca10829,TimeStamp=8/23/2017 2:46:00 PM] [FailureCategory=Cmdlet-MailboxDatabaseVersionUnsupportedPermanentException] 4B192EAA,Microsoft.Exchange.Management.Migration.MailboxReplication.MailboxExportRequest.NewMailboxExportRequest
    + PSComputerName        : contbmexmb01.contoso.com

[PS] C:\Windows\system32>

image

Solution

This error is thrown when you try to export a mailbox that resides on a different Exchange version than the admin console you are working from.  In the example above, the attempt was made from the Exchange 2016 admin center, but the mailbox actually resides on an Exchange 2010 server.  Simply execute the export job from the Exchange Management Shell on one of the Exchange 2010 servers to get the mailbox to export.
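For example, running the same request from the Exchange Management Shell on one of the Exchange 2010 servers (same mailbox and UNC path as above) should queue the export, and its progress can then be checked:

New-MailboxExportRequest -Mailbox mbraithwaite -FilePath "\\tmrfp09\archive$\Outlook Archive\mbraithwaite.pst"
Get-MailboxExportRequest | Get-MailboxExportRequestStatistics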

Tuesday, August 22, 2017

Exchange 2010 users are no longer able to connect via Outlook Anywhere while migrating to Exchange 2016

I recently had to migrate a client from Exchange 2010 to 2016 and quickly noticed that Outlook Anywhere no longer worked after redirecting Outlook Anywhere and other services, such as Autodiscover and webmail, to the new server.  Outlook Anywhere continued to work for users migrated to Exchange 2016 but not for users still on the legacy Exchange server.  Using the Outlook Connectivity feature of the Remote Connectivity Analyzer (https://testconnectivity.microsoft.com/) would fail and throw the following error:

Attempting to send an Autodiscover POST request to potential Autodiscover URLs.

Autodiscover settings weren't obtained when the Autodiscover POST request was sent.

image

Additional Details

Elapsed Time: 1504 ms.

image

Test Steps

image

The Microsoft Connectivity Analyzer is attempting to retrieve an XML Autodiscover response from URL https://autodiscover.domain.com:443/Autodiscover/Autodiscover.xml for user user@domain.com.

The Microsoft Connectivity Analyzer failed to obtain an Autodiscover XML response.

image

Additional Details

A Web exception occurred because an HTTP 400 - BadRequest response was received from Unknown.
HTTP Response Headers:
request-id: 0d7c484b-cdfb-42eb-bd3b-8d5b6dfb4844
X-CalculatedBETarget: exchange2010-02.domain.com
Persistent-Auth: true
X-FEServer: exchange-2016-02
Strict-Transport-Security: max-age=157680000
Content-Length: 346
Cache-Control: private
Content-Type: text/html; charset=us-ascii
Date: Wed, 16 Aug 2017 17:06:18 GMT
Set-Cookie: X-BackEndCookie=S-1-5-21-206374890-975330658-925700815-6573=rJqNiZqNgauyrbqnt7zPzdGLkJSWkJKWk5OakZGWipLRnJCSgc7GzMjGxsjGy8iBzc/OyNLPx9LOyavOyMXOycXOxw==; expires=Wed, 16-Aug-2017 17:16:18 GMT; path=/Autodiscover; secure; HttpOnly
Server: Microsoft-IIS/8.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET

Elapsed Time: 1504 ms.

I was unable to find an official Microsoft KB describing this issue, but I came across this blog post covering a migration from Exchange 2007 to Exchange 2013:

Exchange 2013 to 2007 Outlook Anywhere Proxy Issue
https://smtp4it.net/2013/12/05/exchange-2013-to-2007-outlook-anywhere-proxy-issue/

… and I can confirm that adding the registry keys described in that post to the Exchange 2010 servers:

image

… and then restarting the servers corrected the Outlook Anywhere problem for Exchange 2010 users during the Exchange 2016 migration:

image

Wednesday, August 16, 2017

Attempting to launch a Citrix XenApp / XenDesktop 7.x application published with a NetScaler VPX fails with: “Unable to launch your application. Contact your help desk with the following information: Cannot connect to the Citrix XenApp server. Network issues are preventing your connection. Please try again. If the problem persists, please call your help desk.”

Problem

You attempt to launch a Citrix XenApp / XenDesktop 7.x application published with a NetScaler VPX:

image

image

The following Citrix Receiver Remote Desktop Connection window is presented and displays the progress bar:

Starting…

More information

image

Clicking on the More information button displays:

Connection in progress…

Less information

image

The progress bar does not proceed any further and the process eventually fails with the message:

Unable to launch your application. Contact your help desk with the following information:

Cannot connect to the Citrix XenApp server. Network issues are preventing your connection. Please try again. If the problem persists, please call your help desk.

image

Attempting to launch the XenApp desktop displays the launch window:

image

… but will fail with:

The connection to “XenApp Weir Desktop” failed with status (Unknown client error 1110).

image

Solution

While there could be several reasons why this error is thrown, one possible cause is that the Citrix session reliability port, TCP 2598, is blocked between the NetScaler and the application server.  Ensure that the NetScaler can reach the XenApp server via TCP port 2598.
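One quick way to sanity-check the port from a Windows machine (ideally one on the same network segment as the NetScaler subnet IP, since firewall rules can differ by source) is PowerShell's Test-NetConnection; the server name below is a placeholder, and a reachable port reports TcpTestSucceeded : True:

Test-NetConnection <XenAppServerFQDN> -Port 2598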

Friday, July 14, 2017

“vSphere Replication does not support changing the length of a replicated disk.” error is thrown when attempting to expand a hard disk of a replicated virtual machine

Problem

You attempt to expand a hard disk of a vSphere Replication replicated virtual machine but immediately receive the following error:

Reconfigure virtual machine

Invalid or unsupported virtual machine configuration.

See the error stack for details on the cause of this problem.

vSphere Replication does not support changing the length of a replicated disk.

image

Solution

The error is thrown because vSphere Replication prevents sizing changes to the protected copy of the virtual machine (source) while it is replicated to a recovery copy of the virtual machine (target).  This makes sense, because the replication engine likely tracks changes in a way that a change in disk size would break.  VMware has released the following two KBs explaining the steps required to expand a replicated virtual machine’s hard disk:

Resizing virtual machine disk files that are protected by vSphere Replication (VR) using VMware vCenter Site Recovery Manager (2042790)
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2042790

Cannot resize the vmdk files during replication which are protected by vSphere replication (2052883)
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2052883

… but I find the instructions aren’t completely clear, so I thought I’d demonstrate the process with screenshots so there is no confusion.

Step #1 - Document Replicated Virtual Machine's vSphere Replication Configuration

Begin by documenting the replicated virtual machine's vSphere Replication configuration, because you will need this information when specifying the Target Location in a later step (screenshots usually suffice, assuming the path fits in the text field):

image

Ensure that you can read the full path of the Target Location:

image

The same applies to every Hard disk Target Location field:

image

image

Document the Quiescing method configuration:

image

Document the Recovery settings; I usually select Cancel afterwards to avoid unintentionally making any changes:

image

Step #2 - Rename the Replicated Virtual Machine's Datastore Folder(s)

Proceed to browse the Target Location datastore where the replicated virtual machine is stored.  Note that this is the replicated copy and NOT the live copy:

image

One of the reasons it is important to document the Target Location of the replicated copy is that the VMDK files are not always stored in the same directory as the VMX files, as shown in this example:

image

Proceed to rename the replicated copy's folder.  Remember that this is the replicated copy and NOT the live copy:

image

Rename any additional folders that store the replicated virtual machine's files:

image

Step #3 - Stop the Virtual Machine's Replication 

I find step #2 outlined in the KB:

Cannot resize the vmdk files during replication which are protected by vSphere replication (2052883)
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2052883

... unclear but it states:

2. Disable replication of the virtual machine you want to resize.

The fact that there is no disable option causes confusion.  Step #3 in the KB article:

Resizing virtual machine disk files that are protected by vSphere Replication (VR) using VMware vCenter Site Recovery Manager (2042790)
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2042790

... is much clearer as it states:

3. Stop replication for the virtual machine at the protected site using the vSphere Replication User Interface (UI).

So with this in mind, proceed to right-click the replicated virtual machine in the vSphere Replication console and select Stop:

image

image

Note that it is important that you have renamed the Target Location virtual machine folders.  From what I've seen, if the replicated VM was seeded, the target files are not deleted when replication is stopped, but if the replicated VM was not seeded, the files are deleted.

image

You should now see tasks executed under the Recent Tasks pane indicating replication is being disabled for the virtual machine:

image

The virtual machine should no longer be displayed once the operation completes.

Step #4 - Expand Source/Live Virtual Machine's VMDK

With replication stopped, you should now be able to expand the source/live virtual machine's VMDKs, so proceed to expand them to the required size.

Step #5 - Expand Target/Replicated Virtual Machine's VMDK

Since the target/replicated virtual machine is not inventoried on a host, expanding the drives needs to be done with the vmkfstools command.  Proceed by accessing the console or SSH of a host that has access to the datastore and navigate to the directory of the renamed folders containing the replicated virtual machine files. 

Note: the ls -lah command can be used to list the files at the command line.

Once in the directory containing the files, proceed to increase the hard drive VMDK file with the command:

vmkfstools -X <newSize>G <filenameOfVMDK>
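For example, to grow a hypothetical disk to 100 GB (the datastore, folder, and file names below are placeholders), run the command against the descriptor .vmdk, not the -flat file:

cd /vmfs/volumes/datastore1/VM01-renamed
vmkfstools -X 100G VM01.vmdk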

A similar output below will be displayed upon successfully increasing the VMDK:

image

Refreshing the datastore browser will show the new size of the VMDK:

image

Continue by renaming the folder back to the original name:

image

image

Proceed to reconfigure replication for the virtual machine:

image

image

image

image

The default Target location will most likely be different from the folder containing the replicated VMDKs, so use the previously documented configuration to select the same Target location as the original:

image

Once the previous Target location has been configured, the number of hard disks of the replicated virtual machine will be displayed (there are 4 in this case):

image

Configure all of the hard disks to use the same folder as the previous location:

image

image

Selecting the folder with the existing replicated VMDK will display the following message:

Replication Seed Confirmation

Duplicate file found. Do you want to use this file as a seed?

Select Yes when receiving this prompt:

image

image

Continue and repeat the same procedure for the rest of the disks; the same Replication Seed Confirmation prompt should be displayed:

image

image

----------------------------------------------------------------------------------------------------------------------------------------------

Note that I’ve noticed there are times when the wizard prompts for all of the disks at the same time rather than prompting for each individual disk as shown above:

image

----------------------------------------------------------------------------------------------------------------------------------------------

Configure the Quiescing method as previously documented:

image

Configure the Recovery settings as previously documented and complete the configuration by clicking Finish:

image

The replicated virtual machine should now be displayed again with a Status of Initial Full Sync:

image

Clicking on the i icon in the GUI provides additional information:

image

To obtain more information on the status of the synchronization, log onto the ESXi host where the protected VM is inventoried and execute:

vim-cmd vmsvc/getallvms

... to list all the VMs along with their Vmid:

image

Note the Vmid and execute the command:

vim-cmd hbrsvc/vmreplica.getState <Vmid>

This will display an output similar to the following:

image

I usually execute the vim-cmd hbrsvc/vmreplica.getState command periodically to check the progress, as it provides more information:

image
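If you prefer to watch the progress continuously from the ESXi shell, a simple loop such as the following works (12 is a hypothetical Vmid taken from the getallvms listing):

while true; do vim-cmd hbrsvc/vmreplica.getState 12; sleep 60; done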

The time it takes for the process to complete will vary depending on the size of the virtual machine, but note that only changes are replicated over, not the full virtual machine.  The following are some screenshots taken during the synchronization:

image

image

After the required time, the virtual machine should return to an OK status:

image

Hopefully this helps anyone looking for the full process of expanding a replicated virtual machine’s hard disk.