Avantgarde Technologies

<a href="http://www.avantgardetechnologies.com.au">Avantgarde Technologies</a>
Perth's IT Experts

Friday, March 27, 2015

PowerShell - Find All Files Beginning With

A customer of mine was hit with another one of those viruses which encrypt all data on shared drives mapped back to the file server.  The entire shared drive was encrypted and users were no longer able to access documents on the volume.

I restored all encrypted files from backup, however I was still left with HELP_DECRYPT ransomware files in every directory on the file server.

As a result I needed an easy way to find and delete each of these files.


First set the path you want to search, mine was H:\Shared.

Next, search for any files whose name contains HELP_DECRYPT with the following command:

Get-ChildItem $Path -Recurse | Where{$_.Name -Match "HELP_DECRYPT"}

This went through and listed all of the HELP_DECRYPT files in every directory of the file server recursively.

After you have carefully gone through all the results and confirmed that no legitimate files were listed, you can pipe the output from the Get-ChildItem command into the Remove-Item cmdlet.

After piping the output into Remove-Item, run the command to list the items again to ensure they were all deleted correctly.  Getting no output as per above means the files were removed successfully.
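Putting the steps above together, a minimal sketch looks like this (the H:\Shared path is from my environment - substitute your own):

```powershell
# Path to search - adjust to your own shared drive
$Path = "H:\Shared"

# List every file whose name contains HELP_DECRYPT
$hits = Get-ChildItem $Path -Recurse | Where-Object { $_.Name -Match "HELP_DECRYPT" }
$hits | Select-Object FullName

# Once you have reviewed the list, delete the files
$hits | Remove-Item -Force
```

Adding -WhatIf to the Remove-Item call on a first pass is a cheap safety net before deleting anything for real.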

Saturday, March 21, 2015

Repairing ContentIndexState on DAG Nodes in Exchange 2013

In Exchange Server 2013, the content index state on databases can sometimes become corrupted.  When this occurs, the Exchange 2013 FAST search technology is no longer available for the database, meaning people cannot search for content from OWA, ActiveSync or Outlook in online mode.

In DAG environments, you can simply reseed the Content Index State from a healthy node in the cluster.  I will show you how to perform this in this blog post.

Here we have two databases in a DAG cluster which have the index status in a failed state:

To manually update the ContentIndexState from a healthy node simply run the following command:

Update-MailboxDatabaseCopy "database\server" -CatalogOnly

In my case the database I wanted to update was "CCEX2-DB-02" on server "CCEX1", so I ran:

Update-MailboxDatabaseCopy "CCEX2-DB-02\CCEX1" -CatalogOnly

After running the command you can see we brought the index for the database back to a healthy state.

Repeat this process for any other databases with a failed content index.
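To find every database copy that needs this treatment in one pass, something like the following should work (Get-MailboxDatabaseCopyStatus exposes a ContentIndexState property):

```powershell
# List all database copies whose content index is not healthy
Get-MailboxDatabaseCopyStatus * |
    Where-Object { $_.ContentIndexState -ne "Healthy" } |
    Select-Object Name, Status, ContentIndexState
```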

Friday, March 20, 2015

Public Folder Migration Error: Property expression isn't valid

When migrating public folders from legacy Exchange 2007 and Exchange 2010 environments to Exchange 2013, you may receive the following error message:

Error: Property expression "Anglicare RT" isn't valid. Valid values are: Strings formed with characters from A to Z (uppercase or lowercase), digits from 0 to 9, !, #, $, %, &, ', *, +, -, /, =, ?, ^, _, `, {, |, } or ~. One or more periods may be embedded in an alias, but each period should be preceded and followed by at least one of the other characters. Unicode characters from U+00A1 to U+00FF are also valid in an alias, but they will be mapped to a best-fit US-ASCII string in the e-mail address, which is generated from such an alias.

This is caused by an invalid alias format, generally created by legacy versions of Exchange such as 2000 or 2003.  Exchange 2010/2013 do not support spaces in the public folder alias.

On the legacy Exchange 2007/2010 server open up Public Folder Management console and navigate to the Public Folder in question.  On the Exchange General tab you will receive an error message saying the object contains invalid data.

Simply remove the space from the alias; spaces are no longer supported.

Next go back to your Exchange 2013 server and resume your public folder migration request.
You may need to repeat this process a few times as it is likely multiple public folders have incorrect aliases.
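Rather than letting the migration trip over bad aliases one at a time, you can hunt them all down up front.  A sketch, run on the legacy Exchange 2007/2010 server (this only covers mail-enabled public folders, which are the ones with aliases):

```powershell
# Find mail-enabled public folders whose alias contains whitespace
Get-MailPublicFolder -ResultSize Unlimited |
    Where-Object { $_.Alias -match "\s" } |
    Select-Object Name, Alias
```

You could pipe the results into Set-MailPublicFolder to strip the spaces, but review the list manually first.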

Monday, February 23, 2015

VSS Snapshot error. The maximum number of snapshots for this volume has been reached.

Today on a customer's SBS 2008 server still running Backup Exec 2010, backups started failing with the following error.

 - AOFO: Initialization failure on: "\\JCC-SBS\Microsoft Information Store\First Storage Group". Advanced Open File Option used: Microsoft Volume Shadow Copy Service (VSS).

V-79-10000-11234 - VSS Snapshot error. The maximum number of snapshots for this volume has been reached.  Microsoft Volume Shadow Copy Service (VSS)  will not allow any more snapshots.

The Advanced Open File Option was set to Automatically select open file technology - I changed this to use "System - Use Microsoft Software Shadow Copy Provider".

After making this change, I tested another backup and it completed successfully.

Interestingly, if you read the error, it was already using the Microsoft Volume Shadow Copy provider yet it failed.  Hardcoding it in the settings of the backup job did the trick.

Phantom Calls to Cisco Phone with Hosted VOIP Service - How to Lockdown

An issue was experienced where a number of VOIP phones on a customer network were receiving phantom calls from the Internet.  A phantom call generally occurs when someone is scanning for SIP devices on a network using a port scanning program.  A port scan against SIP (usually UDP 5060 and 5061) will cause many SIP handsets to ring, and when the user answers, no one is there!

As you could gather, this is very annoying for users on the network.

The solution for this is locking down SIP so that only the public IP addresses of your VOIP provider's SIP gateway can reach the public interface of the router.  This customer was using a Cisco 887va router on a DSL connection, so the public interface in this instance is Dialer0.

First of all you need to create an access rule.  Here is the access rule I created (this matches the Faktortel VOIP provider's public IP addresses here in Australia).  Faktortel also uses UDP ports 5062, 5063, 5064 and 5065 for SIP communication.

ip access-list extended VOIPLockDown
remark allow SIP from certain addresses
remark one block of permit lines per provider IP - the addresses have
remark been removed from this post, substitute your provider's SIP gateway IPs
permit udp host <provider-ip> any eq 5060
permit udp host <provider-ip> any eq 5061
permit udp host <provider-ip> any eq 5062
permit udp host <provider-ip> any eq 5063
permit udp host <provider-ip> any eq 5064
permit udp host <provider-ip> any eq 5065
remark repeat the six permit lines above for each remaining provider IP
remark block SIP from all other addresses
deny udp any any eq 5060
deny udp any any eq 5061
deny udp any any eq 5062
deny udp any any eq 5063
deny udp any any eq 5064
deny udp any any eq 5065
remark Allow all other IP traffic
permit ip any any

This rule permits all SIP traffic from the provider's public IP addresses, then blocks all other traffic entering on these ports, and finally permits any/any.

Next you need to assign the access rule to the public interface on the router you wish to filter.  Because this is a Cisco 887va router on DSL, the public interface is Dialer0.  Simply apply the access rule to Dialer0 as follows:

interface dialer0
ip access-group VOIPLockDown in

After applying this config change, all phantom calls to phones on the network stopped.

Important: Some SIP providers use both TCP and UDP for SIP, so it can be useful to block both UDP and TCP on these port numbers.  For this customer, the SIP handsets only supported UDP so there was no point blocking the TCP ports.  If you're not sure, block both by slightly modifying the config above.
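If you do want to cover TCP as well, the extra deny lines would look something like this, placed before the final permit ip any any (with matching permit tcp lines for your provider's addresses above them):

deny tcp any any eq 5060
deny tcp any any eq 5061
deny tcp any any eq 5062
deny tcp any any eq 5063
deny tcp any any eq 5064
deny tcp any any eq 5065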

Tuesday, February 3, 2015

Direct Access Clients Cannot Establish Network Connectivity after NLS Becomes Unavailable

In Microsoft DirectAccess deployments, you need to configure a server (or server cluster) on your network to act as a Network Location Server (NLS).  The NLS is simply a web server with an HTTPS server certificate... see the following TechNet article on how to configure an NLS:


If the DirectAccess client can establish a connection to the NLS, it will believe that it is on the internal network and the DirectAccess tunnels will not be established. If a connection to the NLS cannot be established, the DirectAccess client believes it is outside the corporate network and will attempt to establish DirectAccess connectivity. It is for this reason that the NLS should not be resolvable in public DNS and should not be reachable externally. If it is, DirectAccess clients will always think they are inside the corporate network, and DirectAccess will never be established.

Also, it is extremely important that the NLS be highly available. If the NLS server is offline or unreachable for any reason, DirectAccess clients that are on the internal network will suddenly think they are outside the corporate network and attempt to establish DirectAccess connectivity. This will fail, leaving DirectAccess clients unable to connect to any internal resources until the NLS is once again available.  For this reason it is strongly recommended that you implement at least two NLS servers in a cluster, using either Network Load Balancing (NLB) or an external hardware load balancer.

However, for smaller deployments of DirectAccess an NLS cluster for redundancy is not always practical, and sometimes only a single NLS server is deployed.  This is often the case with all-in-one DirectAccess deployments running Windows Server 2012 R2, where a single DirectAccess server acts as both the NLS and the DirectAccess endpoint.  If the all-in-one DirectAccess server fails, clients on the internal network will no longer believe they are on the internal network (as they cannot contact the NLS) and will attempt to establish a DirectAccess connection.  This will also fail, leaving them with no connectivity to any domain names configured in the Name Resolution Policy Table (NRPT) in the DirectAccess group policy.  Most DirectAccess deployments have the Active Directory domain namespace configured as an NRPT policy, and sometimes additional domain names.  To understand what the NRPT is, please see the following blog post:


What Happens if You're in this Situation?

What happens if you're in this situation, where you have a single DirectAccess server which has failed and as a result all DirectAccess-configured clients on the internal network now have no network connectivity?

The NRPT Policies which are pushed out via Group Policy are located under the following registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\DnsClient\DnsPolicyConfig

You can manually connect to the DirectAccess computers by IP address using the "Connect Network Registry" option in regedit.exe.  Once you are connected to the remote computer, you can delete the two NRPT policies named DA-{GUID}.

After you have deleted the two policies remotely, simply reboot the workstation and it will regain connectivity.
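Doing this by hand in regedit does not scale, so if PowerShell remoting is enabled on the clients (and you can reach them by IP or by a name outside the NRPT), a sketch like the following could automate it - the computer names here are hypothetical:

```powershell
# Hypothetical list of affected workstations
$computers = "PC01", "PC02"

Invoke-Command -ComputerName $computers -ScriptBlock {
    # Delete the NRPT policies pushed out by the DirectAccess GPO
    Remove-Item "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\DnsClient\DnsPolicyConfig\*" -Recurse

    # Reboot so the DNS client picks up the change
    Restart-Computer -Force
}
```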

I know this is a very manual process and can be very time consuming if there are hundreds of computers experiencing the issue - I would like to hear from Microsoft about a better solution in this scenario.  I sent off an email regarding this and hopefully will hear back soon!

It is also important to note that for my customer whose DirectAccess server failed, rebuilding a new server with the same IP address, hostname and configuration did not resolve this issue.  I still needed to delete the NRPT policies from the registry on the affected computers and reboot.  After this, the clients regained connectivity to the internal network and were able to refresh group policy again, downloading the new DirectAccess Client Settings containing the new NRPT policies.

Friday, January 23, 2015

Test Application for Group Policy Software Installation Troubleshooting

Sometimes when diagnosing problems with Group Policy software deployment, it is good to try a different application to rule out issues with the application package you're trying to deploy.  One application package I have found which is great for testing Group Policy software deployment is "RealWorld Paint".  This is a lightweight MSI installer (8.7 MB) and deploys easily through Group Policy to Windows XP, Windows 7 and Windows 8.1 machines.

You can download this MSI installer from the following location:


If you have issues deploying this with Group Policy, then the issue does not lie with the MSI package you're trying to deploy but with something else!

Also check out this common issue with Group Policy application deployment; it may be of help in diagnosing your Group Policy deployment issue:


Tuesday, January 20, 2015

Hyper-V Network Types

Microsoft Hyper-V servers have three network types you can assign to virtual machines and virtual switches.
  • Private
  • Internal
  • External
This is shown below on both a virtual machine and a virtual switch configuration.

What is the difference between these?
An External network type provides communication between a virtual machine and a physical network by creating an association with a physical network adapter on the virtualization server.  This is the most common type used by organisations.
An Internal network provides communication between the virtualization server and the virtual machines.
A Private network only provides communication between the virtual machines, not the Hyper-V host server.
Internal and Private network types can be confusing - the only difference is that Internal also allows the virtual machines to communicate with the Hyper-V host server.
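On Windows Server 2012 and later you can create each switch type from PowerShell as well as the GUI; a quick sketch (the switch names and adapter name are examples):

```powershell
# External: bound to a physical NIC, so VMs can reach the physical network
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet"

# Internal: VMs can talk to each other and to the Hyper-V host
New-VMSwitch -Name "InternalSwitch" -SwitchType Internal

# Private: VMs can only talk to each other
New-VMSwitch -Name "PrivateSwitch" -SwitchType Private
```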

Thursday, January 8, 2015

GFI MailEssentials 2014 SR2 Transport Issues during Exchange Migration

I had an issue with a third party email filtering product GFI MailEssentials 2014 SR2 during an Exchange 2010 to Exchange 2013 migration.  GFI MailEssentials 2014 SR2 is a spam filtering product which you install directly on an Exchange transport server.  It integrates with Exchange server through six transport agents which all perform various tasks as shown below:

When migrating to a new version of Exchange, as part of the process you are required to redirect mail flow to the new Exchange transport server so it can route mail to the legacy transport environment until the mailboxes can be moved.
As the new Exchange 2013 server will be the new external point of connectivity for SMTP, I installed GFI Mail Essentials on the new Exchange 2013 server and redirected mail flow as shown below.
After making this change, users were not able to receive email from external users.  I verified the following things:
  • Exchange 2013 was receiving emails from external users as validated in the SMTP Protocol logs.
  • Exchange 2013 was forwarding emails to the Exchange 2010 server as per standard functionality.
  • Exchange 2010 successfully received the email communication from Exchange 2013 at transport level and was verified in the protocol logs.
  • GFI MailEssentials Transport Agents on the Exchange 2010 server receive the email for processing.
  • GFI MailEssentials did not place the email back into the Exchange Pickup folder to hand it back to Exchange for processing.
I was not able to locate where the emails were moved to within GFI, primarily due to my limited knowledge of the product (to me it is just a custom Exchange transport agent).  I contacted GFI support here in Australia, who were also unable to advise me where the emails went after being relayed to the Exchange 2010 server.  Fantastic - so we have emails going into a black hole, disappearing forever.
One thing GFI support were able to advise me of was that their transport agents only filter email sent from a public IP address; all private IP addresses are excluded from filtering.  This was in line with my symptoms: all users internally were able to receive emails sent from internal devices such as printers being relayed through the Exchange 2013 server.
In the following screenshot I have included the message tracking log from the Exchange 2010 server.  The first two entries are from when the Cisco router was forwarding email directly to the Exchange 2010 server.  All other entries are from when mail was relaying through Exchange 2013.  As you can see, email is received via SMTP but never delivered to the information store via the Exchange store driver, due to GFI not releasing the mail.
GFI MailEssentials modifies the header of emails that are filtered, and appears not to deal correctly with emails which already have their header modified by another instance of GFI MailEssentials.
As a workaround I simply disabled the GFI transport agents on the Exchange 2010 server to prevent it from interfering with mail processing, as shown below:
This resolved the issue and did not compromise the environment as security and spam filtering was now being performed by GFI on the Exchange 2013 server.
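Transport agents can be disabled from the Exchange Management Shell as well as from the GFI console; a sketch (the GFI agent names vary by version, so list them first - the name below is an example):

```powershell
# List all transport agents and their enabled state
Get-TransportAgent

# Disable a specific agent by name (check your own list for the exact names)
Disable-TransportAgent -Identity "GFI MailEssentials Agent"
```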

Wednesday, January 7, 2015

Microsoft Exchange Virus Scanning API (VSAPI) Removed

Back at the MVP Summit 2012 in Redmond, Microsoft announced to the Exchange MVP community that in Exchange 2013 they are going to pull the Microsoft Exchange Virus Scanning API (VSAPI) from the product.  This API is what allows anti-virus products to scan inside the information store for emails.  This early news came to me with a big smile on my face!

For years I have been advising customers NOT to install anti-virus products which scan the information store, as it causes unnecessary load on the information store and has caused database corruption at some of my customers.  Despite my advice, some of my clients went ahead and installed this functionality anyway to meet a "compliance" checkbox which some integrator flagged in a security audit.

I have always advised customers to perform anti-virus scanning at the transport level (SMTP) and flag emails before they reach the database, to improve performance and allow for greater scalability.  It is important to note, however, that anti-virus products are always releasing new definitions, and it is possible that a virus was let in because the definitions could not detect it initially but could detect it at a later date.  Hence, this still poses a risk to the business and could be caught using third party anti-virus products which use the Microsoft Exchange Virus Scanning API (VSAPI), right?  Well yes, this is true, however I still do not recommend it.  A better solution is to run cached Exchange mode and allow client-side anti-virus products to scan the user's offline cache "OST file" for viruses on a regular basis, offloading this work from your already busy mail servers.  This approach meets the same objective and does not require the Microsoft Exchange Virus Scanning API.

One of my customers who went against my advice and refused to disable information store scanning due to compliance requirements on Exchange 2010 now has no option but to remove it.  Microsoft Support must have finally had enough of dealing with issues from third party anti-virus products causing information store problems, just like me!

Have a look at the following screenshot comparing GFI MailEssentials 2014 SR2, which has the ability to scan the information store on an Exchange 2010 server, with the same product on an Exchange 2013 server.  Under the Email Security menu on the Exchange 2013 server (on the right), you will see the feature gone for good... not just in GFI but in all anti-virus products.

A big thank you to Microsoft for removing this API - this is one less argument I need to have with my customers!

Lastly, I still always recommend companies lock down applications with Microsoft AppLocker in the Enterprise edition of Windows and do away with definition-scanning anti-virus solutions; they are a thing of the past.  This is another argument I'm still having with the security compliance guys stuck in the dark ages, but we will save that for another blog post.

Sunday, January 4, 2015

RemoteApp Disconnected This computer can't connect to the remote computer

After deploying a Remote Desktop Services environment on Server 2012 R2, users complained of an issue where they were no longer able to launch RemoteApps from the RD Web Access portal.  The error they received was:

RemoteApp Disconnected

This computer can't connect to the remote computer.

Try connecting again. If the problem continues, contact the owner of the remote computer or your network administrator.

To resolve this issue, open the properties for the application collection experiencing problems, navigate to Security and untick "Allow connections only from computers running Remote Desktop with Network Level Authentication".

This sorted out the problem in my environment.

Wednesday, December 24, 2014

Exchange 2013 FAST Search Technology Failed

I had a customer where Exchange 2013 FAST Search Technology failed and rebuilding the search indexes for the databases did not resolve the problem on a single Exchange 2013 SP1 server.

In this blog post I'm going to share with you the symptoms of my issue along with the resolution.

Symptoms of Issues

Users on the network were unable to search their mailbox from Outlook Web App or using Microsoft Outlook in online mode.

In addition to not being able to search in Outlook Web App or Microsoft Outlook, the Test-ExchangeSearch cmdlet failed with "Time out for test thread".

Despite this issue, the ContentIndexState on the databases remained Healthy as reported by the Get-MailboxDatabaseCopyStatus cmdlet, as shown below:

In the server's Application log, the following error was logged numerous times:

Log Name:      Application
Source:        MSExchangeIS
Date:          4/12/2014 1:54:06 PM
Event ID:      1012
Task Category: General
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      ExchangeServer.domain.local
Exchange Server Information Store has encountered an error while executing a full-text index query ("eDiscovery search query execution on database a0976351-948a-4625-8840-f649f8b98e0e failed."). Error information: System.ServiceModel.EndpointNotFoundException: Could not connect to net.tcp://localhost:3847/. The connection attempt lasted for a time span of 00:00:02.0821592. TCP error code 10061: No connection could be made because the target machine actively refused it  ---> System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it

The catalog folder was only around 16 MB in size when the database was 150 GB.  From my experience, a FAST index for a 150 GB database is generally between 5 GB and 10 GB in size, depending on how small the items inside the database are.

Issue Resolution

To resolve the issue the following steps were taken in order.

1. Stop the Microsoft Exchange Search and Microsoft Exchange Search Host Controller service.
2. Remove all the files under the "Fsis" folder:

C:\Program Files\Microsoft\Exchange Server\V15\Bin\Search\Ceres\HostController\Data\Nodes\Fsis
We moved these files to another folder for backup purposes.

3. Open the Exchange Management Shell (EMS) with "Run as Administrator".

4. Navigate to directory C:\Program Files\Microsoft\Exchange Server\V15\Bin\Search\Ceres\Installer

5. Launch the script by running the command below:

.\installconfig.ps1 -action I -datafolder "c:\Program Files\Microsoft\Exchange Server\V15\Bin\Search\Ceres\HostController\Data"
This script will recreate the files under "C:\Program Files\Microsoft\Exchange Server\V15\Bin\Search\Ceres\HostController\Data\Nodes\Fsis" that we backed up/removed previously.

6. Once the script installs the data, you need to uninstall and reinstall it again (don't ask - it did not work for me without doing this):

.\installconfig.ps1 -action U -datafolder "c:\Program Files\Microsoft\Exchange Server\V15\Bin\Search\Ceres\HostController\Data"

Then install the search component again with the following command:

.\installconfig.ps1 -action I -datafolder "c:\Program Files\Microsoft\Exchange Server\V15\Bin\Search\Ceres\HostController\Data"

7. Force Active Directory Replication from a domain controller in the same AD Site as the Exchange 2013 server with repadmin /syncall /APeD.

8. Check on the Exchange Server with Get-MailboxDatabaseCopyStatus and review the ContentIndexState.

When the content index process has finished crawling, we can see the index size of the folder is now 8GB and searching is working as normal again.
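For reference, the service stop in step 1 (and the restart the rebuild implies) can be scripted.  A sketch using the Exchange 2013 service names as I understand them - MSExchangeFastSearch for Microsoft Exchange Search and HostControllerService for the Search Host Controller; verify these with Get-Service on your own server:

```powershell
# Stop the search services before touching the Fsis folder
Stop-Service HostControllerService
Stop-Service MSExchangeFastSearch

# ...perform steps 2 to 6 above...

# Start the services again so the index rebuild can begin
Start-Service MSExchangeFastSearch
Start-Service HostControllerService
```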


Tuesday, December 23, 2014

Filtering Exchange MessageTrackingLog Output from Get-MessageTrackingLog Cmdlet

In today's blog I am going to show you how to filter message tracking log output from the Get-MessageTrackingLog command.  I had a customer in an organisation, example.com, who wanted to see all external recipients who received an email with the message subject "Staffing Update".

Without any PowerShell filtering, you can view all tracking log entries for this message against all transport servers in your organisation by running:

Get-TransportService | Get-MessageTrackingLog -MessageSubject "Staffing Update"

Note: If you're running Exchange 2007 or 2010, replace Get-TransportService with Get-TransportServer.

This command will return all matching messages relayed in the last 30 days, which is how long the message tracking logs are kept by default.

However, to meet my customer's requirement, we want the output to exclude any recipients matching "*@example.com" so that we can focus on emails leaving the company.  When I say external users or "leaving the company", I mean users who do not have an email address at "*@example.com" - you may have an Exchange organisation with multiple accepted domains set up for multi-tenancy!

To do this we need to do some filtering with PowerShell.  This can be achieved using the where{ } command as follows:

Get-TransportService | Get-MessageTrackingLog -MessageSubject "Staffing Update" | where{$_.recipients -notlike "*@example.com"}

But say we only want to see which users with an @example.com email address received this message.  Easy - this can be done by changing the -notlike to a -like.

Get-TransportService | Get-MessageTrackingLog -MessageSubject "Staffing Update" | where{$_.recipients -like "*@example.com"}

You can also easily filter on senders by replacing $_.recipients with $_.sender, as shown below:

where{$_.sender -like "*@example.com"}
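To hand the results to someone else, you can push the filtered output straight to CSV; a sketch (the file path is an example, and Recipients is a collection so it is joined into one column):

```powershell
Get-TransportService |
    Get-MessageTrackingLog -MessageSubject "Staffing Update" |
    Where-Object { $_.Recipients -notlike "*@example.com" } |
    Select-Object Timestamp, Sender, @{n="Recipients"; e={$_.Recipients -join ";"}} |
    Export-Csv C:\Temp\StaffingUpdate.csv -NoTypeInformation
```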

Happy Filtering!

Wednesday, December 10, 2014

Microsoft PST Capture 2.0 Not Supported on Exchange 2013

The PST Capture 2.0 tool is not supported with Exchange 2013 despite being documented as supported under the following Microsoft Exchange Product Team blog post:


I have used this tool successfully with previous versions of Exchange Server however with Exchange Server 2013 this tool does not work.  I have tested PST Capture 2.0 with Exchange Server 2013 SP1 and Update Rollup 6, both unsuccessful.  Testing has been performed both in my lab environment and at a customer site.

I logged this issue with Microsoft under a professional services support case in which Microsoft were able to reproduce the issue in their lab environment.  This issue was then escalated within Microsoft and an issue identified with the tool with no fix provided.  Microsoft explained they are under no obligation to develop a hotfix for this tool as the End User Licensing Agreement (EULA) states that the software is provided "as is" and they are not under obligation to support it should issues or bugs be flagged with the software.  This is shown in my screenshot below.

When attempting to import a mailbox with PST Capture 2.0, the following error message is flagged in the tool:

Import error: Error opening mailbox

This error is generally caused by one of the issues documented on the following website:


However with Exchange 2013, none of these resolve the problem.

The PST Capture tool has more advanced logging available for errors, which can be found under the following location:

"C:\ProgramData\Microsoft\Exchange\PST Capture\Logs\MapiComServer 15"

The technical error which is generated when attempting to import to Exchange 2013 is as follows:

Failed to open default store - HR 8004011d

Hopefully Microsoft does get around to developing a fix or workaround for this product, as it would be a shame to purchase this tool from Red Gate Software and then not maintain it, making it redundant.

Wednesday, December 3, 2014

How to Change Network Profile in Registry for 2008 Server

In a previous post I wrote about how to change the network location or "firewall profile" in Windows Server 2012 using PowerShell, as you can no longer do this from the GUI like you could in previous versions of Windows Server. That article can be found at the following URL:


However, yesterday I had a problem at a customer running Windows Server 2008 where I was unable to change the network location from the GUI using Network and Sharing Centre, and as a result I had to resort to the registry.  Tracking down the registry key responsible for the network profile was a difficult task, so I decided to blog this one.

All network profiles are located under the following registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles

Each network card is represented below it by a sub key with a GUID.  The network location is controlled by a DWORD value called Category.

Public = 0
Home = 1
Domain = 2

As you can see, my Category is set to 2, which means this network adapter is using the Domain firewall profile.
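If you prefer to change the value from a prompt rather than regedit, something like the following should work - the all-zero profile GUID is a placeholder, so enumerate the Profiles key to find your own:

```powershell
# The profile GUID below is an example - look up yours under the Profiles key
$key = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles\{00000000-0000-0000-0000-000000000000}"

# 0 = Public, 1 = Home, 2 = Domain
Set-ItemProperty -Path $key -Name Category -Value 2
```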

Tuesday, December 2, 2014

Remove PowerShell and Server Manager Pinned Icons from Start Menu in Server 2012

In Windows Server 2012 and Windows Server 2012 R2, by default all users have a PowerShell icon and a Server Manager icon pinned to their start menu.  When setting up a Remote Desktop Session Host with Windows Server 2012 Remote Desktop Services, you may not want users to have the Server Manager or PowerShell icons, especially when desktop access is enabled.

In previous versions of Windows Server there was a Group Policy setting called "Remove pinned programs list from the Start Menu" which administrators could use to remove these pinned applications from the start menu on Remote Desktop Session Hosts.
This policy no longer works on Windows Server 2012, and there is no well documented method for doing this available on the Internet.
In Windows Server 2012, the three default pinned applications on the start menu are located under the following location:
C:\Users\username\AppData\Roaming\Microsoft\Internet Explorer\Quick Launch\User Pinned\TaskBar
The User Pinned\TaskBar folder in the Quick Launch directory does not exist in the Default profile in Windows Server 2012, as shown below.

So how do these icons get created in the users profile and where do they come from?

This process appears to be hardcoded into Windows Server 2012 and executed upon creating a new user profile.  The user profile creation process creates the Quick Launch\User Pinned\TaskBar folder as part of the login, then copies a set of "lnk" shortcut files from the All Users profile.

Server Manager Pinned Icon

The "Server Manager.lnk" file gets copied by the user profile creation process from:

"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Administrative Tools\Server Manager.lnk"

To:

"C:\Users\username\AppData\Roaming\Microsoft\Internet Explorer\Quick Launch\User Pinned\TaskBar\Server Manager.lnk"

Windows PowerShell Pinned Icon

The "Windows PowerShell.lnk" file gets copied by the user profile creation process from:

"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\System Tools\Windows PowerShell.lnk"

to:

"C:\Users\username\AppData\Roaming\Microsoft\Internet Explorer\Quick Launch\User Pinned\TaskBar\Windows PowerShell.lnk"
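To confirm which shortcuts have been copied into a given profile, you can list the contents of the User Pinned\TaskBar folder (a quick check, run as the user in question):

```powershell
# List the pinned shortcut files in the current user's profile
Get-ChildItem "$env:APPDATA\Microsoft\Internet Explorer\Quick Launch\User Pinned\TaskBar"
```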

How to Prevent the Icons from Being Populated to User Profiles

To stop the "Windows PowerShell.lnk" and "Server Manager.lnk" shortcuts from being pinned to the Start menu for all users, simply delete these shortcuts from the All Users profile under "C:\ProgramData" at the following locations:

"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Administrative Tools\Server Manager.lnk"


"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\System Tools\Windows PowerShell.lnk"

Now when new users log in for the first time, the user profile creation process is unable to copy the "Server Manager.lnk" and "Windows PowerShell.lnk" shortcuts as they no longer exist, as shown in the following screenshot.
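The two deletions described above can be scripted as follows (a sketch; -ErrorAction SilentlyContinue simply skips servers where the shortcuts have already been removed):

```powershell
# Remove the source shortcuts so the profile creation process has nothing to copy
$shortcuts = @(
    "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Administrative Tools\Server Manager.lnk",
    "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\System Tools\Windows PowerShell.lnk"
)
Remove-Item -Path $shortcuts -ErrorAction SilentlyContinue
```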

Sunday, November 30, 2014

Windows Server 2012 R2 Print Management Console - Displays No Printers

There is a bug in Windows Server 2012 R2 which causes the Print Management Console to display no printers, as shown in the screenshot below.  This makes it impossible to deploy printers with Group Policy using the local server.

The Print Management Console does, however, show the drivers for any installed printers.

If you connect to the print server remotely from another print server such as a 2008 R2 server, the printers are displayed in the Print Management console as shown below.

This bug is triggered when Microsoft hotfix KB2995388 is installed on a 2012 R2 print server, which is the case in our scenario as shown in the "systeminfo" extract:

To resolve the issue, uninstall KB2995388 on the 2012 R2 print server and reboot.
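If you prefer to script the removal rather than use Programs and Features, the hotfix can be uninstalled with wusa.exe (a sketch; run from an elevated prompt, and schedule the reboot for a maintenance window if the server is in use):

```powershell
# Uninstall KB2995388 silently, suppressing the automatic restart
wusa.exe /uninstall /kb:2995388 /quiet /norestart

# Reboot once ready for the change to take effect
Restart-Computer
```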

Tuesday, November 18, 2014

Distributed File Server Replication on Windows Server 2012 R2 Bug 2951262

I came across a bug with Windows Server 2012 R2 where a spoke server randomly failed to replicate to the hub server.  In total there were three DFSR servers participating in replication within a single replication group.

The following diagram shows an overview of the topology and the server experiencing issues:

SPOKE2 randomly stopped replicating to the hub server HUBSERVER and no longer responded to DFS health reports, resulting in the health report generation process hanging indefinitely.

When the DFS-R service on SPOKE2 is restarted, SPOKE2 is reported to be in an Indeterminate state for approximately 2-3 hours.

After being in an Indeterminate state for 2-3 hours, the status changes to "Auto Recovery" for approximately six more hours.  During this time the DFS-R service generates a large amount of disk activity as it goes through and checks all the files.  This can be observed using Windows Resource Monitor.

The Auto Recovery process never completes successfully, nor are there any errors in the event log on SPOKE2.  Rebooting SPOKE2 or restarting the DFS-R service results in the server going back to an Indeterminate state for another 2-3 hours and then starting the Auto Recovery process again.


This issue is caused by a bug in Windows Server 2012 R2 documented in Microsoft KB 2951262.


You must manually request the hotfix, which Microsoft will email to you, and install it on the affected servers.  After installing the hotfix the server will revert to an Indeterminate state and then start Auto Recovery again; however, this time after the Auto Recovery process completes, replication resumes as normal.
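While the server works through recovery, the state of each replicated folder can be monitored from PowerShell via the DFSR WMI provider (a sketch; to the best of my knowledge the State values map as noted in the comment, so a server stuck in Auto Recovery reports State 3 rather than 4):

```powershell
# Query the DFSR WMI provider for the state of each replicated folder
# (0 Uninitialized, 1 Initialized, 2 Initial Sync, 3 Auto Recovery, 4 Normal, 5 In Error)
Get-WmiObject -Namespace "root\MicrosoftDFS" -Class DfsrReplicatedFolderInfo |
    Select-Object ReplicatedFolderName, State
```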

Public Folder Migration Request with StatusDetail of FailedOther

I was in the process of migrating a customer of mine here in Western Australia from an Exchange 2010 SP3 server to Exchange 2013 SP1 "Modern Public Folders".  After commencing the migration with the New-PublicFolderMigrationRequest cmdlet, the migration request failed shortly afterwards.

Looking at the migration request statistics with the following cmdlet, the StatusDetail came up as FailedOther:

Get-PublicFolderMigrationRequest | Get-PublicFolderMigrationRequestStatistics

When looking into the error in more detail with:

Get-PublicFolderMigrationRequest | Get-PublicFolderMigrationRequestStatistics | fl

The following was logged regarding the error message:

FailureCode: -2146233088
FailureType: DataValidationException
Message: Error: Property expression "Outlook Security Settings" isn't valid.  Valid values are: Strings formed with characters from A to Z (uppercase and lowercase), digits from 0 to 9, !, #, $, &, ', *, +, -, /, =, ?, ^, _, `, {, |, } or ~. One or more periods may be embedded in the alias, but each period should be preceded and followed by at least one of the other characters.

In the public folder structure at my customer, there was a folder named "Outlook Security Settings" as per the error message.

Neither the folder name nor any of its sub-items appeared to contain any of the invalid characters listed in the error message.  Luckily this folder was not required by the business, so I simply removed the problematic public folder from the Exchange 2010 server and repeated the migration request.  After performing this, I was able to successfully complete the public folder migration.
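On the Exchange 2010 server, the offending folder can be located and removed with the public folder cmdlets (a sketch; the identity "\Outlook Security Settings" assumes the folder sits at the root of the tree, and -Recurse on Remove-PublicFolder also deletes any sub-folders, so verify the folder is genuinely unneeded first):

```powershell
# Find the offending folder anywhere in the public folder tree
Get-PublicFolder -Identity "\" -Recurse |
    Where-Object { $_.Name -eq "Outlook Security Settings" }

# Once confirmed, remove it (and its children) before retrying the migration
Remove-PublicFolder -Identity "\Outlook Security Settings" -Recurse
```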

Monday, November 10, 2014

Problems When Upgrading Your Domain and Forest Functional Level from 2003 to 2008 R2

A customer of mine upgraded their Domain Functional Level (DFL) and Forest Functional Level (FFL) to Windows Server 2008 R2 yesterday.  Today when employees started work, they experienced lengthy login times, did not receive their network drive mappings to the file server, and were unable to connect to Exchange Server 2010 with Microsoft Outlook 2010.

The first thing I did was have a look at Active Directory replication after the functional level upgrade using the command "repadmin /showrepl" on one of the Active Directory domain controllers.  This showed the following error:

Last error: -2146892990 (0x80090342)
The encryption type requested is not supported by the KDC

At the 2003 functional level, the Kerberos Key Distribution Centre (KDC) used either RC4-HMAC (128-bit) or DES-CBC-MD5 (56-bit) for Kerberos encryption; when moving to the 2008 Domain Functional Level (or higher), the KDC is upgraded to use AES 128 and AES 256 encryption.

Generally this transition is smooth and does not cause problems, however in this instance the KDC did not detect the functional level change and continued to operate using the legacy 2003 functional level encryption.  As a result, the error "The encryption type requested is not supported by the KDC" was returned.

Resolving this problem was very simple: we restarted the Kerberos Key Distribution Centre service on all of the Active Directory domain controllers in the domain.
Within 5 minutes of the service restart, things were back to normal.
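The service restart can be pushed out to every domain controller from PowerShell (a sketch; assumes the ActiveDirectory module is installed and PowerShell remoting is enabled on the DCs — the KDC's service name is "Kdc"):

```powershell
# Restart the Kerberos Key Distribution Centre service on every DC in the domain
Import-Module ActiveDirectory

Get-ADDomainController -Filter * | ForEach-Object {
    Invoke-Command -ComputerName $_.HostName -ScriptBlock {
        Restart-Service -Name Kdc
    }
}
```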