SharePoint 2013 introduced the concept of Host Named Site Collections (HNSC). Actually, that isn't strictly correct: HNSC were also available in SharePoint 2010 but, for various reasons, were not widely adopted. In its broadest description, HNSC give the appearance of multiple web applications, allowing the creation of third-level domains such as https://hr.myorg.com, https://sales.myorg.com, etc.
This post does not seek to describe in detail the role and configuration of HNSC in SharePoint 2013. Indeed, there are a number of very good blog posts already published which I will include for reference at the end of this post. What this post seeks to highlight is a particular SharePoint bug which I have been unfortunate enough to fall victim to, in the hope of helping anyone else who may have faced or be facing the same issue.
Microsoft are pressing very hard to encourage implementers to make use of HNSC wherever possible, and this is absolutely the right way to go. However, there may still be a need to create multiple/additional web applications when different authentication providers or zones are required. In almost all other cases, the best and recommended practice is to use HNSC and a single web application (3).
When architecting my SharePoint 2013 farm I was keen to embrace the Microsoft recommendations, where possible. It was decided that we would use ADFS as our default authentication provider. Now, when using ADFS the suggested approach is to enable, under Claims Authentication Types, both the ADFS and the Windows authentication providers.
This has the side effect that when a user tries to authenticate, an Authenticate.aspx page (in the SharePoint layouts folder) is presented, asking them to select the desired authentication method. This can be solved with a simple HTTP module which checks the user agent of the incoming request. Alternatively, the discussion (4) mentions the possibility of a redirect rule inserted on the F5 load balancer. Unfortunately, as these options were not available to me and the Authenticate.aspx page was rejected, my options were somewhat limited. There is a way to bypass the multi-authentication-provider selection page (6) which suggests replacing the login page; however, this approach is not supported and does not allow patching/upgrading of SharePoint.
In the end, I decided on creating and extending a web application, setting the Default zone to use Windows credentials (NTLM, Kerberos) as the authentication provider and the Intranet (extended) zone to use ADFS (SAML). This negated the need for HTTP modules, F5 load balancer redirects or forcing users to select an authentication method via the Authenticate.aspx page, while still leveraging HNSC.
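For illustration only, the web application and its extension might be created along these lines. All names, accounts and URLs below are hypothetical, and the ADFS trusted identity token issuer is assumed to have been registered in the farm already:

```powershell
# Create the web application with Windows (NTLM) claims authentication on the Default zone.
$winAuth = New-SPAuthenticationProvider -UseWindowsIntegratedAuthentication
$wa = New-SPWebApplication -Name "SharePoint HNSC" -Port 443 -SecureSocketsLayer `
    -ApplicationPool "HNSC_AppPool" `
    -ApplicationPoolAccount (Get-SPManagedAccount "DOMAIN\spFarm") `
    -AuthenticationProvider $winAuth

# Extend it to the Intranet zone using the ADFS (SAML) trusted identity provider,
# assumed here to be registered under the name "ADFS Provider".
$adfs = Get-SPTrustedIdentityTokenIssuer "ADFS Provider"
New-SPWebApplicationExtension -Identity $wa -Name "SharePoint HNSC Intranet" `
    -Zone Intranet -URL "https://intranet.myorg.com" -SecureSocketsLayer `
    -AuthenticationProvider (New-SPAuthenticationProvider -TrustedIdentityTokenIssuer $adfs)
```

Each zone then carries exactly one authentication provider, so users are never shown the provider selection page.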
Using ADFS (SAML) as your authentication provider results in some significant changes to People Picker (8) behaviour which need to be addressed (9). I addressed these using the LDAP/AD Claims provider or LDAPCP (10) hosted on CodePlex.
Prior to implementing LDAPCP a people picker query would result in both Default (NTLM authN) and Intranet (ADFS authN) profiles being returned in the results. My assumption having implemented LDAPCP was that when users executed a people picker query in the Intranet zone only ADFS authN profiles would be returned as the NTLM authN was not selected for that particular zone as authentication provider.
This unfortunately proved not to be the case. Further testing showed that the anticipated behaviour occurred with Path Based Site Collections (PBSC) but not with HNSC. Armed with my findings I raised a support case with Microsoft and was put in touch with Microsoft Premier Field Engineer Yvan Duhamel, the author of LDAPCP. Yvan was able to reproduce the issue and confirmed that I was facing a bug affecting HNSC and extended web applications when using ADFS authN as the exclusive authentication provider. Yvan offered a workaround, which I had in fact already implemented (11), and said he would need to go back to the product group to ascertain whether a permanent solution would be made available.
Essentially, when a people picker query is executed, a call is made to SPWebApplication.Lookup() to get the SPWebApplication object based on the URL of the current HNSC site. The SPWebApplication.Lookup() method is also responsible for setting an out parameter of type SPAlternateUrl which returns the URL and the zone with which it is associated. In our scenario the code sets this SPAlternateUrl out parameter to null.
Because it is null, the code calls SPClaimProviderManager.GetClaimProvider() for each zone of the web application instead of only the current zone (which does not contain the AD claims provider), and as a result People Picker returned the identities for both authentication providers.
I am happy to be able to say that I have had communication from Yvan recently and Microsoft will look to provide a fix in the April 2015 cumulative update (CU).
I encountered this frustrating little nuance a few days ago whilst backing up a number of Path Based Site Collections and restoring them as Host Named Site Collections. Of the six sites I had been working with, one was returning the banner message shown below:
"We apologize for any inconvenience, but we've made the site read only while we're making some improvements."
Sites being left in Read-only following a site backup is something I have faced on numerous occasions, so it isn't normally something that would cause much consternation. The usual corrective step is to navigate to Site Collection Quotas and Locks, select the affected site and unlock it. Having done so, however, I was presented with the following:
Essentially, the option to reset the lock state was greyed out. Any attempts to unlock using the Set-SPSite cmdlet failed, and a check confirmed the content database was not set to Read-only. A search quickly revealed the following thread: TechNet Forums Site Collection stuck in Read Only mode.
In the thread one victim of the issue had posted the solution offered by Microsoft having opened a support case with them. The solution offered by Microsoft in this case was:
$Admin = New-Object Microsoft.SharePoint.Administration.SPSiteAdministration('http://site.collection.com')
$Admin.ClearMaintenanceMode()
With SharePoint 2013, Microsoft introduced the MaintenanceMode property on the SPSite object which indicates the site is undergoing maintenance and is in a Read-only state. This property is set during certain operations, for example when a site collection is being upgraded, backed up or moved.
If an operation does not terminate cleanly the site may be left in a Read-only state where the MaintenanceMode flag is still set which in turn results in the maintenance banner message being presented.
Microsoft introduced the SPSiteAdministration.ClearMaintenanceMode method as part of the April 2013 CU for SharePoint 2013, which allows this flag to be cleared via the PowerShell command shown above.
From my own perspective, on executing the above command the affected site was unlocked and I was able to continue. It's worth pointing out that the Backup-SPSite and Restore-SPSite cmdlets had both exited cleanly, indicating that both the backup and restore of the site had been successful, so there were no obvious signs that an error had occurred. A check of the site confirmed that to be the case, and it is unclear to me whether the issue occurred during the backup or the restore process.
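If you want to confirm the state before and after clearing the flag, the relevant properties can be inspected directly (the site URL below is a placeholder):

```powershell
# Run from the SharePoint 2013 Management Shell
$site = Get-SPSite "http://site.collection.com"

# True while the maintenance banner is displayed; should be False
# after SPSiteAdministration.ClearMaintenanceMode() has been called
$site.MaintenanceMode

# The general read-only lock, as seen in Quotas and Locks
$site.ReadOnly
```

This is a quick way to distinguish a site stuck in MaintenanceMode from an ordinary read-only lock, which Set-SPSite can clear.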
Now there is a very clear lesson to be learnt here, and that is the use of the -UseSqlSnapshot parameter.
More Information: SharePoint Sites Backup & -UseSqlSnapshot
As you may or may not be aware, I wrote a SharePoint farm backup script using PowerShell (available for download via Codeplex). In an earlier post, namely SharePoint sites backup & -UseSqlSnapshot I described some of the behaviour when backing up sites via the Backup-SPSite cmdlet and how it locks the site during backup if the -NoSiteLock parameter is not used. This helps prevent corruption or inconsistencies.
SharePoint aware farm backups behave in a very similar manner. The User Profile Service Application (UPA) has a lot of component parts: three databases (Sync, Social and Profile) and two services, the User Profile Synchronization Service (UPS) and the User Profile Service (UP). In an attempt to ensure consistency, SharePoint tries to 'pause' the User Profile Service Application during the backup. I am sure you don't need me to tell you how the term 'Stuck on Starting' strikes fear into the heart of even the most experienced SharePoint consultant.
The last thing you want, having successfully managed to get the User Profile Service Application working is for something to come along and de-provision/provision it at will. Well, essentially that is exactly what your farm backup does. In short, the backup process de-provisions the UPS during the backup and then once the backup is complete it attempts to re-provision the UPS potentially unravelling all your hard work.
Through experience we have learnt that during the provisioning stage the farm admin account must be a member of the local Administrators group. Additionally, if you run Central Admin on the same server as the UPS (a fairly common occurrence) you will also be required to perform an IISRESET after provisioning.
What happens then if you remove your farm admin account from the local Administrators group following the successful provisioning of your UPS? I mean, it's not that unusual; quite the reverse in fact, removing your farm admin account from local admins is a recommended practice. Doing so, however, will have an effect. As previously mentioned, the provisioning process requires the farm admin account to be a local admin, and if it isn't then re-provisioning will fail. This manifests itself in the following ways:
- User Profile Synchronizations will no longer run.
- Forefront Identity Manager Service and Forefront Identity Manager Synchronization Service will be left in a 'Disabled' state.
OK, so you don't remove the farm admin from the local admins group; what can you expect to happen? Well, it's quite likely that your UPS will successfully re-provision. Keep in mind, however, that following the provisioning you will need to ensure that an IISRESET is carried out.
In short, to mitigate the chances of your native SharePoint backup breaking your UPS, you will need to ensure the farm admin account is a local admin on the server running the UPS (I know this won't sit well with some) and you'll need to schedule an IISRESET following the backup.
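As a rough sketch of that mitigation (the backup share and the server names are placeholders), the scheduled backup step might look like:

```powershell
# Run from the SharePoint Management Shell on a farm server
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Before the backup: confirm the farm account is still in the local
# Administrators group on the UPS server (re-provisioning fails without it)
net localgroup Administrators

# Farm backup; the UPS is de-provisioned and re-provisioned as part of this
Backup-SPFarm -Directory "\\backupserver\spbackups" -BackupMethod Full

# Once the backup (and UPS re-provisioning) has completed, restart IIS
# on the server hosting Central Admin/UPS
iisreset /noforce
```

Scheduling the IISRESET immediately after the backup window keeps the post-provisioning requirement from being forgotten.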
Whilst working on a recent SharePoint project I was asked what the required SQL collation/sort order was. Typically, this is something I would address when I reach that point in the install process; nonetheless, it is an important consideration when preparing SQL Server to receive a SharePoint install. If SQL Server is not already installed, you can configure the SQL collation during the install process. For those of us that don't have total recall, here are the correct SQL collation settings for a SharePoint installation:
- CI – (Case Insensitive) A and a ARE treated as the same character.
- AS – (Accent Sensitive) a and á are NOT treated as the same character.
- KS – (Kana Sensitive) Japanese Hiragana and Katakana characters which look the same are NOT treated as the same character.
- WS – (Width Sensitive) Single-Byte and Double-Byte versions of the same character are NOT treated as the same character.
Note: The SQL Server database collation must be configured for case-insensitive, accent-sensitive, Kana-sensitive, and width-sensitive. This is to ensure file name uniqueness consistent with the Windows operating system. These settings apply to both SharePoint 2010 and SharePoint 2013
If you do not set the correct SQL collation settings at install time then you will not be able to pre-provision your SharePoint databases, as a pre-provisioned database automatically inherits the server collation stipulated during install; hence the need to get it right up front. Databases created from within SharePoint, however, are created with the correct collation settings regardless.
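Before installing SharePoint you can confirm what collation an existing SQL Server instance is using. A quick check, assuming the SQL Server PowerShell module (which provides Invoke-Sqlcmd) is available; the instance and database names below are placeholders:

```powershell
# Server-level collation; for SharePoint you expect Latin1_General_CI_AS_KS_WS
Invoke-Sqlcmd -ServerInstance "SHPSQL" `
    -Query "SELECT SERVERPROPERTY('Collation') AS ServerCollation"

# Collation of an individual (e.g. pre-provisioned) database
Invoke-Sqlcmd -ServerInstance "SHPSQL" `
    -Query "SELECT DATABASEPROPERTYEX('SP_PortalContent_DB','Collation') AS DbCollation"
```

Checking both catches the case where a database was pre-provisioned before the server collation was corrected.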
Changing the Collation Settings
If SQL Server is already installed with different collation settings then it is still possible to change the collation, but not without a degree of work. This involves rebuilding the master database to change the collation, which is achieved by performing the following steps:
- Take a full database backup of all databases (Use SQL Server Management Studio)
- Save all the security logins and other information needed to recreate the users (Username, Password and Server Roles).
- Take all databases offline.
- Insert the SQL installation CD in the CD Drive.
- Launch a command window.
- At the prompt type the following:
Setup /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=InstanceName /SQLSYSADMINACCOUNTS=accounts [/SAPWD=StrongPassword] /SQLCOLLATION=Latin1_General_CI_AS_KS_WS
- Restore all the content databases (Use SQL Server Management Studio)
- Recreate all security logins.
More Information: Set or Change the Server Collation https://msdn.microsoft.com/en-us/library/ms179254.aspx
More Information: Install SQL Server from Command Prompt https://msdn.microsoft.com/en-us/library/ms144259(v=sql.110).aspx
InstanceName is the name of the database instance. If an instance wasn't specified during the initial installation of SQL Server, then the default instance name is used. To determine the instances defined, do the following:
- Launch Regedit
- Navigate to HKEY_LOCAL_MACHINE > Software > Microsoft > Microsoft SQL Server > Instance Names > SQL
- All instance names are listed along with their values.
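The same lookup can be done from PowerShell rather than Regedit, for example:

```powershell
# List the installed SQL Server instance names recorded in the registry,
# hiding the standard PS* metadata properties for readability
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL" |
    Select-Object * -ExcludeProperty PSPath, PSParentPath, PSChildName, PSDrive, PSProvider
```

Each property name is an instance name (MSSQLSERVER for the default instance), and its value is the internal instance identifier.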
When stipulating the password for the sa user, reusing the same sa user password set during the original install is perfectly acceptable. The sa password is mandatory.
In short, ensuring the collation is set to Latin1_General_CI_AS_KS_WS when installing SQL for SharePoint will save a lot of work at a later stage. If however you are installing SharePoint using an existing SQL backend, then you really should consider placing it in its own SQL instance.
Whilst working on my SharePoint farm backup script using PowerShell I found that I was having to put in place more and more checks to ensure the script was robust enough. In fact, a single parameter used with the Backup-SPSite cmdlet, -UseSqlSnapshot, forced me to write no fewer than three methods to ensure that the use of this switch was an option, as there are a number of caveats that you must cater for to facilitate its use. Of course, if your farm is patched with the latest Service Pack/Cumulative Update and your software versions are current then these caveats are less likely to impact you.
Microsoft, and indeed someone I would always listen to when it comes to backup and restore, Sean McDonough, recommend using the -UseSqlSnapshot parameter when backing up site collections. See for yourself: The One Thing: Sean McDonough and SharePoint 2010
Great! So next time I plan to execute the Backup-SPSite cmdlet, all I need to do is remember to add the -UseSqlSnapshot parameter and we are golden, right? Well, not quite. Using this parameter is not as straightforward as you might think.
Ordinarily, when backing up sites with the Backup-SPSite cmdlet, the cmdlet will lock the site. Locking the site during backup means writes are prevented while the backup runs, so users are unable to update the site. This is the default behaviour. If you want to prevent locking sites during the backup you need to specify the -NoSiteLock parameter; of course, there is then the risk that this can cause corruption.
The Backup-SPSite cmdlet 'records' the lock state of the site and then locks the site for backup (unless the -NoSiteLock parameter has been set). Once the backup completes the site is then set back to the lockstate that was 'recorded' when the backup started. So for example, if the site was set to -NoAccess and the -NoSiteLock parameter was not set the Backup-SPSite cmdlet will lock the site, complete the backup and then return the site back to -NoAccess. I found that if there is an issue during the site backup then the site could potentially be left in a 'locked' state.
To guard against this, the Backup-SPSite cmdlet includes the option to use the -UseSqlSnapshot parameter, which allows users to continue normal use of the site collection whilst it is being backed up. The -UseSqlSnapshot parameter changes the backup behaviour and the method used to back up the site.
In short, a SQL snapshot is taken, the site is backed up from the snapshot, and the snapshot is then deleted, all with the site remaining in an unlocked state, allowing users to continue to work oblivious to the fact the site is being backed up. So why aren't we all using the -UseSqlSnapshot parameter then? Again, as I say, it's not that simple.
Use of the -UseSqlSnapshot parameter has a number of caveats. Firstly, the version of SQL Server required is SQL Server 2008 Enterprise Edition with Service Pack 1 and Cumulative Update 2 or higher. The key words here being ENTERPRISE EDITION: the -UseSqlSnapshot parameter requires the Enterprise edition of SQL Server, as database snapshots are an Enterprise feature.
Secondly, Microsoft has highlighted an issue whereby any farm without SP1 applied will incur the following error:
Backup-SPSite : Operation is not valid due to the current state of the object.
At line:1 char:14
+ Backup-SPSite <<<< http://site -Path \\yourpath
    + CategoryInfo          : NotSpecified: (:) [Backup-SPSite], InvalidOperationException
    + FullyQualifiedErrorId : System.InvalidOperationException,Microsoft.SharePoint.PowerShell.SPCmdletBackupSite
This does not mean that site backup has failed, quite the opposite, the site backup does in fact succeed. However, until the application of SP1 this error will continue to occur.
Thirdly, Microsoft also state that if the RBS provider you are using does not support snapshots, you cannot use snapshots for content deployment or backup. So if you are using the SQL FILESTREAM provider, unfortunately you will not be able to use the -UseSqlSnapshot parameter, because the SQL FILESTREAM provider does not support snapshots.
Whilst Microsoft recommend the use of the -UseSqlSnapshot parameter as the preferred option when using the Backup-SPSite cmdlet, there are three requirements that need to be met in order to allow its use. These are:
- Farm must be running SQL Server 2008 (Enterprise Edition) with Service Pack 1 and Cumulative Update 2 or higher.
- To prevent the error 'Operation is not valid due to the current state of the object.' Service Pack 1 must be applied to the farm.
- The SQL FILESTREAM provider cannot be used in conjunction with the -UseSqlSnapshot parameter as the SQL FILESTREAM provider does not support snapshots.
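Provided those three requirements are met, use of the parameter is then a single call; the site URL and backup path below are placeholders:

```powershell
# Run from the SharePoint Management Shell.
# The backup reads from a SQL database snapshot, so the live site
# stays unlocked and users can keep working throughout.
Backup-SPSite -Identity "https://site.collection.com" `
    -Path "\\backupserver\spbackups\sitecollection.bak" -UseSqlSnapshot
```

Note that -UseSqlSnapshot and -NoSiteLock address the same problem in different ways; with the snapshot approach there is no lock to record or restore, so a failed backup cannot leave the site locked.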
I was receiving a number of EventID 3760 errors in the application logs. I refer you back to an earlier blog post I wrote explaining how to solve the EventID 7888 & EventID 3760 errors in your application logs.
Event ID 3760
Source: Microsoft-SharePoint Products-SharePoint Foundation
Date: 07/03/2012 08:00:02
Event ID: 3760
Task Category: Database
SQL Database ‘SP_PortalContent_DB’ on SQL Server instance ‘SHPSQL’ not found. Additional error information from SQL Server is included below.
Cannot open database “SP_PortalContent_DB” requested by the login. The login failed.
Login failed for user ‘TESTLAB\farmadmin’.
This was a similar issue, but in this particular case the content database had gone from SQL and was no longer visible in the Manage Content Databases page of Central Admin. It was easy to assume that the previous deletion operation had been successful. This, however, was not the case.
Get a list of all Content Database
To obtain a list of all content database launch the SharePoint 2010 management shell and type the following:
Get-SPContentDatabase | Select Name, ID
This will return all your content database(s) and associated ID. See below:
As you can see in the image below, the offending database SP_PortalContent_DB is still listed and it is this 'Zombie' content database that is causing the EventID 3760.
Remove a Content Database using PowerShell.
To remove the offending content database launch the SharePoint 2010 management shell and type the following:
Remove-SPContentDatabase -Identity "8702d890-db14-40e9-b720-fe5fefee0134"
You will then be presented with two prompts, which you should accept/confirm by typing Y in both cases.
I am sure anyone using SharePoint 2010 will have encountered the SharePoint 2010: Error 7043 "Load control template file /_controltemplates/TaxonomyPicker.ascx failed" issue. There are countless blog entries available as well as a Microsoft knowledge base article KB2481844 explaining how to solve this.
Brian Lalancette has also published a small script that will correct this which can be found here: Fix for 'Load control template... TaxonomyPicker.ascx".
I, however, came across another issue which also threw SharePoint 2010 Error 7043. The entries in this case were slightly different. These were:
- Load control template file /_controltemplates/EawfDocLibTemplates.ascx failed: The resource object with key 'EawfDocLibDisplayFormOpenScopeLabel' was not found.
- Load control template file /_controltemplates/EGOrganizationItemSelector.ascx failed: The resource object with key 'ItemPicker_AddButton_Text' was not found.
- Load control template file /_controltemplates/EGOrganizationMemberSelector.ascx failed: The resource object with key 'MemberSelector_AddLeaderButton_Text' was not found.
I checked the 14 Hive and the aforementioned files were in fact present. A search returned nothing meaningful in relation to these errors. So, how did I solve these errors? Well, I re-ran the SharePoint 2010 Products Configuration Wizard and on completion these errors were gone, never to return.
Hope this helps.
I was getting a number of errors in the event logs. I had knowingly deleted these databases previously as part of a site deletion using Central Administration, which had executed and completed successfully, or so I thought.
Event ID 7888
Source : Office Sharepoint Server
Category : Office Server General
Event ID : 7888
A runtime exception was detected. Details follow.
Message: Cannot open database “DB Name” requested by the login. The login failed.
Login failed for user ‘NT AUTHORITY\SYSTEM’.
Event ID 3760
Source : Windows Sharepoint Services 3
Category : Database
Event ID : 3760
SQL Database 'DB Name' on SQL Server instance 'Server Instance' not found. Additional error information from SQL Server is included below.
Cannot open database ” requested by the login. The login failed.
Login failed for user ‘NT AUTHORITY\SYSTEM’.
Unusually, the databases had remained visible in Central Administration despite no longer being present in SQL Management Studio following the deletion. To remedy this you must remove the offending content database from within the Central Admin UI.
Remove the content database via Central Admin UI
- Launch Central Admin.
- Select Application Management.
- Select Manage Content Databases.
- Select the offending database.
- Check the Remove Content Database checkbox.
- Acknowledge the warning and press OK.
A strange occurrence, but a very straightforward solution.
There is a relatively new term being used by Microsoft Consulting Services, the Microsoft Product Line Architecture, or to give it its short name, PLA. If you haven’t heard of it by now, you certainly will in the coming months. This two part blog will present some of the things you can expect to hear about in the near future, especially if you engage directly with Microsoft.
What is the SharePoint 2013 Product Line Architecture?
Available from March 2013, the Product Line Architecture is the culmination of years of Microsoft Consulting Services project deliveries, Microsoft Premier Support expertise and feedback from the Exchange, Lync and SharePoint Product Groups, fused into a prescribed set of rules and models. The end goal being, a product deployment that ensures a predictable and high quality implementation that offers supportability, availability, capacity and workload portability.
OK, enough with the marketing speak; let's cut to the chase. By implementing your shiny new on-premise SharePoint 2013 farm as per the prescribed set of rules and models laid down, you are basically ensuring your implementation is cloud compatible. Client motivations for moving to cloud computing have changed considerably and, with one eye on the future, it's Microsoft's way of ensuring that the almost inevitable migration of your Exchange, Lync and SharePoint implementation to the cloud will be as straightforward as possible.
If nothing else, my years of working with SharePoint have taught me one thing: SharePoint farms are like fingerprints. You will rarely, if ever, find an identical SharePoint farm. The variations in guidance when architecting, planning & designing SharePoint environments and the near-infinite number of configuration options have resulted in almost limitless variations of the SharePoint farm. In SharePoint 2010 we had the Service farm and the Collaboration farm. With SharePoint 2013 and the PLA we go back to services and content in a single farm, just one of the changes implementers of the PLA can expect.
The capability of network providers to offer high-speed internet access virtually anywhere has made the cloud a viable proposition for business use as well as consumer adoption. It's no longer a case of "the cloud is the future"; the future is already here. Clearly, Microsoft had to address the multiplicity, if for no other reason than to simplify your business's eventual move to the cloud.
So the PLA is a good idea, right?
No more ambiguity. We get standardisation, commonality & greatly reduced complexity. Essentially, everybody sings from the same song sheet. The Manager in me says this is something SharePoint implementations have been crying out for, for some time. The Architect in me says, damn, they just took a sizeable slice of the fun out of my job.
How does the PLA work?
From a Microsoft perspective, they would prefer your shiny new SharePoint 2013 deployment to be a SharePoint Online deployment. If that's not feasible then a PLA deployment is the recommended alternative. Finally, if the PLA service description and design do not meet current business requirements then, where possible, Microsoft will encourage customers to leverage as much of the PLA guidance as is viable in a custom implementation. Basically, Microsoft will prescribe a service description, a set of service level objectives, a set of functional specifications, an operational plan and a test plan. You are then required to implement them as stipulated.
What features does a PLA compliant SharePoint 2013 implementation provide?
Having decided to adopt the PLA there are some features that will NOT be available. These features are highlighted below in red (correct at the time of writing).
What is the PLA Ruleset?
The PLA ruleset is a collection, or set, of rules with specific requirements for the infrastructure and supporting technologies, i.e. virtualization, topology, directory services, network, etc. In short, it is the complete description of the required infrastructure, configuration and settings to deploy a PLA compliant SharePoint farm. Rules are split into two categories: required and recommended. SharePoint 2013 implementations not adhering to the PLA ruleset are not considered PLA compliant and will not be certified as such. The idea is that a PLA compliant SharePoint 2013 implementation ensures that the farm will perform in an anticipated manner. Whilst required rules must normally be adhered to, they may still be deviated from based on specific customer requirements.
In all honesty, if you apply the SharePoint best practices and adhere to Microsoft recommendations in your current environment then the vast majority of rules will already be known to you, whilst the majority of remaining rules apply to the new features available in SharePoint 2013 for example: Rule 11: Do not use SharePoint resource management in place of a load balancer.
So there you have it: a brief introduction to the new Microsoft Product Line Architecture. Undoubtedly, the PLA provides implementers with clear guidance on how best to deploy their new SharePoint 2013 farm, something that has been lacking to some degree in previous versions. I for one welcome this. From a business perspective, the adoption of the PLA should hopefully realise financial benefits by minimising failures and downtime and reducing deployment costs and administrative overhead.
However, if I wanted to be cynical, I would say this is part of Microsoft's grand plan to make your business's transition from on-premise to cloud as smooth as possible, and to leverage this when the technology refresh discussions start and the inevitable cloud deliberations begin again.
In my next post I will elaborate on my experiences with the Microsoft Product Line Architecture and its implementation.