Thursday, December 26, 2013

Setting your SharePoint Farm in Read-Only mode

Listed below are the steps to put a SharePoint farm into Read-Only mode. You might want to do this during a cut-over to a new version of SharePoint (for example, a 2007 to 2010/2013 upgrade) or while performing maintenance on the servers and blocking user traffic. It's advisable to put only the SharePoint content databases in Read-Only mode during such a cut-over, since you might still want other services such as User Profile, Search, Managed Metadata, and Excel Services to remain operational in the farm.

1. Put all the SharePoint Servers (App, WFE, Search servers) in Maintenance Mode using SCOM/Spectrum

2. Unschedule Search crawls and User Profile Synchronization jobs - Do not stop in-progress Search crawls. Let them complete successfully and then unschedule them. You may choose to keep Search crawls unscheduled for the whole window, as no new content will be added during Read-Only mode.
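
If your farm is on SharePoint 2010, a rough PowerShell sketch for clearing crawl schedules might look like the one below. The Search Service Application name is a placeholder; on SharePoint 2007 the schedules are cleared per content source in the SSP admin UI instead.

# Rough sketch (SharePoint 2010): clear crawl schedules on idle content sources.
# "Search Service Application" is a placeholder name - adjust it for your farm.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$ssa = Get-SPEnterpriseSearchServiceApplication "Search Service Application"

foreach ($cs in Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa)
{
    if ($cs.CrawlStatus -eq "Idle")   # do not touch content sources that are still crawling
    {
        $cs.FullCrawlSchedule = $null
        $cs.IncrementalCrawlSchedule = $null
        $cs.Update()
    }
}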

3. Unschedule all Windows Task Scheduler jobs related to SharePoint. Again, do not stop jobs that are already running; let them complete successfully before unscheduling them.
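
A minimal sketch for disabling named scheduled tasks from PowerShell is shown below; the task names are placeholders for whatever SharePoint-related jobs exist on your servers.

# Minimal sketch: disable SharePoint-related Windows scheduled tasks by name.
# The task names below are placeholders - replace them with your own job names.
$taskNames = @("SharePoint Warmup", "SharePoint IIS Log Cleanup")

foreach ($taskName in $taskNames)
{
    schtasks /Change /TN "$taskName" /DISABLE
}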

4. Disable the following timer jobs. You can develop a PowerShell script to disable them programmatically on the web applications where they are enabled; in my case I did script it (a minimal sketch follows the list below).
Bulk workflow task processing
Change Log
Database Statistics
Dead Site Delete
Disk Quota Warning
Expiration policy
Hold Processing and Reporting
Immediate Alerts
Information management policy
Profile Synchronization
Quick Profile Synchronization
Records Center Processing
Recycle Bin
Scheduled Approval
Scheduled Page Review
Scheduled Unpublish
Search and Process
Shared Services Provider Synchronizing Job
Site Collection: Delete
Usage Analysis
Variations Propagate Page Job Definition
Variations Propagate Site Job Definition
Windows SharePoint Services Watson Policy Update
Workflow
Workflow Auto Cleanup
Workflow Failover
Nintex Workflow Scheduler (if you have Nintex Workflows installed)
Any third party timer jobs related to SharePoint
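
A minimal sketch of such a script, assuming the SharePoint 2010 Management Shell and a trimmed-down job list (extend the array to cover the full list above):

# Minimal sketch: disable timer jobs by display name across the farm.
# The array below is a trimmed-down example - extend it with the jobs listed above.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$jobsToDisable = @("Immediate Alerts", "Recycle Bin", "Workflow", "Workflow Auto Cleanup")

foreach ($job in Get-SPTimerJob)
{
    if (($jobsToDisable -contains $job.DisplayName) -and (-not $job.IsDisabled))
    {
        Write-Host "Disabling timer job:" $job.DisplayName
        $job.IsDisabled = $true
        $job.Update()
    }
}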

5. Stop the SharePoint 2010 services (run the Stop batch script below on each SharePoint server):
 @echo Stopping services...
NET STOP SPAdminV4
NET STOP SPTimerV4
NET STOP SPTraceV4
NET STOP SPUserCodeV4
NET STOP SPWriterV4
NET STOP SPSearch4
NET STOP OSearch14
NET STOP "IIS Admin Service"
NET STOP w3svc
NET STOP smtpsvc
@pause

6. Have the DBA put the SharePoint content databases in Read-Only mode. The content databases associated with each site collection can be identified using the following STSADM command (run it from a PowerShell prompt so that the date expression in the file name is expanded):
stsadm.exe -o enumsites -url {WebApp URL} > ".\Report\webappname-$(Get-Date -Format MM-dd-yyyy-HH-mm).xml"
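
On SharePoint 2010 you can get the same list with a quick PowerShell sketch (the web application URL is a placeholder):

# Quick sketch: list the content databases for a web application.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Get-SPContentDatabase -WebApplication "https://sharepointfix.com" |
    Select-Object Name, Server, CurrentSiteCount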

You might also have third-party components such as Nintex, DocWay, etc.; the databases belonging to these components should be put in Read-Only mode as well.

7. Start the SharePoint 2010 services (run the Start batch script):
 @echo Starting services...
NET START SPAdminV4
NET START SPTimerV4
NET START SPTraceV4
NET START SPUserCodeV4
NET START SPWriterV4
NET START SPSearch4
NET START OSearch14
NET START "IIS Admin Service"
NET START w3svc
NET START smtpsvc
@pause

8. Disable Kerberos authentication from Default zones/Custom zones

9. Take Servers out of maintenance mode using SCOM/Spectrum

10. Enable the User Profile Synchronization job and the Search crawl for user profiles (sps3://mysites).
This ensures that users added during the Read-Only period are still available in the profile store and can be found, for example by a custom People Search that queries the User Profile store directly.
To ensure newly added users can reach your public-facing sites, grant All Authenticated Users permissions on those intranet/internet-facing sites.

11. Smoke Test (ad-hoc testing of Read-Only scenarios)

At the end, you can capture all of these steps as test cases in your Quality Center site and run them there, making this a standardized process.

The next part will cover the rollback steps and the PowerShell script to disable/enable timer jobs.

Friday, July 19, 2013

Powershell script to print All User Profiles properties/values - SharePoint User Profile Store

This script prints all User Profile properties/values available inside SharePoint User Profile store.

1.  Loops through all site collections to save usernames to a hash table
2.  Loops through the user profiles and outputs the profile values to a log file

Script Usage: .\GatherMySiteProfileInfo.ps1 -farm [dev|test|prod] -log

Copy the PowerShell script below and replace the placeholder values (SSP names and My Site URLs) with the values for your environment:
-----------------------------------------------------------------------------------------------
param($farm, $log)

if($farm -eq $null)
{
    "`nfarm parameter missing, SYNTAX: .\GatherMySiteProfileInfo.ps1 -farm [dev|test|prod] -log `n"; exit
}

switch($farm)
{
    "prod" {$SSPName = "Enter Prod SSPName"
            $MySiteURL = "https://mysite.sharepointfix.com/"}
    "test" {$SSPName = "Enter Test SSPName"
            $MySiteURL = "https://mysite-test.sharepointfix.com/"}
    "dev"  {$SSPName = "Enter Dev SSPName"
            $MySiteURL = "https://mysite-dev.sharepointfix.com/"}
    default {"`nfarm incorrect, SYNTAX: .\GatherMySiteProfileInfo.ps1 -farm [dev|test|prod] -log `n"; exit}
}

#Load the SharePoint assemblies
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server")| out-null
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server.UserProfiles")| out-null
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")| out-null

# Setup the UserProfileManager object
try
{
    $ServerContext = [Microsoft.Office.Server.ServerContext]::GetContext($SSPName)
    $UPManager = new-object Microsoft.Office.Server.UserProfiles.UserProfileManager($ServerContext);
}
catch
{
    Write-Host "Can't access User Profile Manager"
    exit;
}

$logName = ".\" + "AllUserProfiles" + "-" + $(Get-Date -Format MM-dd-yyyy-HH-mm) + ".csv"

# Get an enumerator and loop through all the profiles
$enumProfiles = $UPManager.GetEnumerator()

$i = 0
$per = 0
$profileCount = $UPManager.Count;

foreach ($up in $enumProfiles)
{
  #Get all User Profile Property values. You can define your own properties as well
  [String]$userName = $up.Item("Accountname") #example: amjsaito, Needs to be a string so we can parse
  [String]$employeeType = $up.Item("employeetype")
  [String]$orgStatus = $up.Item("organizationalStatus")
  [String]$mgmtCtr = $up.Item("MgtCenterName")
  [String]$costCenName = $up.Item("CostCenterName")
  [String]$costCenNum = $up.Item("CostCenterNumber")
  [String]$building = $up.Item("Building")
  [String]$locationCode = $up.Item("LocationCode")
  [String]$aboutme = $up.Item("AboutMe")
  [String]$resp = $up.Item("SPS-Responsibility")
  [String]$skills = $up.Item("SPS-Skills")
  [String]$projects = $up.Item("SPS-PastProjects")
  [String]$memberships = $up.Item("ProfessionalMemberships")
  [String]$schools = $up.Item("SPS-School")
  [String]$interests = $up.Item("SPS-Interests")
  [String]$pictureURL = $up.Item("PictureURL")
  [String]$myFunction = $up.Item("MyFunction")
  [bool]$hasMySite = $false

  #strip off the domain name
  $URLBreak = $userName.LastIndexOf('\');
  $user = $userName.SubString($URLBreak + 1);

  Write-Output "`"$username`",`"$employeeType`",`"$orgStatus`",`"$mgmtCtr`",`"$costCenName`",`"$costCenNum`",`"$building`",`"$locationCode`",`"$hasMySite`", `"$aboutme`",`"$resp`",`"$skills`",`"$projects`",`"$memberships`",`"$schools`",`"$interests`",`"$pictureURL`",`"$myFunction`"" | Out-File $logname -append

  $i++

  #if ($i -lt $profileCount) {
    $per = ($i/$profileCount) * 100;
  #}

   $perDisplay = "{0:N2}" -f $per
   Write-Progress -Activity "Print All User Profiles" -PercentComplete $per -CurrentOperation "$perDisplay% complete" -Status "Looping Through User Profiles"
 }

Saturday, March 2, 2013

Powershell script to report all SharePoint Farm Feature Definitions in a Grid View

The PowerShell script below generates a Grid View report on all SharePoint Features installed within a Farm.

Copy the script below and paste it into a .ps1 file.
==================================================================

Add-PsSnapin Microsoft.SharePoint.PowerShell

## SharePoint DLL
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Administration")

## Get a reference to the local SharePoint farm
$farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
$farm.FeatureDefinitions | Select ID, DisplayName, RootDirectory | Out-GridView

Remove-PsSnapin Microsoft.SharePoint.PowerShell
==================================================================

Alternatively, you can generate a CSV report based on the above script.
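
A minimal sketch of that CSV variant might look like this (the output path .\FarmFeatureDefinitions.csv is an assumption):

Add-PsSnapin Microsoft.SharePoint.PowerShell

## Get a reference to the local farm and export its feature definitions to CSV
$farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
$farm.FeatureDefinitions |
    Select-Object ID, DisplayName, RootDirectory |
    Export-Csv -Path ".\FarmFeatureDefinitions.csv" -NoTypeInformation

Remove-PsSnapin Microsoft.SharePoint.PowerShell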

Wednesday, February 27, 2013

Tuning Indexers, Crawlers & Query servers in SharePoint 2007 & 2010 to achieve Redundancy, Fault Tolerance and Maximize Search Performance

Here are some key concepts for indexers, crawlers, and query servers in SharePoint 2007 and SharePoint 2010, and how to use them to achieve redundancy, fault tolerance, and maximum search performance.

SharePoint 2007 Index & Query Servers: 
Only one dedicated Index server can be configured per Shared Services Provider (SSP) associated with a SharePoint web application. Index servers therefore cannot be made redundant, but they can be scaled out per SSP. The Index server's role is to build and store the index.

The query role does not have to be on your Index server. It is good to have the web front ends (WFEs) play the role of query servers so that searches are fast (each WFE queries itself locally) and you gain some redundancy, since Index servers cannot be made redundant. This tells the Index server to propagate its index to the WFEs that are set as query servers, so each has a local copy of the index. When someone performs a search (which happens on a WFE), that WFE searches its local copy instead of going across the network to query the Index server. This speeds up queries, but it introduces overhead: multiple full copies of the index on the network, and the network load of continually propagating those copies.

If the Index server goes down for some reason, the WFEs still have a local copy of the index and searches continue to work with current content - the copies just don't get refreshed until the Index server comes back online.

The crawl server (or servers) is the WFE that the indexer uses for crawling content. You can choose to make your Index server a WFE that isn't part of your load balancing and set it as the dedicated crawl target. This allows the indexer to crawl itself, which does two things: it avoids the network traffic of building the index across the network and it eliminates the crawling load on the content WFEs. Since your Index server becomes an out-of-rotation WFE for regular browsing, you can also use it to host your Central Administration and SSP web applications, which further reduces load on the content WFEs.

But if you put the query role on the Index server, queries have to go all the way from the WFE to the Index server and back, which can cause a performance hit, and the query role has to compete with the very intensive indexing process on the same box.

Reference: Above are excerpts from Social Technet Forum: http://social.technet.microsoft.com/Forums/en-US/sharepointadmin/thread/f775c95d-4bec-450d-a56c-5114a0f52c0a

SharePoint 2010 Enhancements:
The search architecture in SharePoint 2010 is flexible: you can configure multiple crawl components, indexers, and query components.

Crawl Component – Commonly referred to as the crawler or indexer, the crawl component is hosted on an Index server and its primary responsibility is to build indexes. Unlike previous versions of SharePoint, the crawl component is stateless: the index it creates is not stored in the crawl component itself but is propagated to the appropriate query server. The crawl component runs within the MSSearch.exe process, which is the “SharePoint Server Search 14” Windows service.

Crawl Database – As noted above, the crawl component itself is stateless. State is managed in the crawl database, which tracks what needs to be crawled and what has already been crawled. When a crawl component is provisioned, it must be mapped to a SQL crawl database. Both can be created using either Central Administration or PowerShell.

A crawl component can map to only one SQL crawl database, but multiple crawl components can map to the same crawl database. Having multiple crawl components mapped to the same crawl database provides fault tolerance: if the Index server hosting crawl component 1 crashes, crawl component 2 picks up the additional load while component 1 is down. Performance also improves, because you effectively have two indexers crawling the content instead of one. If you're not satisfied with crawl times, simply add an additional crawl component mapped to the same crawl DB; the load is distributed across both Index servers.
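
As a rough sketch (SharePoint 2010 Search cmdlets; the application, server, and database names are placeholders), adding a crawl database and a second crawl component might look like this:

# Rough sketch: add a crawl database and a second crawl component (names are placeholders).
$ssa      = Get-SPEnterpriseSearchServiceApplication "Search Service Application"
$instance = Get-SPEnterpriseSearchServiceInstance -Identity "IndexServer2"

# New crawl database mapped to the Search Service Application
$crawlDb = New-SPEnterpriseSearchCrawlDatabase -SearchApplication $ssa -DatabaseName "Search_CrawlDB2"

# Build a new crawl topology containing the extra crawl component, then activate it.
# (All crawl components you want to keep must be added to the new topology before activation.)
$topology = New-SPEnterpriseSearchCrawlTopology -SearchApplication $ssa
New-SPEnterpriseSearchCrawlComponent -SearchApplication $ssa -CrawlTopology $topology `
    -CrawlDatabase $crawlDb -SearchServiceInstance $instance
Set-SPEnterpriseSearchCrawlTopology -Identity $topology -Active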

Indexers – An indexer is a server hosting one or more crawl components associated with a crawl database, responsible for crawling the hosts or content sources associated with the Search Service Application. When multiple crawl databases exist, an attempt is made to distribute those host entries or content sources evenly. The index is no longer a single point of failure: it is stored on the query servers, and each query component holds either the entire index or a partition of it.

Query Component – This is the component that performs a search against the index created by the crawl component; it is also commonly referred to as the query server. A query server is a server that runs one or more query components and holds a full or partial copy of the search index. Query servers are now the sole owners of the index stored on the file system. As stated above, the indexer crawls content, builds a temporary index, and propagates portions of that temporary index to the query servers. Each query server contains a copy of the entire index or a portion of it, referred to as an index partition.

In previous builds of SharePoint, every query server stored the entire index. While this achieved fault tolerance, it did not help performance: there is a direct correlation between the size of an index and query latency, and the size of an index can easily become a bottleneck for query performance.

Index Partition – A new feature of SharePoint 2010, directly correlated to the query component. Indexes can now be broken into multiple partitions to reduce the time it takes a query component to perform a search. Every query component is associated with a single index partition that it queries; put another way, every time a query component is created, another index partition is created. By creating additional query components, you create new index partitions, each owning a portion of the index.

By partitioning large indexes, query times are reduced and this type of bottleneck can be solved. Partitioning an index is as simple as provisioning new query components from the Search Application Topology section in Central Administration. The crawler distributes crawled content evenly across index partitions using a hash algorithm based on document IDs.

Index Partition Mirror – There is a new capability to create mirrors of the index partitions, which again provides fault tolerance. It is highly recommended to make your index fault tolerant; this is accomplished by mirroring a query component onto a different server. Under the Search Application Topology, simply select the query component and add a mirror.

Property Database – Stores metadata and security information for the items in the index. The property database is associated with one or more query components and is used as part of the query process. Its properties are populated during the crawl process that creates the index.

Just like query components, the Property Store DB can be scaled out to share the load of the metadata it stores. If the Property Store DB becomes a bottleneck because of the size of the database and/or high I/O latency on the disk subsystem, a new Property Store DB can be provisioned to share the load. Just like the crawl DB, a Property Store DB is useless unless it is mapped to something; in this case it must be mapped to a query component. If a decision is made to provision an additional Property Store DB to boost performance, an additional non-mirrored query component must be provisioned and mapped to it.


Query Processor – Scaling out the Property Store DB and query components is only half of the battle. The query processor remains and still plays a vital role in Search 2010. It is responsible for processing a query and runs under the w3wp.exe process. It retrieves results from the Property Store DB and the index/query components; once results are retrieved, they are packaged, security trimmed, and delivered back to the requester, which is the WFE that initiated the request. The query processor will load balance requests if more than one (mirrored) query component exists within the same index partition, the exception being when one of the query components is marked as failover only.

Just like the Query Component and Property Store DB, the Query Processor role can be scaled out to multiple servers.

Wednesday, January 16, 2013

Powershell script to get SharePoint Workflow History List Items Count

The PowerShell script below generates a CSV report counting the Workflow History list items for all site collections and sub-sites within a web application. It also displays a progress bar as it loops through the sites and sub-sites, giving you real-time processing information.

Copy the PowerShell script below, paste it into Notepad, and save it with a .ps1 extension on one of your local drives.
=====================================================================

param
(
   $url
)

Add-PSSnapin Microsoft.SharePoint.PowerShell -ea SilentlyContinue

## SharePoint DLL
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Administration")

if(![string]::IsNullOrEmpty($url))
{
        try
        {
        $webAppURI = New-Object Uri($url)
        $spwebapp = [Microsoft.SharePoint.Administration.SPWebApplication]::Lookup($webAppURI)
        }
        catch [System.Exception]
        {
        Write-Warning "Web Application could not be found on the server. Check Webapplication URL before executing this script."
        exit;
        }
       
        $spbasetypegenericlist=[Microsoft.SharePoint.SPBaseType]::GenericList

        $output = @()
        $heading = "Site Collection URL; Site URL; Site Name; List Title; List URL; List Item Count"
        $filename = ".\" + $spwebapp.Name + "-" + $(Get-Date -Format MM-dd-yyyy-HH-mm) + ".csv"
       
        #Write CSV file Headers
        Out-File -FilePath $filename -InputObject $heading;
       
         #Log file name
        $logName = ".\" + $spwebapp.Name + "-" + $(Get-Date -Format MM-dd-yyyy-HH-mm) + ".log"
        #Write-Output "Site Collection URL; Site URL; Site Name; List Title; List URL; List Item Count" | Out-File $logname
     
        foreach ($spsite in $spwebapp.Sites)
        {
            try
             {
                #Progress Bar
                $sitesProcessed = 0;
                $siteMax = $spsite.AllWebs.Count;

                foreach ($spweb in $spsite.AllWebs)
                {
                    #Display Progress Bar on Site Completion
                    $sitesProcessed++;
                    $percent = ($sitesProcessed/$siteMax) * 100
                    Write-Progress -Activity "Looping through Sites" -PercentComplete $percent -CurrentOperation "$sitesProcessed / $siteMax" -Status $spweb.Title

                    try
                    {
                        $spgenericlists = $spweb.getlistsoftype($spbasetypegenericlist)

                        if ($spgenericlists -ne $null)
                        {
                            foreach ($list in $spgenericlists)
                            {
                                if ($list -ne $null)
                                {
                                    if ($list.basetemplate -eq "WorkflowHistory")
                                    {
                                        $output += $($spsite.Url + ";" + $spweb.Url + ";" + $spweb.Title + ";" + $list.Title + ";" + $list.DefaultViewUrl + ";" + $list.ItemCount)
                                    }
                                }
                            }
                        }
                    }
                    catch
                    {
                        Write-Host "Exception thrown at" $spsite.Url $spweb.Url $list.Title
                        Write-Error ("Exception thrown at:" + $_)

                        #Write Exception to Log file
                        Write-Output "Exception thrown at" $spweb.Url $list.Title $list.DefaultViewUrl $_ | Out-File $logname -append
                    }

                    $spweb.Dispose()
                }
              }
              catch [System.Exception]
              {
                  Write-Host "Exception at" $spsite.Url $spweb.Url $list.Title
                  Write-Warning ("Exception thrown at: " + $_)
                 
                  #Write Exception to Log file
                  Write-Output "Exception thrown at" $spweb.Url $list.Title $list.DefaultViewUrl $_ | Out-File $logname -append
              }
              finally
              {
                  $spsite.Dispose()
              }
        }
       
        if($output -ne $null)
        {
            $output | Out-File $filename -Append
            Write-Host "Output file has been created successfully."
           
            #Write-Output $output | Out-File $logname -append
        }
        else
        {
            Write-Warning "Error creating the CSV file."
           
            #Write Exception to Log file
            Write-Output "Error creating the CSV file" | Out-File $logname -append
            exit
        }
}
else
{
    Write-Warning "Web Application URL parameter cannot be blank."
    Write-Warning("Use Syntax: .\GetAllWFHistoryItemCount.ps1 -url <Your Web App URL>")
exit
}

Write-Host "Finished"

==============================================================================

To automate the above .ps1 script as a batch utility, copy and paste the code below into a file with a .bat extension, change the script file name and the web application URL to your own values, then save and run the batch file.

cd /d %~dp0
powershell -noexit -file ".\GetWorkflowHistoryListItem.ps1" -url "https://sharepointfix.com" "%CD%"
pause

Run the batch file and import the generated CSV file into an Excel sheet. Delimit the columns with a ";" and then check the count of Workflow History list items for each of your site collections and the sub-sites within them.