Saturday, March 2, 2013

PowerShell script to report all SharePoint Farm Feature Definitions in a Grid View

The PowerShell script below generates a grid view report of all SharePoint feature definitions installed within a farm.

Copy the script below and paste it into a .ps1 file.
==================================================================

Add-PSSnapin Microsoft.SharePoint.PowerShell

## Load the SharePoint assembly (the Administration namespace lives in Microsoft.SharePoint.dll)
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

## Get a reference to the local farm and report all installed feature definitions
$farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
$farm.FeatureDefinitions | Select-Object Id, DisplayName, RootDirectory | Out-GridView

Remove-PSSnapin Microsoft.SharePoint.PowerShell
==================================================================

Alternatively, you can generate a CSV report based on the same script.
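For example, here is a minimal sketch of that variation (the output path is a placeholder; adjust it to your environment):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

## Export all farm feature definitions to a CSV file instead of a grid view
$farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
$farm.FeatureDefinitions | Select-Object Id, DisplayName, RootDirectory | Export-Csv -Path "C:\Reports\FarmFeatureDefinitions.csv" -NoTypeInformation

Remove-PSSnapin Microsoft.SharePoint.PowerShell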

Wednesday, February 27, 2013

Tuning Indexers, Crawlers & Query Servers in SharePoint 2007 & 2010 to achieve Redundancy and Fault Tolerance and to Maximize Search Performance

Here are some key concepts for Index, Crawl, and Query servers in SharePoint 2007 and SharePoint 2010, and how to configure them to achieve redundancy, fault tolerance, and maximum search performance.

SharePoint 2007 Index & Query Servers: 
Only one dedicated Index server can be configured per Shared Services Provider (SSP) associated with a SharePoint Web Application. Index servers therefore cannot be made redundant, although you can scale them out per SSP. The Index server's role is to build and store the index.

The query role does not have to be on your index server. It is good practice to have the Web Front Ends (WFEs) play the role of query servers, both so that searches are fast (each WFE queries itself locally) and for some redundancy, since index servers cannot be made redundant. This configuration tells the index server to propagate its index to the WFEs that are set as query servers, so that each of them has a local copy of the index. When someone performs a search (which is handled on a WFE), that WFE queries itself locally instead of going across the network to the index server. This speeds up queries, but it also introduces overhead: multiple full copies of the index exist on the network, and propagating those copies consumes bandwidth.

If the index server goes down for some reason, the WFEs still have a local copy of the index and can serve searches against the current content; the copy simply is not refreshed until the index server comes back online.

The crawl server (or servers) is the WFE that the indexer uses for crawling content. You can make your index server a WFE that is not part of your load balancing and then set it as the dedicated crawler. This allows the indexer to crawl itself, which does two things: it avoids the network traffic of building the index across the network, and it eliminates the crawling load on the content WFEs. Since your index server becomes an out-of-rotation WFE for regular browsing, you can also use it to host your Central Administration and SSP web applications, which further reduces load on the content WFEs.

However, if you put the Query role on the Index server, queries have to travel from the WFE to the Index server and back, which can cause a performance hit. Acting as a query server also competes with the very intensive indexing process when both run on the same box.

Reference: the above are excerpts from this Social TechNet forum thread: http://social.technet.microsoft.com/Forums/en-US/sharepointadmin/thread/f775c95d-4bec-450d-a56c-5114a0f52c0a

SharePoint 2010 Enhancements:
The search architecture in SharePoint 2010 is flexible: you can configure multiple crawl components, indexers, and query components.

Crawl Component – Commonly referred to as the crawler or indexer, the crawl component is hosted on an Index server and its primary responsibility is to build the index. Unlike in previous versions of SharePoint, the crawl component is stateless, meaning the index it creates is not actually stored in the crawl component; instead, the index is propagated to the appropriate query server. The crawl component runs within the MSSearch.exe process, which is the "SharePoint Server Search 14" Windows service.

Crawl Database – As noted above, the crawl component itself is stateless. State is managed in the crawl database, which tracks what needs to be crawled and what has been crawled. When a crawl component is provisioned, it requires a mapping to a SQL crawl database. Both can be created using either Central Administration or PowerShell.

A crawl component can map to only one SQL crawl database, but multiple crawl components can map to the same crawl database. Having multiple crawl components mapped to the same crawl database achieves fault tolerance: if the Index server hosting crawl component 1 crashes, crawl component 2 picks up the additional load while component 1 is down. Performance also improves in this setup because you effectively have two indexers crawling the content instead of one. If you are not satisfied with crawl times, simply add an additional crawl component mapped to the same crawl database, as sketched below; the load is then distributed across both index servers.
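As a rough sketch of that scale-out using the SharePoint 2010 Search cmdlets (the server name "IndexServer2" is a placeholder, and the new topology must contain every crawl component you want before it is activated; verify the parameters in your environment):

$ssa = Get-SPEnterpriseSearchServiceApplication
## A new crawl topology is needed because the active one cannot be modified directly
$newTopology = New-SPEnterpriseSearchCrawlTopology -SearchApplication $ssa
## Reuse the existing crawl database so both crawl components share crawl state
$crawlDb = ([array](Get-SPEnterpriseSearchCrawlDatabase -SearchApplication $ssa))[0]
## The search service instance on the second index server (placeholder name)
$ssi = Get-SPEnterpriseSearchServiceInstance -Identity "IndexServer2"
New-SPEnterpriseSearchCrawlComponent -CrawlTopology $newTopology -CrawlDatabase $crawlDb -SearchServiceInstance $ssi
## Activate the new topology once all crawl components have been added
$newTopology | Set-SPEnterpriseSearchCrawlTopology -Active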

Indexers – An indexer is a server hosting one or more crawl components associated with a crawl database; it is responsible for crawling the hosts or content sources associated with the Search Service Application. When multiple crawl databases exist, an attempt is made to distribute these host entries or content sources evenly. The index is no longer a single point of failure and is stored on the query servers: each query component holds the entire index or a partition of it.

Query Component – This is the component that performs searches against the index created by the crawl component; it is also commonly referred to as the query server. A query server is a server that runs one or more query components, and these servers hold a full or partial copy of the search index. Query servers are now the sole owners of storing the index on the file system. As stated above, the indexer crawls content, builds a temporary index, and propagates portions of that temporary index to the query servers. Each query server contains a copy of the entire index or a part of it, referred to as an index partition.

In previous builds of SharePoint, every query server stored the entire index. While this achieved fault tolerance, it did not help performance: there is a direct correlation between the size of an index and query latency, so the size of an index can easily become a bottleneck for query performance.

Index Partition – A new feature of SharePoint 2010, directly correlated to the query component. We now have the ability to break the index into multiple partitions to reduce the time it takes a query component to perform a search. For every query component there is a single index partition that the component queries; put another way, every time a query component is created, another index partition is created that owns a portion of the index.

By partitioning large indexes, query times are reduced and this type of bottleneck can be resolved. Partitioning an index is as simple as provisioning new query components from the Search Application Topology section in Central Administration, or with PowerShell as sketched below. The crawler distributes crawled content evenly across index partitions using a hash algorithm based on document IDs.
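As a minimal sketch with the SharePoint 2010 Search cmdlets (server names are placeholders; a new query topology must be fully built out, including property database mappings, before activation), provisioning two query components across two index partitions might look like this:

$ssa = Get-SPEnterpriseSearchServiceApplication
## Create a new query topology with two index partitions (the active topology cannot be edited)
$qt = New-SPEnterpriseSearchQueryTopology -SearchApplication $ssa -Partitions 2
$partitions = @(Get-SPEnterpriseSearchIndexPartition -QueryTopology $qt)
## One query component per partition, each hosted on its own query server (placeholder names)
$ssi1 = Get-SPEnterpriseSearchServiceInstance -Identity "QueryServer1"
$ssi2 = Get-SPEnterpriseSearchServiceInstance -Identity "QueryServer2"
New-SPEnterpriseSearchQueryComponent -QueryTopology $qt -IndexPartition $partitions[0] -SearchServiceInstance $ssi1
New-SPEnterpriseSearchQueryComponent -QueryTopology $qt -IndexPartition $partitions[1] -SearchServiceInstance $ssi2
## Activate the new topology once all components and database mappings are in place
$qt | Set-SPEnterpriseSearchQueryTopology -Active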

Index Partition Mirror – There is also a new capability to create mirrors of the index partitions, which again provides fault tolerance; making your index fault tolerant is highly recommended. This is accomplished by mirroring a query component onto a different server. Under the Search Application Topology, you can simply select the query component and choose Add Mirror; the PowerShell equivalent is sketched below.
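Continuing from the $qt and $partitions variables in the sketch above, this adds a second query component to the same index partition on a different server (placeholder name), marked as failover-only:

## Mirror index partition 0 on another query server, used for failover only
$ssiMirror = Get-SPEnterpriseSearchServiceInstance -Identity "QueryServer3"
New-SPEnterpriseSearchQueryComponent -QueryTopology $qt -IndexPartition $partitions[0] -SearchServiceInstance $ssiMirror -FailoverOnly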

Property Database – Stores metadata and security information for items in the index. The property database is associated with one or more query components and is used as part of the query process. These properties are populated during the crawling process that creates the index.

Just like query components, the Property Store DB can be scaled out to share the load of the metadata it stores. If the Property Store DB becomes a bottleneck because of the size of the database and/or high I/O latency in the back-end disk subsystem, a new Property Store DB can be provisioned to share the load. Just like the crawl DB, the Property Store DB is useless unless it is mapped to something; in this case, it must be mapped to a query component. If a decision is made to provision an additional Property Store DB to boost performance, an additional non-mirrored query component must be provisioned and mapped to it.
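For illustration, provisioning the additional property database itself is a one-liner with the SP2010 cmdlets (the database name is a placeholder):

$ssa = Get-SPEnterpriseSearchServiceApplication
## Provision a second property database to share the metadata load
New-SPEnterpriseSearchPropertyDatabase -SearchApplication $ssa -DatabaseName "SearchService_PropertyStoreDB2"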


Query Processor – Scaling out the Property Store DB and query components is only half of the battle; the query processor remains and still plays a vital role in Search 2010. The query processor is responsible for processing a query and runs under the w3wp.exe process. It retrieves results from the Property Store DB and the index/query components; once results are retrieved, they are packaged, security-trimmed, and delivered back to the requester, i.e. the WFE that initiated the request. The query processor load-balances requests if more than one (mirrored) query component exists within the same index partition; the exception to this rule is when one of the query components is marked as failover-only.

Just like the Query Component and Property Store DB, the Query Processor role can be scaled out to multiple servers.

Wednesday, January 16, 2013

PowerShell script to get SharePoint Workflow History List Items Count

The PowerShell script below generates a CSV report counting the SharePoint Workflow History list items for all site collections and sub-sites within a web application. It also displays a progress bar as it loops through multiple sites/sub-sites and gives you real-time processing information.

Copy the PowerShell script below, paste it into Notepad, and save it with a .ps1 extension on one of your local drives.
=====================================================================

param
(
   $url
)

Add-PSSnapin Microsoft.SharePoint.PowerShell -ea SilentlyContinue

## SharePoint DLL (the Administration namespace lives in Microsoft.SharePoint.dll)
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

if(![string]::IsNullOrEmpty($url))
{
        try
        {
            $webAppURI = New-Object Uri($url)
            $spwebapp = [Microsoft.SharePoint.Administration.SPWebApplication]::Lookup($webAppURI)
        }
        catch [System.Exception]
        {
            Write-Warning "Web application could not be found on the server. Check the web application URL before executing this script."
            exit;
        }
       
        $spbasetypegenericlist=[Microsoft.SharePoint.SPBaseType]::GenericList

        $output = @()
        $heading = "Site Collection URL; Site URL; Site Name; List Title; List URL; List Item Count"
        $filename = "." + $spwebapp.Name + "-" + $(Get-Date -Format MM-dd-yyyy-HH-mm) + ".csv"
       
        #Write CSV file Headers
        Out-File -FilePath $filename -InputObject $heading;
       
         #Log file name
        $logName = "." + $spwebapp.Name + "-" + $(Get-Date -Format MM-dd-yyyy-HH-mm) + ".log"
        #Write-Output "Site Collection URL; Site URL; Site Name; List Title; List URL; List Item Count" | Out-File $logname
     
        foreach ($spsite in $spwebapp.Sites)
        {
            try
            {
                #Progress bar counters
                $sitesProcessed = 0;
                $siteMax = $spsite.AllWebs.Count;

                foreach ($spweb in $spsite.AllWebs)
                {
                    #Display progress bar on site completion
                    $sitesProcessed++;
                    $percent = ($sitesProcessed/$siteMax) * 100
                    Write-Progress -Activity "Looping through Sites" -PercentComplete $percent -CurrentOperation "$sitesProcessed / $siteMax" -Status "$($spweb.Title)"

                    try
                    {
                        $spgenericlists = $spweb.GetListsOfType($spbasetypegenericlist)

                        if ($spgenericlists -ne $null)
                        {
                            foreach ($list in $spgenericlists)
                            {
                                if ($list -ne $null)
                                {
                                    #Workflow History lists use the WorkflowHistory base template
                                    if ($list.BaseTemplate -eq "WorkflowHistory")
                                    {
                                        $output += $($spsite.Url + ";" + $spweb.Url + ";" + $spweb.Title + ";" + $list.Title + ";" + $list.DefaultViewUrl + ";" + $list.ItemCount)
                                    }
                                }
                            }
                        }
                    }
                    catch
                    {
                        Write-Host "Exception thrown at" $spsite.Url $spweb.Url $list.Title
                        Write-Error ("Exception thrown at:" + $_)

                        #Write exception to log file
                        Write-Output "Exception thrown at" $spweb.Url $list.Title $list.DefaultViewUrl $_ | Out-File $logName -Append
                    }

                    $spweb.Dispose()
                }
            }
            catch [System.Exception]
            {
                Write-Host "Exception at" $spsite.Url $spweb.Url $list.Title
                Write-Warning ("Exception thrown at: " + $_)

                #Write exception to log file
                Write-Output "Exception thrown at" $spweb.Url $list.Title $list.DefaultViewUrl $_ | Out-File $logName -Append
            }
            finally
            {
                $spsite.Dispose()
            }
        }
       
        if($output -ne $null)
        {
            $output | Out-File $filename -Append
            Write-Host "Output file has been created successfully."
           
            #Write-Output $output | Out-File $logname -append
        }
        else
        {
            Write-Warning "No Workflow History list items were found; the CSV file contains no data rows."

            #Write to log file
            Write-Output "No Workflow History list items were found" | Out-File $logName -Append
            exit
        }
}
else
{
    Write-Warning "Web Application URL parameter cannot be blank."
    Write-Warning("Use Syntax: .GetAllWFHistoryItemCount.ps1 -url <Your Web App URL>")
exit
}

Write-Host "Finished"

==============================================================================

To automate the above .ps1 script as a batch utility, copy the code below and save it with a .bat file extension. Change the script file name and the web application URL to match your environment, then save and run the batch file.

cd /d %~dp0
powershell -noexit -file ".\GetWorkflowHistoryListItem.ps1" -url "https://sharepointfix.com"
pause

Run the batch file and import the generated CSV file into an Excel sheet. Delimit the columns with a ";" (the separator the script writes) and then check the count of Workflow History list items for each of your site collections and sub-sites.
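Alternatively, you can inspect the report directly in PowerShell without Excel (a minimal sketch; substitute the actual file name the script generated):

## Load the semicolon-delimited report and show it in a grid view
Import-Csv -Path ".\YourWebAppName-03-02-2013-10-30.csv" -Delimiter ";" | Out-GridView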

Monday, October 29, 2012

Introduction to SharePoint 2013 App Model - A Primer

SharePoint 2013 introduces the new App Model, which adds another dimension to the kinds of solutions you can build on the SharePoint technology platform, in addition to full-trust solutions and sandboxed solutions.

Let's take a deep dive into the App Model and understand its fundamental building blocks.

I. SharePoint 2013 App Model Highlights:
  1. SharePoint applications no longer live in SharePoint
  2. Custom code executes in the client, cloud or on-prem
  3. Apps are granted permissions to SharePoint via OAuth
  4. Apps communicate with SharePoint via REST / CSOM
  5. Acquire apps via centralized Marketplace, Corporate Marketplace, Public Marketplace (via submission process)
  6. APIs for manual deployment
  7. Everything in a SharePoint site is an app: Contact form, Travel request, Shared Documents library, Contacts list
  8. Apps for SharePoint mimic Facebook apps to an extent.
II. SharePoint 2013 App Model Benefits:
  1. No custom code on the SharePoint server
  2. Easier to upgrade to future versions of SharePoint
  3. Works in hosted environments w/o limitations
  4. Reduces the ramp-up time for those building apps
  5. Don’t need to know/be as familiar with SharePoint “-isms”
  6. Leverage hosting platform features in new apps
  7. Enables taking SharePoint apps to different levels – further than what can be done with farm / sandbox solutions
  8. Isolation – private vs. public clouds
III. SharePoint 2013 Application Architecture: The key components of the SP 2013 application architecture are described below.

REST / CSOM - The programmatic approaches available for accessing SP 2013 data from apps.
Remote Event Receivers - To handle events in an app for SharePoint remotely, you create remote event receivers and app event receivers.
BCS - Apps can perform CRUD operations on external data stores using OData, by leveraging External Content Types and External Lists.
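As a quick illustration of the REST option, here is a minimal sketch assuming an on-premises SharePoint 2013 site with Windows authentication (the site URL is a placeholder):

## Query the SharePoint 2013 REST API for all lists in a site
$siteUrl = "http://yourserver/sites/dev"
Invoke-RestMethod -Uri "$siteUrl/_api/web/lists" -UseDefaultCredentials -Headers @{ "Accept" = "application/json;odata=verbose" }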

IV. SharePoint 2013 App URL: Each app web is provisioned on its own unique URL of the form http://[app prefix]-[app hash].[apps domain]/[site path]/[app name], which keeps every app isolated in a separate domain.
V. SharePoint 2013 Application Comparison Chart: Let's see what programming options are available when creating apps for SharePoint.
VI. Different kinds of Apps for SharePoint 2013: Here are the three architectural approaches available for creating SharePoint 2013 apps.
 
 1. SharePoint-Hosted App:
  •     SharePoint-hosted apps reside wholly in SharePoint
  •     Use SharePoint artifacts (lists/libraries)
  •     Business logic executes on the client
  •     HTML5
  •     JavaScript using CSOM or REST APIs
 2. Cloud based Apps:
  •     Cloud-hosted apps primarily execute outside of SharePoint
  •     May use SharePoint artifacts (lists/libraries)
  •     Communicate via CSOM / REST
  •     Granted permission to SharePoint via OAuth
  •     Business logic lives & executes outside of SharePoint
  •     On-premises hosted web application
  •     Windows Azure
  •     3rd party host
  •     Managed CSOM (Client Side Object Model) can be adopted as a programming model for both these kinds of apps.
  •     Within cloud-based apps, there is a further bifurcation between:
  •     Provider-Hosted Apps - apps developed/maintained on premises or in a private cloud.
  •     Auto-Hosted Apps - apps provisioned using Windows Azure auto-hosting. SharePoint deploys the ASP.NET application & SQL Azure DB to Azure automatically when the SharePoint app is installed.
VII. SharePoint 2013 Application UX (User Experience):

VIII. SharePoint 2013 Application Scopes:
i. Web scope - By default, all SharePoint 2013 apps are scoped to the web.
ii. Tenant scope - Cloud-based apps can be tenant-scoped. For example, apps hosted on Office 365 can have a tenant scope for privacy and security. This is not applicable to SharePoint-hosted apps.

IX. SharePoint 2013 App Hosting Options: Cloud v/s SharePoint
 
 X. SharePoint 2013 Application Isolation:
  • When apps are provisioned, a new SPWeb (the app web) is created within the hosting SPWeb
  • Each app resides within its own SPWeb for isolation
  • A special DNS address is configured by administrators
  • App SPWebs live in a separate domain (DNS)
  • Each app is hosted on its own unique URL because this:
  • Blocks XSS: isolation to a special SPWeb under a special domain blocks cross-site scripting
  • Enforces app permissions: apps communicate with sites via CSOM / REST APIs & must be granted permission to do so
XI. Obtaining SharePoint 2013 Applications:
 Applications can be acquired in multiple ways:
  • Public Marketplace - similar to the Windows Phone Marketplace; subject to a submission process & approval
  • App Catalog - apps developed internally, then acquired and approved for internal use
  • Custom Deployment Process - developers can use remote / local SharePoint & Windows Azure APIs to deploy apps with custom code. These APIs are restricted to the developer site for tooling scenarios.

Thursday, September 13, 2012

PowerShell script to Export and Import a Managed Metadata Termstore across SharePoint farms while retaining its GUIDs

When you migrate site collections from one farm to another, the managed metadata term sets used in lists and libraries still reference the GUIDs of the original (source) managed metadata term store, and the site columns themselves reference the GUIDs of the term sets in the source managed metadata service. This makes it difficult to migrate site collections to the new farm: the existing managed metadata columns risk becoming orphaned in the migrated site collections.

The PowerShell script below exports and imports a Managed Metadata term store while retaining the GUIDs (sspIds, used internally by the term store) that managed metadata columns in lists and libraries refer to.

#Export the Managed Metadata term store from the source farm

$managedMetadataAppSource = "4a867ce5-d9ee-4051-8e73-c5eca4158bcd"; #this sets the exporting (source) MMS application ID
$mmSourceProxy = Get-SPServiceApplicationProxy | ?{$_.TypeName -eq "Managed Metadata Service Connection"};
Export-SPMetadataWebServicePartitionData -Identity $managedMetadataAppSource -ServiceProxy $mmSourceProxy -Path "C:\ExportManagedMetadata\locationexportfile.bak";

#Import the Managed Metadata term store into the target farm

$managedMetadataAppTarget = "d045d3ce-e947-4465-b039-0dfbbe24fb22"   #this sets the importing MMS ID
$mmTargetProxy = Get-SPServiceApplicationProxy | ?{$_.TypeName -eq "Managed Metadata Service Connection"};
Import-SPMetadataWebServicePartitionData -Identity $managedMetadataAppTarget -ServiceProxy $mmTargetProxy -Path "C:\ImportManagedMetadata\locationexportfile.bak" -OverwriteExisting;
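
If you are unsure which Managed Metadata Service application IDs to plug in above, a minimal sketch to list them is:

## List the Id and name of each Managed Metadata Service application in the farm
Get-SPServiceApplication | Where-Object {$_.TypeName -eq "Managed Metadata Service"} | Select-Object Id, DisplayName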