Save a SharePoint Library as a Site Collection List Template

SharePoint has the capability to save a library as a list template for future library creation. This is very useful, but the methods to do this programmatically are not obvious. Here's a script I wrote to loop through a specific set of Site Collections off a managed path and generate a new list template, first ensuring any previous list template is deleted, because trying to create a template with the name of an existing template will fail.

 

Add-PSSnapin "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue
$envrun="Prod"          # selects environment to run in
$TemplateName="MyTemplate Library"
$DeleteTemplates=$true;
 
if ($envrun -eq "Dev")
{
}
elseif ($envrun -eq "Prod")
{
$siteUrl = "http://SharePoint/ManagedPath/"
$LoopString = "A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,Y,Z,09"
$LoopStringArr = $LoopString.Split(",")
}
elseif ($envrun -eq "UAT")
{
}
else
{
Write-Host "ENVIRONMENT SETTING NOT VALID: script terminating..."
$siteUrl =  $null;
return;
}
 
foreach ($letter in $LoopStringArr)
{
$SiteName=$siteurl+$letter
 
$rootSite = New-Object Microsoft.SharePoint.SPSite($SiteName)
 
 $site=$rootSite  #skipping traversing all sites for now
  write-host $site.Url
 
  if ($DeleteTemplates)
  {
    $rootWeb = $site.RootWeb;
    $l=$rootWeb.Lists
    $lt=$l["list template gallery"]
    $ltItems= $lt.get_Items()
 
    if ($ltItems.Count -gt 0)
    {   # count down, to avoid a delete causing index shift left
        for ($i=$ltItems.Count-1; $i -ge 0; $i--)
        {
          $TemplateItem=$ltItems.get_Item($i)
          if ($TemplateItem.Name -like "MyTemplate*")
          {
          try 
            {
            $itemBeingDeletedName=$TemplateItem.Name
            $itemBeingDeletedTitle=$TemplateItem.Title
            $TemplateItem.delete()
            Write-Host "Deleted $($itemBeingDeletedTitle),$($itemBeingDeletedName)"
            }
            catch
            {
            Write-Host "FAILED to Delete $($itemBeingDeletedTitle),$($itemBeingDeletedName)"
            }
          }
        }
      }
    }
 
 
 
  if ($true) #optional location for a filter
#  if ($site.Url -like "$siteurl/personal/plau*")  
  {
   Write-Host -foregroundcolor blue "$($site.id) - $($site.Url) - $($site.contentdatabase.id) - $($site.contentdatabase.name)"  
 
 
   $Mylib=$site.RootWeb.lists["MyTemplate DocLib Template"]
   if ($Mylib -eq $null)
   {$Mylib=$site.RootWeb.lists["AnyTemplate DocLib Template"]}  #alternate name
   try
   {$Mylib.SaveAsTemplate($TemplateName,$TemplateName,$TemplateName,$false)}
   catch
   {Write-Host "You gotta use a new template name each time Dude!" -ForegroundColor darkred}
  #Note: $rootweb.Update() is NOT required
 
} #if Siteurl passes the filter test
} #loop of site collections

Propagating legacy Document IDs into SharePoint

Document IDs in SharePoint

Document IDs are generated automatically by SharePoint when configured, but what if you have your own legacy Document IDs that you want to use? There is a way to push them in.

The first trick is recognizing there are two internal fields created when Document IDs are enabled.
  • ows__dlc_DocId : the actual Document ID
  • ows__dlc_DocIdUrl : the special DocIdRedir.aspx-based URL with the Document ID within it

Let’s examine the latter field in a bit more detail. It contains a URL referencing “DocIdRedir.aspx”, a ghosted ASPX page that redirects either via a local service lookup or via a fallback search-based lookup. Here’s what it looks like:

http://server/_layouts/DocIdRedir.aspx?ID=DOC-1000-3109, DOC-1000-3109

Note the comma, then the repeated Document ID.
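Putting the two pieces together, the value written into ows__dlc_DocIdUrl can be composed from the web URL and the legacy ID. Here's a minimal sketch; $webUrl and $legacyId are hypothetical placeholders:

```
# Minimal sketch: composing the ows__dlc_DocIdUrl field value for a legacy ID
$webUrl   = "http://server"        # hypothetical web application URL
$legacyId = "DOC-1000-3109"        # hypothetical legacy Document ID

# The field stores "<redirect URL>, <DocID>" -- the URL first, then a comma and the bare ID
$docIdUrl = "$webUrl/_layouts/DocIdRedir.aspx?ID=$legacyId, $legacyId"
$docIdUrl
```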

Now imagine if there’s a field in our document library that has the legacy Document ID we want to use.

The script below goes through and systematically replaces the existing Document ID with the legacy one we want to use.

[system.reflection.assembly]::LoadWithPartialName("Microsoft.SharePoint") 
Write-Host ("$(get-date) Running script to assign DocIDs")
$SiteFilter = "*"
$siteUrl = "http://SharePoint" #use your own Web App URL
 
$ThisLibOnly = $null; #allows for filtered selection of libraries
      
Write-Host ("$(get-date) Assigning DocIDs from Legacy IDs")
 
$webApp=Get-SPWebApplication $SiteURL
$Sites = $webApp | Get-SPSite -Limit all
foreach ($site in $Sites)
{
$mylogfile="C:\Temp\output.log"
  write-host $site.Url
 
   Write-Host -foregroundcolor darkblue "$($site.id) - $($site.Url) - $($site.contentdatabase.id) - $($site.contentdatabase.name)"  
 
   $WebScope = Start-SPAssignment
 
   foreach ( $TargetWeb in $site.AllWebs)
   {
 
        Write-Host("Working in site: " + $TargetWeb)
        $ListOfLists = @();
# Loop through all doc libs
        $lists=$TargetWeb.lists;
        $listsCount=$lists.count
 
    for ($ii=0; $ii -lt $listsCount; $ii++)
    {
    $JPlib=$lists[$ii];
       if ( ($JPlib.BaseType -ne "DocumentLibrary") -or ($JPlib.hidden) )
        {
          # forget the rest and return to top
          Write-Host -foregroundcolor darkred "fast test skipping Library: $($JPlib)";   
        }
        elseif ($JPLib.Title -Match "Photo|Image|SitesAssets|CustomizedsReports|Templates|Pages|Picture|cache|style")
        {
          # forget the rest and return to top
          Write-Host -foregroundcolor darkred "fast test skipping Library because it mentions $Matches: $($JPlib)";   
        }
        elseif (!$JPlib.contenttypesenabled)
        {
 
        continue; # no content types, no interest
        }
        else
        {  
          Write-Host -foregroundcolor green "Processing Library: $($JPlib)";   
          $ListOfLists += $JPlib.title;
        }
    }
 
    foreach ($CurrentLib in $ListofLists)
    {
    $JPlib=$TargetWeb.lists[$CurrentLib];
    $JPlib.title
    if ($JPlib -eq $null)
    {
        Write-Host "COULD NOT GET LIB $($CurrentLib)"
        continue;
    }
 
          Write-Host -foregroundcolor green "Processing Library: $($JPlib) in $($TargetWeb)";   
 
    if (($ThisLibOnly -eq $null) -or
      ($JPlib.title -eq $ThisLibOnly)) 
    {
    $JPItems=$JPlib.Items;
    $JPCount=$JPItems.get_count();
 
    if ($JPCount -eq 0) {continue} #can't do much on a library with no items!
          
    for ($i=0; $i -lt $JPCount; $i++)  #Do not use ItemCount, that one includes folders!
    {   
        $JPItem=$JPItems[$i];
        $SourceValue=$JPItem["LegacyDocID"];
 
        if (($SourceValue -eq $null) -or ($SourceValue.length -le 0))
        {
 
            write-host "-" -nonewline
            continue; #nothing to assign
        }
        elseif ($JPItem["ows__dlc_DocId"] -ne $SourceValue) #avoid reassigning same value
        {
        Write-Host "Old DocID=$($JPItem['ows__dlc_DocId']),LegacyDocID=$($SourceValue)"
        $oldDocIDValue=$JPItem["ows__dlc_DocId"]
        $JPItem["ows__dlc_DocId"] = $SourceValue;
        if ($JPItem["ows__dlc_DocIdUrl"].length -gt 1)
        {
            $JPItem["ows__dlc_DocIdUrl"]= $JPItem["ows__dlc_DocIdUrl"].replace($oldDocIDValue,$SourceValue);
        }
 
        $JPItem.systemupdate() #without generating version
        Write-Host "$($i): $($JPItem.url)"
        }
        else
        {  #special!  $JPItem["ows__dlc_DocIdUrl"]=$null;
        Write-Host "DocID Match! $($JPItem.url)"
        Write-Host "Old DocID=$($JPItem['ows__dlc_DocId']),LegacyDocID=$($SourceValue)"
        }
             }      
    }
 
Write-Host "+" -NoNewline
 
 
}# Lib loop
     
$JPlib.Update();
$JPlib=$null;
 
} #all libs?
    try
    {
    $TargetWeb.dispose()
    } catch{}
    $TargetWeb=$null;
    } #all webs
 
    Stop-SPAssignment $WebScope
    $site.Dispose()
    $site=$null;
} #all sites
Write-Host "SCRIPT COMPLETE $(get-date)"

Generating automatic emails with embedded reports and link to uploaded CSV

Generating automatic emails with the embedded content

There’s often a need to automate the generation of reports on SharePoint. I typically need to generate such a report based on:
1. Content using search: acquired via a Search Query
2. Content via manual crawl: walking through libraries, even sites, site collections and farms
3. Based on scripted actions taken, such as moving or copying documents on an automated basis.

Other examples I’ve done include reports on checked-out documents, custom-delivered to the people who have had documents checked out for 30+ days.

There are a few parts to the script below:
1. Acquire the dataset for the report. In this example I walk through all SPItems in a given list.
2. Define how the email gets sent, and to whom, leveraging HTML email
3. Generate the HTML for this report
4. Create a temporary CSV of this report, and upload it to a SharePoint library
5. Send the email, ensuring it contains the summary report and a link to the uploaded CSV

To store the dataset, I use the system.Data.DataTable object, and build from there. It’s easy to copy a DataTable, or drop columns after the HTML is constructed.

I use the ConvertTo-Html cmdlet to convert the DataTable, explicitly referencing the fields and the order I’d like in the HTML table. However, I have to fix up the XML within it; for that I pipeline it through a function called Convert-HTMLEscape.

I prefer to tune up the DataTable and drop any unnecessary fields, as Export-Csv does not allow fields to be specified. I also specify -NoTypeInformation so the first row holds the actual field names, allowing it to open correctly in Excel.
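Since Export-Csv offers no column selection, the copy-then-trim approach reads roughly like this standalone sketch (the column and file names are illustrative only):

```
# Build a small DataTable, copy it, and trim the copy before export
$table = New-Object System.Data.DataTable "Demo"
[void]$table.Columns.Add("Link",[string])
[void]$table.Columns.Add("Title",[string])
$row = $table.NewRow(); $row.Link = "http://example/doc1"; $row.Title = "Doc1"
$table.Rows.Add($row)

$csvCopy = $table.Copy()           # keep the original intact for the HTML pass
$csvCopy.Columns.Remove("Link")    # drop the column we don't want in the CSV
$csvCopy.Rows |
    Select-Object Title |          # select just the table columns, not DataRow internals
    Export-Csv -Path "$env:TEMP\Demo.csv" -NoTypeInformation
```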

Note how the URLs are prettied up on the fly, so they read nicely in the HTML table.

Claims Authentication has a different account format, which I clean up in a separate Function Strip-ClaimsHeader.

Whether a file is checked out, the type of checkout, and who checked it out are some optional attributes I capture in the DataTable. This table is easily extended to support whichever metadata is desired.

Here’s the script:

if(!(Get-PSSnapin Microsoft.SharePoint.PowerShell -ea 0)) 
{ 
Write-Progress -Activity "Loading Modules" -Status "Loading Microsoft.SharePoint.PowerShell"
Add-PSSnapin Microsoft.SharePoint.PowerShell 
} 
 
$ToRecipients = "joelplaut@MyDomain.com"
 
$CCRecipients = $ToRecipients; #set to any audience you want, comma separated
$ToRecipientsArray = $ToRecipients.Split(",");
$CCRecipientsArray = $CCRecipients.Split(",");
 
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Administration")
 
 
function Save-File([string] $Dir, [string] $PartialName, [system.Data.DataTable] $MyTable)
{
#writes the file to a temporary disk-based location.  I add a timestamp to the name to ensure uniqueness.  $PartialName is a text description used to start the filename
# export-csv does all the heavy lifting.  Note the attribute "-NoTypeInformation" is needed so there isn't a header on top of the field headers
$timestampstr = (Get-Date).ToString("MM_dd_yyyy HH_mm_ss")
 
$FileName = $Dir + '\' + $PartialName + '_' + $timestampstr + '.CSV';
 
$MyTable.rows | Export-Csv -Path $FileName -NoTypeInformation
 
return $FileName;
 
}
 
#writes file to the Reporting Library within SharePoint, adds timestamp to name to ensure uniqueness
function Upload-File([string] $FilePath)
{
    $Files = $RepList.rootfolder.Files
    $DocLibName = $RepList.Title; #this is not necessarily accurate 
    #Get File Name from Path
    $FileName = $FilePath.Substring($FilePath.LastIndexOf("\")+1)
 
    #delete the File from the library, if it already exists.  This should never occur
    try
    {
        if($Files.Item($DocLibName +"/" + $FileName))
        {
            $Files.delete($DocLibName +"/" + $FileName)
        }
    } catch {}
    #Get the File
    $File= Get-ChildItem $FilePath
 
    #Add File to the collection
    $x=$Files.Add($DocLibName +"/" + $FileName,$File.OpenRead(),$false)
 
     $ReportFilename = $RepList.ParentWeb.Url + '/'+ $DocLibName + '/' + $Filename;
     return $reportFileName;
 }
 
 
# If we are running in a Claims Authentication environment, we can strip off the claims tags
Function Strip-ClaimsHeader ($s)
{
    if ($s.IndexOf('#') -gt 0)  #handle stripping claims tags off name
    {
            return $s.Substring($s.IndexOf("#")+1)
 
    }
    else
    {
    return $s
    }
}
 
#used for HTML processing and fixup; this is a bit dense, but it supports the PowerShell pipeline, and restores XML tags
Function Convert-HTMLEscape {
 
<#
convert &lt; and &gt; escape codes back to < and >
It is assumed that these will be in pairs
#>
 
[cmdletbinding()]
 
Param (
[Parameter(Position=0,ValueFromPipeline=$True)]
[string[]]$Text
)
 
Process {
foreach ($item in $text) {
    if ($item -match "&lt;") {
        <#
          replace escape codes with actual symbols. This line is a shortcut to do multiple replacements
          with one line of code. The code in the first set of parentheses yields the text with "<" restored; normally that would be written to the pipeline, but wrapping it in parentheses treats it as an object so Replace() can be called again to restore ">" and the quotes.
        #>
        (($item.Replace("&lt;","<")).Replace("&gt;",">")).Replace("&quot;",'"')
     }
     else {
        #otherwise just write the line to the pipeline
        $item
     }
 }
} #close process
 
} #close function
 
 $env = "Prod"  # I like setting flags for changing environment settings
  
 if ($env -eq "Prod")
 {
    $RefListLocation = "http://SharePoint/sites/mySiteCollection/Subsite"
    $TempLoc = "C:\TempLocation"  # disk location to temporarily store the file
}
 
 
$table = New-Object system.Data.DataTable "JoelFancyReport"  #this is an internal name, that is not very important to set
 
#let's define a field (column) for every possible field; even those we don't yet use in a given report
$col1 = New-Object system.Data.DataColumn Link,([string])
$col2 = New-Object system.Data.DataColumn BusinessProcess,([string])
$col3 = New-Object system.Data.DataColumn DocType,([string])
$col4 = New-Object system.Data.DataColumn Modified,([string])
$col5 = New-Object system.Data.DataColumn Title,([string])
$col6 = New-Object system.Data.DataColumn FileName,([string])
$col7 = New-Object system.Data.DataColumn Editor,([string])
$col8 = New-Object system.Data.DataColumn CheckedOut,([string])
$col9 = New-Object system.Data.DataColumn CheckedOutBy,([string])
$col10= New-Object system.Data.DataColumn FileFolder,([string])
$col11= New-Object system.Data.DataColumn FileLink,([string])
$col12= New-Object system.Data.DataColumn ReportAction,([string])
 
# Just add the columns to the table
$table.columns.add($col1)
$table.columns.add($col2)
$table.columns.add($col3)
$table.columns.add($col4)
$table.columns.add($col5)
$table.columns.add($col6)
$table.columns.add($col7)
$table.columns.add($col8)
$table.columns.add($col9)
$table.columns.add($col10)
$table.columns.add($col11)
$table.columns.add($col12)
 
#we can create multiple tables with the same layout easily. 
# Copy is shown here for reference, and is not used in this simple example
$TableCopy = $table.Copy() 
 
#loop through whatever SPItems need to be looped
# here is where the basic logic gets put, before generating the actual report
# this is just a placeholder loop of items in a list, but this could be anything, including SPQuery results
 
$JPWeb = get-spweb $RefListLocation
$JPList  = $JPWeb.lists["MyList"]
$JPItems = $JPList.items;
$JPItemCount = $JPItems.count;
 
for ($k=0; $k -lt $JPItemCount; $k++)
{
    $SourceItem = $JPItems[$k];
    $ReportAction = ""; #reset per item, so actions don't accumulate across the loop
 
                if ($SourceItem.File.CheckOutStatus -ne "None")
                {
                    $ReportAction += ", CheckedOut "+$SourceItem.File.CheckOutStatus+" to:"+$SourceItem.File.CheckedOutBy.displayname;
                    $CheckedOut = "Yes"
                    $CheckedOutBy = $SourceItem.File.CheckedOutBy.displayname;
                }
                else
                {
                    $CheckedOut = "No"
                    $CheckedOutBy = $null;
                }
                    #let's create a new row 
                    $row = $Table.NewRow()
 
                    $FileLink =   $($JPWeb.Url)+'/'+$($SourceItem.url) ;
                    $row.FileLink = $FileLink;
                    $row.Link = '<a href="' + $FileLink + '"> Link </a>';
 
                    $FolderURL = $FileLink.substring(0,$FileLink.LastIndexOf('/'));
 
                    $row.FileFolder = $FolderURL;
 
                    $row.Title = $SourceItem.Title;
 
                    $row.FileName= $SourceItem.Name;
 
                    $row.Editor = Strip-ClaimsHeader $SourceItem["Editor"];
 
                    $row.Modified= $SourceItem["Modified"];
 
                    $row.BusinessProcess = $SourceItem["BusinessProcess"] #sample field; add as many as you'd like
                    $row.DocType = $SourceItem["DocType"]
 
                    $row.checkedOut = $CheckedOut;
                    $row.checkedOutBy = $CheckedOutBy;
 
                    $Table.Rows.Add($row)
}
 
 
        $RunStatus = "All the processing completed $(get-date)"
        $Header = "Weekly report for my friends"
        $emailTitle = "Weekly Report email $($RunStatus)"
 
 
# this is a nice table style: collapsed, colored header.
# Let's create the HTML table, specifying only the columns from the table that we'd like to see.  Any other columns won't appear in this report.
# Piping it to the Convert-HTMLEscape function fixes up the HTML.

$MyOutput = $table | ConvertTo-Html Link, BusinessProcess, DocType, Editor, Modified, FileName, ReportAction -head $a -body "<H1>$($table.rows.count) $($Header)</H1>" | Convert-HTMLEscape

# If we need to reuse the table, just make a copy of it first.
# While ConvertTo-Html offers the luxury of column selection, Export-Csv converts every column, so let's remove the extra ones now.
$table.Columns.Remove("Link") # how to remove a column. Links don't look too pretty within a CSV, so for this example I remove it here.
 
$f2 = Save-File $TempLoc ($FileDescription) $Table  #Saves the temp file to disk, driven out of the $Table
$Report2URL = Upload-File $f2
 
$ReportSummary = 'Rolled Report Available at: <a href="' + $Report2URL + '">' + $FileDescription + ' Custom Report </a>';
#debug technique: it's easy to route the HTML to a file for examination, see the commented-out line below:
#$MyOutput > C:\A.html
[string] $From = "SharePointSupport@MyDomain.com"
[string] $To = $null; #$Recipients #"joelplaut@MyDomain.com"
[string] $Title = $emailTitle;
$SmtpClient = New-Object System.Net.Mail.SmtpClient
$SmtpServer = "mail.MyDomain.com"
$SmtpClient.host = $SmtpServer
$MailMessage = New-Object system.net.mail.mailmessage
 
$mailmessage.from = $From;
foreach ($recip in $ToRecipientsArray)
{
$mailmessage.To.add($recip)
}
foreach ($recip in $CCRecipientsArray)
{
$mailmessage.CC.add($recip)
}
$mailmessage.Subject = $Title
$mailmessage.Body = ($MyOutput -join "`n") + $ReportSummary  #embedded report plus link to the uploaded CSV
$MailMessage.set_IsBodyHtml($true)
$smtpclient.Send($mailmessage)

Patching SharePoint: Useful Tips

Patching SharePoint

In patching SharePoint prior to the August 2012 CU (wow, that’s a long time ago), patches had to be applied sequentially as a matched pair: first Foundation, then Server. As of the August 2012 CU, there’s a single patch; if you have SP2010 Server, just apply that one CU. Note the 2012 CUs assume SP1 is already installed.

Note: I would not apply a CU unless the last SharePoint Config Wizard run completed successfully; that can be seen in Central Admin under Upgrade/Patch Status. The one exception is a farm with a known existing problem completing a successful Config Wizard session, in which case a CU is a rational approach to solving it, as long as you have a server snapshot and a full DB server backup (all SharePoint DBs) to roll back to in case the CU doesn’t solve the Config Wizard issue. The place to start for Config failures is the log: after any Config Wizard run, a log is produced that provides (too much) detail. The place in the log to look is the first “ERROR” text.
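As a convenience, jumping to that first “ERROR” in the most recent Config Wizard log can be scripted. This is a sketch that assumes a default SP2010 install, where the wizard writes PSCDiagnostics_*.log files into the 14-hive LOGS folder:

```
# Sketch: locate the first "ERROR" in the most recent Config Wizard log
# (assumes the default 14-hive log location)
$logDir  = "$env:CommonProgramFiles\Microsoft Shared\Web Server Extensions\14\LOGS"
$lastLog = Get-ChildItem $logDir -Filter "PSCDiagnostics_*.log" |
           Sort-Object LastWriteTime -Descending | Select-Object -First 1
Select-String -Path $lastLog.FullName -Pattern "ERROR" | Select-Object -First 1
```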

Sometimes the Registry on a server gets out of sync with the products in the Farm Config DB. Here’s how to very quickly and safely sync them:

 

Set-SPFarmConfig -InstalledProductsRefresh

Sometimes the CU will complain that the expected version is not found. You can force it to proceed by telling the CU to bypass detection of the installed software, but do so at your own risk:

.glb.exe PACKAGE.BYPASS.DETECTION.CHECK=1

After applying the CU, a reboot is required; the process tells you as much at the end. However, on reboot it doesn’t tell you that you still need to run the Config Wizard.

From command line, you can always trigger a Config Wizard session using:

psconfig -cmd upgrade -inplace b2b -wait

More thorough is this command:

psconfig -cmd upgrade -inplace b2b -force -cmd applicationcontent -install -cmd installfeatures

August 2012 CU installs clean, but sometimes the bypass detection is needed. Spot testing indicates faster performance in the browser, and lower RAM utilization, when compared against SP1 + June 2011 CU.

Migrating to SharePoint: A rational approach

Migrating to SharePoint

What’s the best way to go about migrating to SharePoint? Of course it depends on where documents are today, the state of the current SharePoint environment (if any), and how the enterprise operates. Here’s one sensible approach:

  • Understand and assess existing environment
    This includes file shares and SharePoint
  • Capture how users today use documents
    This includes search, edit, collaboration within and outside the organization, versioning and security
  • Document the desired end-state
    This includes Metadata user tagging as well as default tagging, navigation, security and search capabilities, as well as versioning, and collaboration. Metadata analysis includes department and project level needs.
  • Tool selection
    Migration is best achieved via a migration tool. Requirements/size of source/budget drive the selection.
    Some approaches can be found here.
  • Define metadata universe
    Create a centrally managed set of definitions mapped to Content Types and fields.
  • Define Views and metadata navigation
    This supports how users identify documents and narrow their review via forms of filtering.
  • Define Standards
    These include how versioning works, whether documents require approval, document visibility, how documents are managed, and the general limits around document size and types.
  • Define site structure
    Programmatic creation of site collections, sites, libraries and possibly folders. This allows creation, prototype and demonstration in a development environment, and the ability to recreate a structure in the Production Environment.
  • Define Security
    Define who can access what, and what level access is needed.
  • Define the mapping and rules
    Document which groups of source documents go to specific areas in the destination SharePoint environment.
  • Migration
    The actual migration of documents, which includes mapping and metadata tagging based on agreed rules.
  • Search
    Creation of search capabilities that meet the needs for locating documents.
  • Governance
    Review, refinement and documentation of how the full environment is managed, controlled, updated and refined, as well as who retains which rights and how those rights are assigned.

This approach covers the basics of an enterprise-class structured document management system.  It is just a summary of the categories of effort involved in creating a system that serves users’ needs.  There is no “one-size-fits-all” when addressing the unique document management needs of an enterprise, especially when optimizing the system to be a core, efficient capability.  There is a range of further capabilities that can be brought to bear, including analytics, integration, submitting documents via email, and deeper MS-Office integration, that would be determined during the initial scoping and requirements.

SharePoint Timer Job History analysis

Analyzing the SharePoint Timer Job History

At times one needs to analyze the timer job history to see which jobs have run a long time. The Central Admin user interface limits you to a sequential view, although you can filter by Service or Web Application. The actual timer job history is maintained in a single table in the Config DB. Here’s a simple SQL SELECT to get the 100 longest-running timer jobs in a time range:

 

/****** Script for SelectTopNRows command from SSMS  ******/
SELECT top 100 [Id]
      --,[ServiceId]
      --,[WebApplicationId]
      --,[JobId]
      --,[ServerId]
      ,[Status]
      ,[StartTime]
      ,[EndTime]
      ,[WebApplicationName]
      ,[JobTitle]
      ,[ServerName]
      ,[DatabaseName]
      ,[ErrorMessage]
      ,datediff(S,StartTime,EndTime) as SecondsElapsed
  FROM [SharePoint_2013_Farm].[dbo].[TimerJobHistory]
  where StartTime > '2013-02-12 02:40:00' and EndTime < '2013-02-12 03:55:00'
  --note that the TimerHistory timestamp is always in GMT!
  order by SecondsElapsed desc
-- the same data can be fetched into PowerShell (Invoke-Sqlcmd requires the SQLPS module):
$events = Invoke-Sqlcmd -ServerInstance "NY-SRV-SQLPRD02" -Database SharePoint_2013_Farm -Query "select Status,JobTitle,WebApplicationName,ServerName,DatabaseName,StartTime,EndTime,ErrorMessage from dbo.TimerJobHistory where Status=3 and StartTime between GETDATE()-1 and GETDATE()"

Then in PowerShell:

foreach ($event_num in $events) 
{ 
 
 Invoke-Sqlcmd -ServerInstance "NY-SRV-SQLPRD04" -Database MYdbreports -Query "insert into [SharePoint].[TimerJobHistory] (Status,StartTime,EndTime,JobTitle,ServerName,DatabaseName,ErrorMessage) values ($($event_num.Status),'$($event_num.StartTime)','$($event_num.EndTime)','$($event_num.JobTitle)','$($event_num.ServerName)','$($event_num.DatabaseName)','$($event_num.ErrorMessage)')"
 
}

System characteristics of a well performing SharePoint Farm

Characteristics of a well-performing SharePoint Farm
Tempdb data files

  • Dedicated disks for the tempdb
  • Tempdb should be placed on RAID 10
  • The number of tempdb data files should equal the number of CPU cores, and the tempdb data files should be set at an equal size

The size of each tempdb data file should be equal to (physical memory / number of processor cores). That’s the initial size of the tempdb data files.
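As a quick sanity check of that rule, here's a throwaway calculation; the 16 GB of RAM is just an assumed example figure:

```
# Sketch of the sizing rule: one tempdb data file per core, equally sized
$cores      = [int]$env:NUMBER_OF_PROCESSORS
$physMemMB  = 16384                              # assumed 16 GB of RAM for this example
$fileSizeMB = [math]::Floor($physMemMB / $cores) # initial size of each data file
"{0} tempdb data files at {1} MB each" -f $cores, $fileSizeMB
```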

DB management

  • Enough headroom must be available for the databases and log files, plus enough capacity to keep up with requests
  • Pre-grow all databases and logs if you can. Be sure to monitor the sizes so that you do not run out of disk space
  • When using SQL Server mirroring, do not store more than 50 databases on a single physical instance of SQL Server
  • Database servers should not be overloaded with too many databases or too much data
  • Limit content databases to 200 GB
  • Defragment and rebuild indices daily, if users can absorb the downtime required to rebuild
  • More than 25% of disk space must be free
  • Create another instance of SQL Server for data exceeding 5 TB

Monitor the database server

Key performance counters to monitor are:

  • Network Wait Queue: at 0 or 1 for good performance
  • Average disk sec/transfer (latency): less than 5 ms
  • Memory used: less than 70%
  • Free disk space: more than 25%
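A sketch for sampling counters in that spirit with Get-Counter, run on the database server; the counter paths are the standard Windows ones, so pick the instances appropriate to your disks:

```
# Sketch: sample a few key counters on the DB server
$counters = @(
  '\Memory\% Committed Bytes In Use',
  '\PhysicalDisk(_Total)\Avg. Disk sec/Transfer',
  '\PhysicalDisk(_Total)\Avg. Disk Queue Length'
)
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 3 |
  ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }
```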

 

Customized scheduled SharePoint Search Alerts in HTML

Occasionally I get requests for customized notification about documents within a SharePoint farm. Regular Alerts and SharePoint Search Alerts work great out of the box, but sometimes users want something more such as:
– Complex criteria
– Custom sort sequence
– Custom metadata
– Customized notification frequency
– Custom message text, or subject line
– Refined layout

The solution I’ve used is to script a search query in PowerShell, load the results into a formatted HTML table, and schedule the script at the desired frequency. I’ll outline the framework below; it is easily adapted and extended. Note I wrote this for FAST, but using the SharePoint Search Query classes you can achieve similar results in regular SharePoint search.

First, let’s establish some basics about search and notification. For this solution, I only want to return documents that are up to two or three days old, so I grab the date, subtract two days, and put it into a format we can use later for the query:

 

 $Mydate=get-date
$MyDate = $Mydate.AddDays(-2)
$MyDateStr = $Mydate.Year.ToString("0000") + "-" + $Mydate.Month.ToString("00")  + "-" + $Mydate.day.ToString("00")  #formatted YYYY-MM-DD

Let’s now set up the search query. I chose to use FQL (FAST Query Language), but you can use Keyword Search; note SQL Search is deprecated. I chose 50 results, but you can choose whatever amount you prefer:

 

  $site = New-Object Microsoft.SharePoint.SPSite $webappurl
$vc =New-Object Microsoft.Office.Server.Search.Query.KeywordQuery $site
$vc.ResultsProvider = [Microsoft.Office.Server.Search.Query.SearchProvider]::FASTSearch
$vc.ResultTypes = [Microsoft.Office.Server.Search.Query.ResultType]::RelevantResults
#In my case I enabled the FQL syntax and set some other parameters:
$vc.EnableFQL = $true # enable FQL
$vc.RowLimit = 50 # sets the limit of results
$vc.StartRow = 0 # 0 is the default

Now let’s make sure the query returns the fields you want. These must be configured as Managed Properties, followed by a Full Crawl.

  $vc.SelectProperties.Add("Company Name")
$vc.SelectProperties.Add("URL")
$vc.SelectProperties.Add("Title")
$vc.SelectProperties.Add("Filename")
$vc.SelectProperties.Add("Company ClaimNumber")
$vc.SelectProperties.Add("Company PolicyNumber")
$vc.SelectProperties.Add("Company Modified")
$vc.SelectProperties.Add("Company EffectiveYear")

Now let’s piece together the XML of the FQL. Note two strings ($q1 and $q2) are used to construct the query and put into $BigQ, with the date we formatted earlier. We’re looking for documents newer than two days ago, where a particular field CompanyClaimDocumentType equals a specific value (“Specific Value”). Then we execute the FQL:

$q1='and(filter(Company modified:range(datetime("'
      $q2='"), max, from="GT")), filter(CompanyClaimDocumentType:equals("Specific Value")))'
      $BigQ=$q1+$MyDateStr+$q2
 
$vc.QueryText = $BigQ
$results = $vc.Execute()

Now let’s convert the search results into a DataTable, to make it easy to shape into an HTML table for the outbound email alert notification; we’ll define the columns and load the values. One nice touch is to shape the Link column to embed a hyperlink in the table for easy user access to the documents. I also structure a special link using the DMF:// protocol supported by MacroView DMF:

$resultsTable = $results.Item([Microsoft.Office.Server.Search.Query.ResultType]::RelevantResults)
$resultsDataTable = $resultsTable.Table
$rows = $resultsDataTable.Rows
 
 
$table = New-Object system.Data.DataTable "SearchReport"
 
$col1 = New-Object system.Data.DataColumn Title,([string])
$col2 = New-Object system.Data.DataColumn CompanyName,([string])
$col3 = New-Object system.Data.DataColumn ClaimNumber,([string])
$col4 = New-Object system.Data.DataColumn Link,([string])
$col5 = New-Object system.Data.DataColumn PolicyNumber,([string])
$col6 = New-Object system.Data.DataColumn Modified,([string])
$col7 = New-Object system.Data.DataColumn EffectiveYear,([string])
$col8 = New-Object system.Data.DataColumn FileName,([string])
$col9 = New-Object system.Data.DataColumn DMF,([string])
 
$table.columns.add($col1)
$table.columns.add($col2)
$table.columns.add($col3)
$table.columns.add($col4)
$table.columns.add($col5)
$table.columns.add($col6)
$table.columns.add($col7)
$table.columns.add($col8)
$table.columns.add($col9)
 
if ($rows.count -gt 0)
    {
    for ($i=0; $i -lt $rows.Count; $i++)
    {
    $row = $table.NewRow()
    # Build the hyperlink from the URL managed property returned by search
    $row.Link = '<a href="' + $rows[$i].URL + '">Link</a>';
    #$row.DMF = ($row.Link.Replace("http://","DMF://")).replace(" File Link", "SP Explorer")  # Took out, doesn't appear to work quite right
    $row.Title = $rows[$i].Title;
    $row.InsuredName= $rows[$i].CompanyName;
    $row.ClaimNumber= $rows[$i].CompanyClaimNumber;
    $row.PolicyNumber= $rows[$i].CompanyPolicyNumber;
    $row.EffectiveYear= $rows[$i].CompanyEffectiveYear;
    $row.FileName= $rows[$i].FileName;
    $row.Modified= $rows[$i].CompanyModified.Substring(0,10);
 
    $table.Rows.Add($row)
    }

Now we want to shape this into a table in the outbound email. To do that we’ll use the handy ConvertTo-Html cmdlet; just pass in the column names you want to appear (above I map more columns than I use below). I shape the borders and colors for a professional look. However, there’s a sticky problem: we don’t want ConvertTo-Html to escape the embedded HTML, such as the brackets around the a href links. To solve that, I pipe the stream into a routine called Convert-HTMLEscape, which I will outline shortly. Note the nice H1 header, and that we handle singular/plural row counts in the subject line and at the top of the email:

$a = "<style>"
$a = $a + "TABLE{border-width: 1px;border-style: solid;border-color:black;}"
$a = $a + "Table{background-color:#EFFFFF;border-collapse: collapse;}"
$a = $a + "TH{border-width:1px;padding:5px;border-style:solid;border-color:black;background-color:#DDDDDD}"
$a = $a + "TD{border-width:1px;padding-left:5px;padding-right:3px;border-style:solid;border-color:black;}"
$a = $a + "</style>"
 
#$MyOutput = $rows | ConvertTo-Html Title, Author, URL, Link -body "<H1>Recent Custom Reports</H1>" -PostContent "Goodbye and thanks for all the fish"
 
if ($rows.count -eq 1)
{
    $Plural=$null;
}
else
{
    $Plural="s";
}
 
$MyOutput = $table | ConvertTo-Html Title, Link, InsuredName, ClaimNumber, PolicyNumber, EffectiveYear, Modified -head $a -body "<H1>$($rows.count) Recent Property Adjuster Report$($Plural)</H1>" | Convert-HTMLEscape

Here’s the Convert-HTMLEscape function. It’s a little dense, but it works within the pipeline and does what we need: it converts the HTML-encoded equivalents of the common markup characters back to the characters themselves, basically undoing the bit of a mess made by ConvertTo-Html:

Function Convert-HTMLEscape {
 
# convert &lt; and &gt; back to < and >. It is assumed that these will be in pairs
 
[cmdletbinding()]
 
Param (
[Parameter(Position=0,ValueFromPipeline=$True)]
[string[]]$Text
)
 
Process {
foreach ($item in $text) {
    if ($item -match "&lt;") {
        <#
          replace codes with actual symbols. This line is a shortcut to do two
          replacements with one line of code; it also restores quotes escaped as &quot;.
        #>
        (($item.Replace("&lt;","<")).Replace("&gt;",">")).Replace("&quot;",'"')
     }
     else {
        #otherwise just write the line to the pipeline
        $item
     }
 }
} #close process
 
} #close function

Now the easy part: generating the email. Note the message sets IsBodyHtml to $true and is sent via SMTP:

 

    #email setup, can move to top
    #param( 
        [string] $From = "SharePointSupport@MyDomain.com"
        [string] $To = $null; #$Recipients #"joelplaut@MyDomain.com"
        [string] $Title = "Daily Report of updated custom Reports"
        #[string] $Body = "body"
    #)
    $Body = $rows | ConvertTo-Html
    $SmtpClient = New-Object System.Net.Mail.SmtpClient
    $SmtpServer = "mail.MyDomain.com"
    $SmtpClient.host = $SmtpServer
 
    #$SmtpClient.Send($From,$To,$Title,$Body)
$MailMessage = New-Object system.net.mail.mailmessage
 
$mailmessage.from = $From;
foreach ($recip in $ToRecipientsArray)
{
    $mailmessage.To.add($recip)
}
foreach ($recip in $CCRecipientsArray)
{
    $mailmessage.CC.add($recip)
}
$mailmessage.Subject = $Title
$mailmessage.Body = $myoutput #"Body"
$MailMessage.set_IsBodyHtml($true)
$smtpclient.Send($mailmessage)

Now let’s put it all together. Note I always set an $env variable for the environment, making it easy to test in Dev before deploying to Production.

# SharePoint Search Alerts
Clear-Host
Write-Host "Start LC Alerts" -ForegroundColor darkblue
Add-PsSnapin Microsoft.SharePoint.PowerShell -erroraction silentlycontinue
 
$ToRecipients = "MyADGroup@MyDomain.com"
$CCRecipients = "joelplaut@MyDomain.com,Bozo@MyDomain.com"
$ToRecipientsArray = $ToRecipients.Split(",");
$CCRecipientsArray = $CCRecipients.Split(",");
 
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Administration")
 
$env="Prod"
if ($env -eq "Dev")
{
    $webappurl = "http://devFarm/" # Replace with the URL of the web application to query
    $filterpath = "http://devFarm/insureds"
}
else
{
    $webappurl = "http://SharePoint/" # Replace with the URL of the web application to query
    $filterpath = "http://SharePoint/insureds" #path for exercising previews
}
 
Function Convert-HTMLEscape {
 
<#
convert &lt; and &gt; back to < and >
It is assumed that these will be in pairs
#>
 
[cmdletbinding()]
 
Param (
[Parameter(Position=0,ValueFromPipeline=$True)]
[string[]]$Text
)
 
Process {
foreach ($item in $text) {
    if ($item -match "&lt;") {
        <#
          replace codes with actual symbols
          This line is a shortcut to do two replacements
          with one line of code; it also restores quotes escaped as &quot;.
        #>
        (($item.Replace("&lt;","<")).Replace("&gt;",">")).Replace("&quot;",'"')
     }
     else {
        #otherwise just write the line to the pipeline
        $item
     }
 }
} #close process
 
} #close function
 
 
 
$MyDate = Get-Date
$MyDate = $MyDate.AddDays(-2)
$MyDateStr = $MyDate.Year.ToString("0000") + "-" + $MyDate.Month.ToString("00") + "-" + $MyDate.Day.ToString("00")  #formatted YYYY-MM-DD
 
$site = New-Object Microsoft.SharePoint.SPSite $webappurl
$vc =New-Object Microsoft.Office.Server.Search.Query.KeywordQuery $site
$vc.ResultsProvider = [Microsoft.Office.Server.Search.Query.SearchProvider]::FASTSearch
$vc.ResultTypes = [Microsoft.Office.Server.Search.Query.ResultType]::RelevantResults
#In my case I enabled the FQL syntax and set some other parameters:
$vc.EnableFQL = $true # enable FQL
$vc.RowLimit = 50 # sets the limit of results
$vc.StartRow = 0 # 0 is the default
 
$vc.SelectProperties.Add("CompanyName")
$vc.SelectProperties.Add("URL")
$vc.SelectProperties.Add("Title")
$vc.SelectProperties.Add("Filename")
$vc.SelectProperties.Add("CompanyClaimNumber")
$vc.SelectProperties.Add("CompanyPolicyNumber")
$vc.SelectProperties.Add("CompanyModified")
$vc.SelectProperties.Add("CompanyEffectiveYear")
 
 
#Query / Result
 
$q1='and(filter(CompanyModified:range(datetime("'
$q2='"), max, from="GT")), filter(CompanyClaimDocumentType:equals("Property Adjuster Reports")))'
$BigQ=$q1+$MyDateStr+$q2
 
 
 
$vc.QueryText = $BigQ
$results = $vc.Execute()
#$results
 
$resultsTable = $results.Item([Microsoft.Office.Server.Search.Query.ResultType]::RelevantResults)
$resultsDataTable = $resultsTable.Table
$rows = $resultsDataTable.Rows
 
 
$table = New-Object system.Data.DataTable "SearchReport"
 
$col1 = New-Object system.Data.DataColumn Title,([string])
$col2 = New-Object system.Data.DataColumn InsuredName,([string])
$col3 = New-Object system.Data.DataColumn ClaimNumber,([string])
$col4 = New-Object system.Data.DataColumn Link,([string])
$col5 = New-Object system.Data.DataColumn PolicyNumber,([string])
$col6 = New-Object system.Data.DataColumn Modified,([string])
$col7 = New-Object system.Data.DataColumn EffectiveYear,([string])
$col8 = New-Object system.Data.DataColumn FileName,([string])
$col9 = New-Object system.Data.DataColumn DMF,([string])
 
$table.columns.add($col1)
$table.columns.add($col2)
$table.columns.add($col3)
$table.columns.add($col4)
$table.columns.add($col5)
$table.columns.add($col6)
$table.columns.add($col7)
$table.columns.add($col8)
$table.columns.add($col9)
 
if ($rows.count -gt 0)
    {
    for ($i=0; $i -lt $rows.Count; $i++)
    {
    $row = $table.NewRow()
    # Build the hyperlink from the URL managed property returned by search
    $row.Link = '<a href="' + $rows[$i].URL + '">Link</a>';
    #$row.DMF = ($row.Link.Replace("http://","DMF://")).replace(" File Link", "SP Explorer")  # Took out, doesn't appear to work quite right
    $row.Title = $rows[$i].Title;
    $row.InsuredName= $rows[$i].CompanyName;
    $row.ClaimNumber= $rows[$i].CompanyClaimNumber;
    $row.PolicyNumber= $rows[$i].CompanyPolicyNumber;
    $row.EffectiveYear= $rows[$i].CompanyEffectiveYear;
    $row.FileName= $rows[$i].FileName;
    $row.Modified= $rows[$i].CompanyModified.Substring(0,10);
 
    $table.Rows.Add($row)
    }
 
    $a = "<style>"
    $a = $a + "TABLE{border-width: 1px;border-style: solid;border-color:black;}"
    $a = $a + "Table{background-color:#EFFFFF;border-collapse: collapse;}"
    $a = $a + "TH{border-width:1px;padding:5px;border-style:solid;border-color:black;background-color:#DDDDDD}"
    $a = $a + "TD{border-width:1px;padding-left:5px;padding-right:3px;border-style:solid;border-color:black;}"
    $a = $a + "</style>"
 
    #Filename column removed from the output at a colleague's suggestion
 
    #$MyOutput = $rows | ConvertTo-Html Title, Author, URL, Link -body "<H1>Recent Custom Reports</H1>" -PostContent "Goodbye and thanks for all the fish"
     
    if ($rows.count -eq 1)
    {
        $Plural=$null;
    }
    else
    {
        $Plural="s";
    }
 
    $MyOutput = $table | ConvertTo-Html Title, Link, InsuredName, ClaimNumber, PolicyNumber, EffectiveYear, Modified -head $a -body "<H1>$($rows.count) Recent Custom Report$($Plural)</H1>" | Convert-HTMLEscape
    #$MyOutput > C:\A.html  # debug technique
    #email setup, can move to top
    #param( 
        [string] $From = "SharePointSupport@MyDomain.com"
        [string] $To = $null; #$Recipients #"joelplaut@MyDomain.com"
        [string] $Title = "Daily Report of updated Property Adjuster Reports"
        #[string] $Body = "body"
    #)
    $Body = $rows | ConvertTo-Html
    $SmtpClient = New-Object System.Net.Mail.SmtpClient
    $SmtpServer = "mail.MyDomain.com"
    $SmtpClient.host = $SmtpServer
 
    #$SmtpClient.Send($From,$To,$Title,$Body)
    $MailMessage = New-Object system.net.mail.mailmessage
 
    $mailmessage.from = $From;
    foreach ($recip in $ToRecipientsArray)
    {
        $mailmessage.To.add($recip)
    }
    foreach ($recip in $CCRecipientsArray)
    {
        $mailmessage.CC.add($recip)
    }
    $mailmessage.Subject = $Title
    $mailmessage.Body = $myoutput #"Body"
    $MailMessage.set_IsBodyHtml($true)
    $smtpclient.Send($mailmessage)
 
} #don't bother, no new hits

SharePoint Monitoring Metrics


I pulled together some useful metrics for monitoring SharePoint environments:

Storage

  1. Server drive(s) disk space availability (alert when below 15% free)
  2. Content DB size, and monthly growth rate
  3. Site Collection size and monthly growth rate (same as above, where there is one site collection per content DB)
  4. Storage by file type (are people uploading home pictures or MP3s?)
  5. MySites site size
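
Several of the storage metrics above can be captured with a short script. Here's a minimal sketch, assuming it runs on a farm server with the SharePoint snap-in available; schedule it and save the output over time to compute monthly growth rates:

```powershell
# Sketch: report the size of each content database in GB.
# Assumes a farm server with the SharePoint PowerShell snap-in available.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Get-SPContentDatabase | ForEach-Object {
    # DiskSizeRequired reports the database's disk usage in bytes
    $sizeGB = [math]::Round($_.DiskSizeRequired / 1GB, 2)
    Write-Host "$($_.Name): $sizeGB GB"
}
```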

Security

  1. Unique permission scopes per library
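
A sketch for the permission-scope metric, assuming the snap-in is available and using a hypothetical site collection URL; it flags every list that breaks permission inheritance:

```powershell
# Sketch: flag lists/libraries with unique (non-inherited) permission scopes.
# The URL is a placeholder; substitute a real site collection.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$site = New-Object Microsoft.SharePoint.SPSite("http://SharePoint/ManagedPath/A")
foreach ($web in $site.AllWebs)
{
    foreach ($list in $web.Lists)
    {
        if ($list.HasUniqueRoleAssignments)
        {
            Write-Host "$($web.Url)/$($list.Title) has unique permissions"
        }
    }
    $web.Dispose()   # always dispose SPWeb/SPSite objects opened in script
}
$site.Dispose()
```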

Usage

  1. Items per library, and monthly growth rate
  2. Most active site / least active site
  3. Libraries/Sites with no activity in 90 days
  4. Documents with over 200 versions (consider trimming the middle 75% of versions when a document has more than 100)
  5. Duplicate documents
  6. Users who have not created documents in 30 days (perhaps they are storing files locally or on other media)
  7. Folders with over 2000 items in the root
  8. Any newly created library or site
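
As one example from the usage list, here's a minimal sketch (again with a hypothetical URL) that flags libraries holding more than 2,000 files in their root folder, the classic large-list threshold:

```powershell
# Sketch: flag libraries with more than 2000 files in the root folder.
# The URL is a placeholder; substitute a real site.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$web = Get-SPWeb "http://SharePoint/ManagedPath/A"
foreach ($list in $web.Lists)
{
    $fileCount = $list.RootFolder.Files.Count
    if ($fileCount -gt 2000)
    {
        Write-Host "$($list.Title): $fileCount files in the root folder"
    }
}
$web.Dispose()
```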

Search

  1. Search terms resulting in no-hits
  2. Very frequent searches

System

  • Reboots
  • IIS Pool recycle frequency
  • Daily report of Unexpected events in ULS log (could be large, and not actionable)
  • RAM usage > 95%
  • Full Crawl exceeding 20 hours
  • Incremental crawl not running hourly
  • Failed Timer jobs
  • CPU Usage > 90% average over any five minute period
  • Disk queue length on SharePoint server
  • Web Page load time (can have a program pull a page every minute)
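
The last item is easy to automate. Here's a sketch (hypothetical URL) that times a single page request, suitable for running every minute from a scheduled task:

```powershell
# Sketch: time one page load as a lightweight availability/performance probe.
# The URL is a placeholder; substitute a real page.
$url = "http://SharePoint/"
$stopwatch = [System.Diagnostics.Stopwatch]::StartNew()

$request = [System.Net.WebRequest]::Create($url)
$request.UseDefaultCredentials = $true   # authenticate as the running account
$response = $request.GetResponse()
$response.Close()

$stopwatch.Stop()
Write-Host "Page loaded in $($stopwatch.ElapsedMilliseconds) ms"
```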

Setting the Minimum BLOB Storage Size Correctly

How to set the Minimum BLOB Storage Size

Setting the minimum storage size for BLOBs is a bit tricky.  It turns out that directly assigning the property generally does not work, and the SPRemoteBlobStorageSettings object does not have an associated update() method.  Trying to update the containing object (the Content DB) doesn’t work either.  The trick is to use the associated setter method to set the value:

$rbs.set_MinimumBlobStorageSize(number)

Here’s a script that iterates through a set of content DBs, and sets the min blob size.  I prefer to work with lists of DBs:

 

$DBList="Content_1,Content_12,Content_3"
$DBs=$DBList.split(",")
for ($i=0; $i -lt $DBs.count; $i++)
{
    # Identify the content DB by name; if a web app has only one Content DB, you can get it from the web app instead
    $cdb=Get-SPContentDatabase -identity $DBs[$i]

    $rbs = $cdb.RemoteBlobStorageSettings
    # Here's what not to do. This does not work; the min blob size is lost with a save conflict:
    #$rbs.MinimumBlobStorageSize = 25kb # the minimum size of a file to be sent to the Blob Store; anything smaller is kept in the Content DB

    # The better approach is the setter method, which does not require an update() call.
    # Note SPRemoteBlobStorageSettings ($rbs here) has no associated update() method.
    $rbs.set_MinimumBlobStorageSize(25*1024)
}

Here’s a script to ensure that RBS is configured correctly, and the min blob size is what you expect it to be:

$DBList="Content_1,Content_12,Content_3"
$DBs=$DBList.split(",")

for ($i=0; $i -lt $DBs.count; $i++)
{
    $cdb=Get-SPContentDatabase -identity $DBs[$i]
    $rbs = $cdb.RemoteBlobStorageSettings

    write-host "RBS installation for $($cdb.Name) is $($rbs.Installed())"
    write-host "MinBlobStorageSize for $($cdb.Name) is $($rbs.MinimumBlobStorageSize)"
}


	
