Propagating legacy Document IDs into SharePoint

Document IDs in SharePoint

Document IDs are generated automatically by SharePoint when configured, but what if you have your own legacy Document IDs that you want to use? There is a way to push them in.

The first trick is recognizing that two internal fields are created when Document IDs are enabled:
ows__dlc_DocId : the actual Document ID
ows__dlc_DocIdUrl : the DocIdRedir.aspx based URL with the Document ID embedded in it

Let’s examine the latter field in a bit more detail. It contains a URL referencing “DocIdRedir.aspx”, an ASPX ghosted page that redirects via a local service lookup, with a fallback to a search-based lookup. Here’s what it looks like:

http://server/_layouts/DocIdRedir.aspx?ID=DOC-1000-3109, DOC-1000-3109

Note the comma, then the repeated Document ID.
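
If you ever needed to compose that field value from scratch for a legacy ID, a minimal sketch looks like this ($web and $legacyId are placeholders for your own site and value):

# Sketch: compose a DocIdUrl-style value for a legacy ID ($web and $legacyId are placeholders)
$legacyId = "DOC-1000-3109"
$redirUrl = "$($web.Url)/_layouts/DocIdRedir.aspx?ID=$legacyId"
$docIdUrlValue = "$redirUrl, $legacyId"   #the URL, a comma, then the repeated Document ID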

Now imagine if there’s a field in our document library that has the legacy Document ID we want to use.

The script below walks through the libraries and systematically replaces the existing Document ID with the legacy one we want to use.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue  #needed for Get-SPWebApplication and Start-SPAssignment
[system.reflection.assembly]::LoadWithPartialName("Microsoft.SharePoint") 
Write-Host ("$(get-date) Running script to assign DocIDs")
$SiteFilter = "*"
$siteUrl = "http://SharePoint" #use your own Web App URL
 
$ThisLibOnly = $null; #allows for filtered selection of libraries
      
Write-Host ("$(get-date) Assigning DocIDs from Legacy IDs")
 
$webApp=Get-SPWebApplication $SiteURL
$Sites = $webApp | Get-SPSite -Limit all
foreach ($site in $Sites)
{
$OutputSuffix=$loopLetter;
$mylogfile="C:\Temp\output.log"
  write-host $site.Url
 
   Write-Host -foregroundcolor darkblue "$($site.id) - $($site.Url) - $($site.contentdatabase.id) - $($site.contentdatabase.name)"  
 
   $WebScope = Start-SPAssignment
 
   foreach ( $TargetWeb in $site.AllWebs)
   {
 
        Write-Host("Working in site: " + $TargetWeb)
        $ListOfLists = @();
# Loop through all doc libs
        $lists=$TargetWeb.lists;
        $listsCount=$lists.count
 
    for ($ii=0; $ii -lt $listsCount; $ii++)
    {
    $JPlib=$lists[$ii];
       if ( ($JPlib.BaseType -ne "DocumentLibrary") -or ($JPlib.hidden) )
        {
          # forget the rest and return to top
          Write-Host -foregroundcolor darkred "fast test skipping Library: $($JPlib)";   
        }
        elseif ($JPLib.Title -Match "Photo|Image|Site Assets|Customized Reports|Templates|Pages|Picture|cache|style")
        {
          # forget the rest and return to top
          Write-Host -foregroundcolor darkred "fast test skipping Library because it mentions $Matches: $($JPlib)";   
        }
        elseif (!$JPlib.contenttypesenabled)
        {
 
        continue; # no content types, no interest
        }
        else
        {  
          Write-Host -foregroundcolor green "Processing Library: $($JPlib)";   
          $ListOfLists += $JPlib.title;
        }
    }
 
    foreach ($CurrentLib in $ListofLists)
    {
    $JPlib=$TargetWeb.lists[$CurrentLib];
    $JPlib.title
    if ($JPlib -eq $null)
    {
        Write-Host "COULD NOT GET LIB $($CurrentLib)"
        continue;
    }
 
          Write-Host -foregroundcolor green "Processing Library: $($JPlib) in $($TargetWeb)";   
 
    if (($ThisLibOnly -eq $null) -or
      ($JPlib.title -eq $ThisLibOnly)) 
    {
    $JPItems=$JPlib.Items;
    $JPCount=$JPItems.get_count();
 
    if ($JPCount -eq 0) {continue} #can't do much on a library with no items!
          
    for ($i=0; $i -lt $JPCount; $i++)  #Do not use ItemCount, that one includes folders!
    {   
        $JPItem=$JPItems[$i];
        $SourceValue=$JPItem["LegacyDocID"];
 
        if (($SourceValue -eq $null) -or ($SourceValue.length -le 0))
        {
 
            write-host "-" -nonewline
            continue; #nothing to assign
        }
        elseif ($JPItem["ows__dlc_DocId"] -ne $SourceValue) #avoid reassigning same value
        {
        Write-Host "Old DocID=$($JPItem['ows__dlc_DocId']),LegacyDocID=$($SourceValue)"
        $oldDocIDValue=$JPItem["ows__dlc_DocId"]
        $JPItem["ows__dlc_DocId"] = $SourceValue;
        if ($JPItem["ows__dlc_DocIdUrl"].length -gt 1)
        {
            $JPItem["ows__dlc_DocIdUrl"]= $JPItem["ows__dlc_DocIdUrl"].replace($oldDocIDValue,$SourceValue);
        }
 
        $JPItem.systemupdate() #without generating version
        Write-Host "$($i): $($JPItem.url)"
        }
        else
        {  #special!  $JPItem["ows__dlc_DocIdUrl"]=$null;
        Write-Host "DOcID Match! $($JPItem.url)"
        Write-Host "Old DocID=$($JPItem['ows__dlc_DocId']),LegacyDocID=$($SourceValue)"
        }
        } # item loop
 
        Write-Host "+" -NoNewline
 
    } # if this lib only
 
    $JPlib.Update();
    $JPlib=$null;
    } # lib loop
 
    try
    {
        $TargetWeb.dispose()
    } catch{}
    $TargetWeb=$null;
    } # all webs
 
    Stop-SPAssignment $WebScope
    $site.Dispose()
    $site=$null;
} # all sites
Write-Host "SCRIPT COMPLETE $(get-date)"

Generating automatic emails with embedded reports and link to uploaded CSV

Generating automatic emails with the embedded content

There’s often a need to automate the generation of reports on SharePoint. I typically need to generate such a report based on:
1. Content using search: acquired via a Search Query
2. Content via manual crawl: walking through libraries, even sites, site collections and farms
3. Scripted actions taken, such as moving or copying documents on an automated basis.

Other examples I’ve done include reports on checked-out documents, custom-delivered to the people who have had documents checked out for 30+ days.

There are a few parts to the script below:
1. Acquire the dataset for the report. In this example I walk through all SPItems in a given list.
2. Define how the email gets sent, and to whom, leveraging HTML email
3. Generate the HTML for this report
4. Create a temporary CSV of this report, and upload it to a SharePoint library
5. Send the email, ensuring it contains the summary report and a link to the uploaded CSV

To store the dataset, I use the system.Data.DataTable object, and build from there. It’s easy to copy a DataTable, or drop columns after the HTML is constructed.

I use the ConvertTo-Html cmdlet to convert the DataTable, explicitly referencing the fields and the order I’d like in the HTML table. However, I have to fix up the escaped HTML within it. For that I pipeline it through a function called Convert-HTMLEscape.

I prefer to tune up the DataTable and drop any unnecessary fields, as Export-Csv does not allow fields to be specified. I also specify -NoTypeInformation so the first row is the actual field names, allowing it to open correctly in Excel.

Note how the URLs are prettied up on the fly, so they read nicely in the HTML table.

Claims Authentication has a different account format, which I clean up in a separate Function Strip-ClaimsHeader.

Whether a file is checked out, the type of checkout, and who checked it out are some optional attributes I capture in the DataTable. This table is easily extended to support whichever metadata is desired.

Here’s the script:

if(!(Get-PSSnapin Microsoft.SharePoint.PowerShell -ea 0)) 
{ 
Write-Progress -Activity "Loading Modules" -Status "Loading Microsoft.SharePoint.PowerShell"
Add-PSSnapin Microsoft.SharePoint.PowerShell 
} 
 
$ToRecipients = "joelplaut@MyDomain.com"
 
$CCRecipients = $ToRecipients; #set to any audience you want, comma separated
$ToRecipientsArray = $ToRecipients.Split(",");
$CCRecipientsArray = $CCRecipients.Split(",");
 
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Administration")
 
 
function Save-File([string] $Dir, [string] $PartialName, [system.Data.DataTable] $MyTable)
{
#writes the file to a temporary disk-based location.  I add a timestamp to the name to ensure uniqueness.  $PartialName is a text description used to start the filename
# export-csv does all the heavy lifting.  Note the attribute "-NoTypeInformation" is needed so there isn't a type header on top of the field headers
$timestamp = Get-Date
$timestampstr = $timestamp.Month.ToString("00") + "_" + $timestamp.Day.ToString() + "_" + $timestamp.Year.ToString("0000") + " " + $timestamp.Hour.ToString("00") + "_" + $timestamp.Minute.ToString("00") + "_" + $timestamp.Second.ToString("00") 
 
$FileName = $Dir + '\' + $PartialName+'_'+$timestampstr+'.CSV';
 
$MyTable.rows | Export-Csv -Path $FileName -NoTypeInformation
 
return $FileName;
 
}
 
#writes file to the Reporting Library within SharePoint, adds timestamp to name to ensure uniqueness
function Upload-File([string] $FilePath)
{
    $Files = $RepList.rootfolder.Files
    $DocLibName = $RepList.Title; #this is not necessarily accurate 
    #Get File Name from Path
    $FileName = $FilePath.Substring($FilePath.LastIndexOf("\")+1)
 
    #delete the File from library, if already exist.  This should never occur
    try
    {
        if($Files.Item($DocLibName +"/" + $FileName))
        {
            $Files.delete($DocLibName +"/" + $FileName)
        }
    } catch {}
    #Get the File
    $File= Get-ChildItem $FilePath
 
    #Add File to the collection
    $x=$Files.Add($DocLibName +"/" + $FileName,$File.OpenRead(),$false)
 
     $ReportFilename = $RepList.ParentWeb.Url + '/'+ $DocLibName + '/' + $Filename;
     return $reportFileName;
 }
 
 
# If we are running in a Claims Authentication environment, we can strip off the claims tags
Function Strip-ClaimsHeader ($s)
{
    if ($s.IndexOf('#') -gt 0)  #handle stripping claims tags off name
    {
            return $s.Substring($s.IndexOf("#")+1)
 
    }
    else
    {
    return $s
    }
}
 
#used for HTML processing and fixup; this is a bit dense, but it supports the PowerShell pipeline, and restores XML tags
Function Convert-HTMLEscape {
 
<#
convert &lt; and &gt; (and &quot;) back to <, > and "
It is assumed that these will be in pairs
#>
 
[cmdletbinding()]
 
Param (
[Parameter(Position=0,ValueFromPipeline=$True)]
[string[]]$Text
)
 
Process {
foreach ($item in $text) {
    if ($item -match "&lt;") {
        <#
          replace the escaped codes with actual symbols. This line is a shortcut to do two replacements
          with one line of code. The code in the first set of parentheses replaces "&lt;" with "<". This normally gets written to the pipeline. By wrapping it in parentheses it tells PowerShell to treat it as an object so I can then call the Replace() method again and add the ">".
        #>
        (($item.Replace("&lt;","<")).Replace("&gt;",">")).Replace("&quot;",'"')
     }
     }
     else {
        #otherwise just write the line to the pipeline
        $item
     }
 }
} #close process
 
} #close function
 
 $env = "Prod"  # I like setting flags for changing environment settings
  
 if ($env -eq "Prod")
 {
    $RefListLocation = "http://SharePoint/sites/mySiteCollection/Subsite"
    $TempLoc = "C:\TempLocation"  # disk location to temporarily store the file
}
 
 
$table = New-Object system.Data.DataTable "JoelFancyReport"  #this is an internal name, that is not very important to set
 
#let's define a field (column) for every possible field; even those we don't yet use in a given report
$col1 = New-Object system.Data.DataColumn Link,([string])
$col2 = New-Object system.Data.DataColumn BusinessProcess,([string])
$col3 = New-Object system.Data.DataColumn DocType,([string])
$col4 = New-Object system.Data.DataColumn Modified,([string])
$col5 = New-Object system.Data.DataColumn Title,([string])
$col6 = New-Object system.Data.DataColumn FileName,([string])
$col7 = New-Object system.Data.DataColumn Editor,([string])
$col8 = New-Object system.Data.DataColumn CheckedOut,([string])
$col9 = New-Object system.Data.DataColumn CheckedOutBy,([string])
$col10= New-Object system.Data.DataColumn FileFolder,([string])
$col11= New-Object system.Data.DataColumn FileLink,([string])
$col12= New-Object system.Data.DataColumn ReportAction,([string])
 
# Just add the columns to the table
$table.columns.add($col1)
$table.columns.add($col2)
$table.columns.add($col3)
$table.columns.add($col4)
$table.columns.add($col5)
$table.columns.add($col6)
$table.columns.add($col7)
$table.columns.add($col8)
$table.columns.add($col9)
$table.columns.add($col10)
$table.columns.add($col11)
$table.columns.add($col12)
 
#we can create multiple tables with the same layout easily. 
# Copy is shown here for reference, and is not used in this simple example
$TableCopy = $table.Copy() 
 
#loop through whatever SPItems need to be looped
# here is where the basic logic gets put, before generating the actual report
# this is just a placeholder loop of items in a list, but this could be anything, including SPQuery results
 
$JPWeb = get-spweb $RefListLocation
$JPList  = $JPWeb.lists["MyList"]
$RepList = $JPWeb.lists["Reports"]  #reporting library used by Upload-File; substitute your own library name
$JPItems = $JPList.items;
$JPItemCount = $JPItems.count;
 
for ($k=0; $k -lt $JPItemCount; $k++)
{
    $SourceItem = $JPItems[$k];
    $ReportAction = "";  #reset per item
 
                if ($SourceItem.File.CheckOutStatus -ne "None")
                {
                    $ReportAction += ", CheckedOut "+$SourceItem.File.CheckOutStatus+" to:"+$SourceItem.File.CheckedOutBy.displayname;
                    $CheckedOut = "Yes"
                    $CheckedOutBy = $SourceItem.File.CheckedOutBy.displayname;
                }
                else
                {
                    $CheckedOut = "No"
                    $CheckedOutBy = $null;
                }
                    #let's create a new row 
                    $row = $Table.NewRow()
 
                    $FileLink =   $($JPWeb.Url)+'/'+$($SourceItem.url) ;
                    $row.FileLink = $FileLink;
                    $row.Link = "<a href='" + $FileLink + "'>Link</a>";
 
                    $FolderURL = $FileLink.substring(0,$FileLink.LastIndexOf('/'));
 
                    $row.FileFolder = $FolderURL;
 
                    $row.Title = $SourceItem.Title;
 
                    $row.FileName= $SourceItem.Name;
 
                    $row.Editor = strip-claimsHeader $SourceItem["Editor"];
 
                    $row.Modified= $SourceItem["Modified"];
 
                    $row.BusinessProcess = $SourceItem["BusinessProcess"] #sample field; add as many as you'd like
                    $row.DocType = $SourceItem["DocType"]
 
                    $row.checkedOut = $CheckedOut;
                    $row.checkedOutBy = $CheckedOutBy;
 
                    $Table.Rows.Add($row)
}
 
 
        $RunStatus = "All the processing completed $(get-date)"
        $Header = "Weekly report for my friends"
        $FileDescription = "WeeklyReport"  #base name for the CSV report file; substitute your own
        $emailTitle = "Weekly Report email $($RunStatus)"
 
 
# This is a nice table style: collapsed, with a colored header.
# Let's create the HTML table, specifying only the columns from the table that we'd like to see.  Any other columns won't appear in this report.
# By piping it to the Convert-HTMLEscape function, it fixes up the HTML.
# $a is assumed to hold the <style> block for the table (collapsed borders, colored header); define it to taste.

$MyOutput = $table | ConvertTo-Html Link, BusinessProcess, DocType, Editor, Modified, FileName, ReportAction -head $a -body "$($table.rows.count) $($Header)" | Convert-HTMLEscape

# If we need to reuse the table, just make a copy of it first.
# While ConvertTo-Html offered the luxury of column selection, Export-Csv converts every darn column, so let's remove the extra ones now.
$table.Columns.Remove("Link") #how to remove a column; links don't look too pretty within a CSV, so for this example I remove it here
 
$f2 = save-file $TempLoc ($FileDescription) $Table  #Saves the temp file to disk, driven out of the $Table
        $Report2URL = upload-file $f2
 
        $ReportSummary = "Rolled Report Available at: " + "<a href='" + $Report2URL + "'>" + $FileDescription + " Custom Report</a>";
  #debug technique; it's easy to route the HTML to a file for examination, see the commented out line below:
    #$MyOutput > C:\A.html 
[string] $From = "SharePoint Support@MyDomain.com"
[string] $To = $null; #$Recipients #"joelplaut@MyDomain.com"
[string] $Title = $emailTitle;
$SmtpClient = New-Object System.Net.Mail.SmtpClient
$SmtpServer = "mail.MyDomain.com"
$SmtpClient.host = $SmtpServer
$MailMessage = New-Object system.net.mail.mailmessage
 
$mailmessage.from = $From;
foreach ($recip in $ToRecipientsArray)
{
$mailmessage.To.add($recip)
}
foreach ($recip in $CCRecipientsArray)
{
$mailmessage.CC.add($recip)
}
$mailmessage.Subject = $Title
$mailmessage.Body = ($MyOutput | Out-String) + $ReportSummary  #HTML table plus the link to the uploaded CSV
$MailMessage.set_IsBodyHtml($true)
$smtpclient.Send($mailmessage)

Quick Inventory of SharePoint Databases

Inventory SharePoint Databases

Often I have to quickly analyze SharePoint databases; both Content DBs as well as the full set of existing databases. This could be to see which of the databases are still a part of the farm. Here’s how to quickly see which Content DBs are mounted in the Farm:

 

get-spcontentdatabase

For the more general set of all databases associated with the farm, it is nice to know the type of each database. Here’s a quick query to get that for you:

Get-SPDatabase | select name, type
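
If you also want a rough sense of size, the DiskSizeRequired property (reported in bytes) can be added to the same query; a minimal sketch:

Get-SPDatabase | Sort-Object DiskSizeRequired -Descending |
    Select-Object Name, Type, @{Name="SizeGB";Expression={[math]::Round($_.DiskSizeRequired/1GB,1)}}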
 
Happy farming!

Reporting on SharePoint Event Receivers

SharePoint Event Receiver Reporting

Event Receivers are a very powerful SharePoint construct. Because of timing issues, they can be tricky to get right. My good friend Jimil Ibrahim asked about this recently, prompting me to provide just such a reporting capability for general use. Below is a script that will quickly generate a report of all Event Receivers in all Libraries, in all SPWebs in all SPSites. The output is easily imported into MS-Excel for rapid analysis.

$wa=Get-SPWebApplication http://SharePoint  #substitute with your own Web Application.
 
foreach ($site in $wa.Sites)
{
Write-Host "S" -NoNewline
    foreach ($web in $site.allwebs)
    {
    Write-Host "W" -NoNewline
 
        for ($LibIndex=0; $LibIndex -lt $web.Lists.Count; $LibIndex++)
        {
        Write-Host "L" -NoNewline
            $list = $Web.Lists[$LibIndex];
 
            $i=0;
 
 
            foreach ($ER in $list.EventReceivers)
            {
                Add-Content C:\Reports\ERs.csv "$($Site.url),$($web.title),$($web.url),$($List.title), Event $($i): $($ER.name), $($ER.Synchronization)"
                $i++
            }
 
        }
    }
}

SharePoint Full Farm Site Collection Admin report

Reporting on SharePoint Full Farm Site Collection Admins

Site Collection Administrators retain a great deal of power and capability within a SharePoint Site Collection. Wouldn’t it be nice to know who the Site Collection admins are across an entire farm? This script will generate a report that is easily loaded and pivoted in Excel. First, let’s pick a column separator that’s not in use by any of the fields. This is a bit of a challenge, because with Claims based authentication there are quite a few characters already in use, which precludes them from being the separator. Here’s the script:

 

$Sep="~"
 Write-Host "WebApp$($sep)SiteCollection$($sep)UserName$($sep)UserLogin"
 $spWebApps = get-spwebapplication
 
 foreach ($spWebApp in $spWebApps)
 {
    foreach($site in $spWebApp.Sites)
    {
        foreach($siteAdmin in $site.RootWeb.SiteAdministrators)
        {
            Write-Host "$($spWebApp.Name)$($sep)$($siteAdmin.ParentWeb.Url)$($sep)$($siteAdmin.DisplayName)$($sep),$($siteAdmin.UserLogin)"
        }
        $site.Dispose()
    }
    $spWebApp=$null; #this is not an IDisposable object
}
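
Write-Host output can’t be redirected to a file, so if you’d rather capture the report directly, a minimal variation is to build each line and append it with Add-Content (the output path below is just an example; substitute your own):

$Sep = "~"
$OutFile = "C:\Reports\SiteCollectionAdmins.csv"  #example path; substitute your own
Add-Content $OutFile "WebApp$($Sep)SiteCollection$($Sep)UserName$($Sep)UserLogin"
foreach ($spWebApp in Get-SPWebApplication)
{
    foreach ($site in $spWebApp.Sites)
    {
        foreach ($siteAdmin in $site.RootWeb.SiteAdministrators)
        {
            Add-Content $OutFile "$($spWebApp.Name)$($Sep)$($site.Url)$($Sep)$($siteAdmin.DisplayName)$($Sep)$($siteAdmin.UserLogin)"
        }
        $site.Dispose()
    }
}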

Rationalizing SharePoint Storage

SharePoint Storage Management

SharePoint storage tends to grow organically, and in an uneven fashion. Periodically it makes sense to take stock of how Site Collections are distributed amongst Content DBs. The goal should ideally be to keep Content DBs at 50GB or below where possible. When a Site Collection grows to 100GB or more, steps should be taken to manage the growth, as large Content DB performance can degrade, and backup/restore can become lengthy.

Here’s a one-line script that outputs how site collections are distributed among Content DBs, and the size of the Content DB. The results can be pasted into Excel. If needed, Excel can separate Text to Columns, allowing you to pivot larger data sets:

Get-SPContentDatabase | get-spsite -Limit all | % {write-host "$($_.rootweb.url)|$($_.rootweb.title)|$($_.contentdatabase.name)|$($_.ContentDatabase.DiskSizeRequired)"}
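
To spot the databases that have already outgrown the 50GB target, a quick check along these lines works (DiskSizeRequired is in bytes):

# Flag content databases over 50GB, with their site collection counts
Get-SPContentDatabase | Where-Object { $_.DiskSizeRequired -gt 50GB } |
    ForEach-Object { Write-Host "$($_.Name): $([math]::Round($_.DiskSizeRequired/1GB,1)) GB, $($_.CurrentSiteCount) site collections" }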

Patching SharePoint: Useful Tips

Patching SharePoint

In patching SharePoint, prior to the August 2012 CU (wow, that’s a long time ago), patches had to be applied sequentially as a matched pair. First Foundation, then Server. As of August 2012 CU, there’s a single patch. If you have SP2010 Server, just apply that one CU. Note the 2012 CUs assume SP1 is already installed.

Note: I would not apply a CU unless the last SharePoint Config Wizard ran successfully. That can be seen in Central Admin under Upgrade/Patch Status. The one exception is a farm with a known existing problem running a successful Config Wizard session, in which case a CU is a rational approach to solving this, as long as you have a server snapshot and a full DB Server backup (all SharePoint DBs) to roll back to in case the CU doesn’t solve the Config Wizard session issue. The place to start for Config failures is the log. After any Config Wizard run a log is produced. This log provides (too much) detail. The place in the log to look is the first “ERROR” text.
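
If you want to jump straight to that first error from PowerShell, something like the following works, assuming the default SP2010 install path (adjust the hive number for other versions):

# Find the first ERROR in the most recent Config Wizard log (default 14-hive LOGS folder assumed)
$logDir = "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS"
$log = Get-ChildItem "$logDir\PSCDiagnostics_*.log" | Sort-Object LastWriteTime -Descending | Select-Object -First 1
Select-String -Path $log.FullName -Pattern "ERROR" | Select-Object -First 1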

Sometimes the Registry on a server gets out of sync with the products in the Farm Config DB. Here’s how to very quickly and safely sync them:

 

Set-SPFarmConfig -installedproductsrefresh

Sometimes the CU will complain that the expected version is not found. You can force this to occur, by telling the CU to bypass the detection of correct software, but do so at your own risk:

.glb.exe PACKAGE.BYPASS.DETECTION.CHECK=1

After applying the CU, a reboot is required. The process tells you as much at the end. However, after the reboot it doesn’t tell you that you still need to run the Config Wizard. Here’s how:

From command line, you can always trigger a Config Wizard session using:

psconfig -cmd upgrade -inplace b2b -wait

More thorough is this command:

psconfig -cmd upgrade -inplace b2b -force -cmd applicationcontent -install -cmd installfeatures
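
To confirm afterwards that the farm no longer thinks an upgrade is pending, here is a quick check from PowerShell (a minimal sketch; run it on a farm server):

# Check the farm build number and whether psconfig still needs to run
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$farm = Get-SPFarm
Write-Host "Farm build: $($farm.BuildVersion)  Needs upgrade: $($farm.NeedsUpgrade)"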

August 2012 CU installs clean, but sometimes the bypass detection is needed. Spot testing indicates faster performance in the browser, and lower RAM utilization, when compared against SP1 + June 2011 CU.

Migrating to SharePoint: A rational approach

Migrating to SharePoint

What’s the best way to go about migrating to SharePoint? Of course it depends on where documents are today, the state of the current, if any, SharePoint environment, and how the enterprise operates. Here’s one sensible approach:

  • Understand and assess existing environment
    This includes file shares and SharePoint
  • Capture how users today use documents
    This includes search, edit, collaboration within and outside the organization, versioning and security
  • Document the desired end-state
    This includes Metadata user tagging as well as default tagging, navigation, security and search capabilities, as well as versioning, and collaboration. Metadata analysis includes department and project level needs.
  • Tool selection
    Migration is best achieved via a migration tool. Requirements/size of source/budget drive the selection.
  • Define metadata universe
    Create a centrally managed set of definitions mapped to Content Types and fields.
  • Define Views and metadata navigation
    This supports how users identify documents and narrow their review via forms of filtering.
  • Define Standards
    These include how versioning works, whether documents require approval, document visibility, how documents are managed, and the general limits around document size and
    types.
  • Define site structure
    Programmatic creation of site collections, sites, libraries and possibly folders. This allows creation, prototype and demonstration in a development environment, and the ability to recreate a structure in the Production Environment.
  • Define Security
    Define who can access what, and what level access is needed.
  • Define the mapping and rules
    Document which groups of source documents go to specific areas in the destination SharePoint environment.
  • Migration
    The actual migration of documents, which includes mapping and metadata tagging based on agreed rules.
  • Search
    Creation of search capabilities that meet the needs for locating documents.
  • Governance
    Review, refinement and documentation of how the full environment is managed, controlled, updated and refined, as well as who retains which rights and how those rights are assigned.

This approach covers the basics of an enterprise-class structured document management system.  This is just a summary of the categories of effort involved in creating a system that would serve users’ needs.  There is no single “one-size-fits-all” when addressing the unique document management needs of an enterprise, especially when building and optimizing the system to be a core, efficient capability for the company.  There is a range of further capabilities that can be brought to bear, including analytics, integration, and submitting documents via email, as well as deeper MS-Office integration, that would be determined during the initial scoping and requirements.

SharePoint Timer Job History analysis

Analyzing the SharePoint Timer Job History

At times one needs to analyze the timer job history to see which jobs have run a long time. The Central Admin user interface limits you to a sequential view, although you can filter by Service or Web Application. The actual timer job history is maintained in a single table in the Config DB. Here’s a simple SQL Select to get the 100 longest running timer jobs in a time range:

 

/****** Script for SelectTopNRows command from SSMS  ******/
SELECT top 100 [Id]
      --,[ServiceId]
      --,[WebApplicationId]
      --,[JobId]
      --,[ServerId]
      ,[Status]
      ,[StartTime]
      ,[EndTime]
      ,[WebApplicationName]
      ,[JobTitle]
      ,[ServerName]
      ,[DatabaseName]
      ,[ErrorMessage]
      ,datediff(S,StartTime,EndTime) as SecondsElapsed
  FROM [SharePoint_2013_Farm].[dbo].[TimerJobHistory]
  where StartTime > '2013-02-12 02:40:00' and EndTime < '2013-02-12 03:55:00'
  --note that the TimerHistory timestamp is always in GMT!
  order by SecondsElapsed desc

The same table can also be queried from PowerShell. Here the failed jobs (Status = 3) from the last day are pulled into a variable, then re-inserted into a separate reporting database:

$events = Invoke-Sqlcmd -ServerInstance "NY-SRV-SQLPRD02" -Database SharePoint_2013_Farm -Query "select Status,JobTitle,WebApplicationName,ServerName,DatabaseName,StartTime,EndTime,ErrorMessage from dbo.TimerJobHistory where Status=3 and StartTime between GETDATE()-1 and GETDATE()"

foreach ($event_num in $events)
{
    Invoke-Sqlcmd -ServerInstance "NY-SRV-SQLPRD04" -Database MYdbreports -Query "insert into [SharePoint].[TimerJobHistory] (Status,StartTime,EndTime,JobTitle,ServerName,DatabaseName,ErrorMessage) values ($($event_num.Status),'$($event_num.StartTime)','$($event_num.EndTime)','$($event_num.JobTitle)','$($event_num.ServerName)','$($event_num.DatabaseName)','$($event_num.ErrorMessage)')"
}

System characteristics of a well performing SharePoint Farm

Characteristics of a well-performing SharePoint Farm
Tempdb data files

  • Dedicated disks for the tempdb
  • Tempdb should be placed on RAID 10
  • The number of tempdb files should equal the number of CPU cores, and the tempdb data files should be set to an equal size

The size of each tempdb data file should be roughly equal to (physical memory ÷ number of processors); that’s the initial size of the tempdb data files. A quick check of the file count is sketched below.
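
This sketch assumes Invoke-Sqlcmd (from the SQL Server PowerShell module) is available on the box; substitute your own SQL instance name:

# Compare the CPU core count to the number of tempdb data files
$cores = (Get-WmiObject Win32_Processor | Measure-Object -Property NumberOfCores -Sum).Sum
$tempdb = Invoke-Sqlcmd -ServerInstance "MySQLInstance" -Query "select count(*) as DataFiles from tempdb.sys.database_files where type_desc = 'ROWS'"
Write-Host "CPU cores: $cores  tempdb data files: $($tempdb.DataFiles)"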

DB management

  • Enough headroom must be available for the databases and log files, plus enough capacity to keep up with the requests
  • Pre-grow all databases and logs if you can. Be sure to monitor the sizes so that you do not run out of disk space
  • When using SQL Server mirroring, do not store more than 50 databases on a single physical instance of SQL Server
  • Database servers should not be overloaded with too many databases or too much data
  • Content databases should be limited to 200GB
  • Indices should be defragmented and rebuilt daily, if users can absorb the downtime required to rebuild
  • More than 25% disk space must be free
  • Another instance of SQL must be created for data exceeding 5 TB

Monitor the database server

Key performance counters to monitor are:

  • Network Wait Queue: at 0 or 1 for good performance
  • Disk latency (Avg. Disk sec/Read and sec/Write): less than 5 ms
  • Memory used: less than 70%
  • Free disk space: more than 25%
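
A quick way to sample a few of these from PowerShell is Get-Counter; a minimal sketch (counter names assume an English OS):

$counters = '\PhysicalDisk(_Total)\Avg. Disk sec/Read', '\Memory\Available MBytes', '\LogicalDisk(_Total)\% Free Space'
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 3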