Cameron's Blog

SharePoint 2013 - Internal Javascript Classes

Recently, we had a very strange issue with some customization failing and blocking built-in functionality.

The customization aside, we noticed this strange difference between our SharePoint environments.

SP JS files with different content on identical servers

So basically, these JS files serve the same purpose, yet the obfuscated member names of the class differ across environments (not all of them). The last modified dates of the files on one server were different from those on the other, but apart from that the files were the same. Our other two environments had the same member names as one of these two.

The JavaScript is obfuscated because Microsoft generates these files using Script#.

The fact that they're generated may be the reason they can differ. I don't think they're generated during the installation of your SP farm, although that would explain how the member names can differ across environments. What I think is more likely is that Microsoft generates them on their side and deploys them as-is everywhere, but the generation algorithm was changed at some point and ended up naming these members differently between versions.

I still don’t get how both environments are supposedly the same SP Farm version, but oh well.

So let this be a warning if you intend to interact with these generated classes (or override their functionality… as our offending customization did –_–): you can't be certain that the naming will stay the same over time or be the same across environments.

SharePoint - Hosting a WCF Service With a SPContext

If you want to extend SharePoint, adding custom web services is one way to do it. There are usually several requirements, and if one of them is the need for an SPContext, you can use the built-in SharePoint WCF service host factories.

There are a couple of things to keep in mind when using service host factories. They're basically the programmatic alternative to web.config configuration files. This has the benefit of not having to deploy or update a web.config file somewhere, but it also has the downside of not being able to quickly reconfigure something by dropping a web.config next to the .svc file. It's all code from here, no configuration.

Deployment

These .svc files can be deployed to the ISAPI hive folder, preferably in their own subfolder. They'll be accessible at this URL from any site collection in the farm:

  • http://{sitecollectionurl}/{subweburl}/_vti_bin/{pathToMySvcFileInTheISAPIFolder}/{myservicefilename}.svc
  • _vti_bin points to the ISAPI folder, so {pathToMySvcFileInTheISAPIFolder} would be the hierarchy of folders you used inside the ISAPI folder before you see your {myservicefilename}.svc file.

It is "web" aware, so SPContext.Current.Web will be whatever web you called the service on. Keep this in mind if you want to locate artifacts in your webs / sites, because it means you'll have to call your service from the correct web to make them accessible.

Because the .svc file is deployed to the HIVE, you will need a SharePoint project in your solution to deploy the file with. This makes deployment itself really easy, you don’t even have to activate a feature or create an IIS site. SharePoint takes care of everything.

Creating the service

A service factory is specified in the .svc file. SharePoint provides 3 service factories for you to use, depending on the kind of service you want to create:

  • SOAP: MultipleBaseAddressBasicHttpBindingServiceHostFactory
  • REST: MultipleBaseAddressWebServiceHostFactory
  • ADO.NET Data Service : MultipleBaseAddressDataServiceHostFactory

I wanted to make a REST service, so I used the MultipleBaseAddressWebServiceHostFactory in my case. This means my .svc file looked like this:

<%@ServiceHost Language="C#" Debug="true"
Service="NameSpaceOfMyService.MyServiceClassName, $SharePoint.Project.AssemblyFullName$"
Factory="Microsoft.SharePoint.Client.Services.MultipleBaseAddressWebServiceHostFactory, Microsoft.SharePoint.Client.ServerRuntime, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

You'll notice we're using a VS token in this file; to configure VS to replace the token in .svc files you can follow the instructions described on MSDN.

For it to work correctly, you’ll have to add some extra attributes to the class implementation of your service:

  • BasicHttpBindingServiceMetadataExchangeEndpointAttribute
  • AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)

Additionally, you can add the following attribute if you want to have the exception details shown when browsing to the service:

  • ServiceBehavior(IncludeExceptionDetailInFaults = true)

Once that’s done you can start adding service contracts & service methods the way you normally do in WCF.
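Put together, a minimal sketch of the decorated service class might look like this (the namespace and class names are hypothetical and mirror the .svc example above; the service contract interface is covered under Service Methods below):

using System.ServiceModel;
using System.ServiceModel.Activation;
using Microsoft.SharePoint.Client.Services;

namespace NameSpaceOfMyService
{
    [BasicHttpBindingServiceMetadataExchangeEndpoint]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
    [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class MyServiceClassName : IMyServiceContract
    {
        // Service method implementations go here (see Service Methods below).
    }
}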

Security

The web service is callable by anybody. You will not be blocked from calling the service just because you don't have access to the SharePoint site you called it from.

You WILL be blocked from interacting with the SharePoint Artifacts at that location if the credentials you used do not have sufficient permissions on those SharePoint Artifacts.

So you're responsible for handling security outside of the SharePoint-interacting code. This especially applies to any SPSecurity.RunWithElevatedPrivileges code blocks, as they basically allow anybody to run that piece of code, even callers that aren't even known inside SharePoint.

Service Methods

You'll first have to create an interface with the [ServiceContract] attribute on the interface itself and the [OperationContract] attribute on its methods.

Next, you have the .svc code-behind class implement this interface.
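A minimal sketch of what that could look like for a REST-style service (hypothetical names; the WebGet attribute from System.ServiceModel.Web is what makes the method reachable over HTTP GET):

using System.ServiceModel;
using System.ServiceModel.Web;
using Microsoft.SharePoint;

[ServiceContract]
public interface IMyServiceContract
{
    [OperationContract]
    [WebGet(UriTemplate = "weburl", ResponseFormat = WebMessageFormat.Json)]
    string GetCurrentWebUrl();
}

// (service behavior attributes from the previous section omitted for brevity)
public class MyServiceClassName : IMyServiceContract
{
    public string GetCurrentWebUrl()
    {
        // SPContext.Current.Web is whatever web the service was called on (see Deployment above).
        return SPContext.Current.Web.Url;
    }
}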

Updating SharePoint artifacts

If you want your service methods to expose functionality that makes changes to SharePoint artifacts (create a list, remove a list item, update list item properties) you will run into some issues.

  • GET request
    • This will give you errors regarding "unsafe updates"
      • This check exists to prevent an unsuspecting user from accidentally executing code with unintended consequences (for example, because of a link that was injected somewhere in a page).
  • POST action
    • This will give you errors because of the SharePoint page form digest, which helps prevent CSRF (cross-site request forgery)
      • This check exists to prevent a user's credentials from being used by a different domain than the page the user visited.

The workarounds in these cases are to use web.AllowUnsafeUpdates = true and SPUtility.ValidateFormDigest().
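For reference, the textbook usage of both looks roughly like this (a sketch; it assumes you control the SPWeb instance being updated, which, as explained next, isn't always the case):

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Utilities;

public class ArtifactUpdater
{
    // In practice this would be a method on your service class.
    public void UpdateWebTitle(string newTitle)
    {
        SPWeb web = SPContext.Current.Web;

        // POST scenario: validate the form digest the client sent along.
        if (!SPUtility.ValidateFormDigest())
        {
            throw new InvalidOperationException("Missing or invalid form digest.");
        }

        // GET scenario: explicitly allow updates outside of a validated POST.
        web.AllowUnsafeUpdates = true;
        try
        {
            web.Title = newTitle;
            web.Update();
        }
        finally
        {
            web.AllowUnsafeUpdates = false;
        }
    }
}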

The truth is that for either of these, in the case of Web Service methods, you’re kind of stuck. Sure, in your GET method you can set web.AllowUnsafeUpdates to true, but you don’t always control the creation of the SPSite/SPWeb objects being used by the underlying SharePoint code.

The same goes for POST methods: you can go get a digest token from SharePoint and have it validated when calling the web service, but that puts an extra burden on the client calling the service.

In essence, these security measures were designed with public-facing sites / web services in mind. SharePoint is often used for intranets, and in my case that is what we were developing these services for.

Never trust user input. Even in intranet situations.

GET Method

Let's assume your GET method needs to do something that web.AllowUnsafeUpdates = true doesn't help you do. How does SharePoint know it's a GET request? It first checks to see if there's an SPContext… Yes, the very reason we built our service like this was to have an SPContext in the first place, and now it's basically the reason we cannot execute our code from inside a GET request.

Use case: we needed a GET method that could be called upon a certain trigger to archive a SharePoint artifact. In this case, we could validate the conditions that determined whether the artifact had to be archived inside the service method itself. This meant malicious use of the service wasn't really possible, as it was just a trigger to CHECK whether the artifact needed to be archived and, if so, to archive it. Worst case, we'd be doing a lot of unnecessary checks if a random person kept calling the service method.

Yet, the artifact we were dealing with was a DocumentSet, and the underlying code was recreating its own SPSite / SPWeb objects, so we couldn't use the recommended approach of setting web.AllowUnsafeUpdates = true.

It's probably a good idea to wrap the workaround described below (clearing the HttpContext) in an object implementing IDisposable, saving the context in a backing variable so you can put it back afterwards.

POST Method

As we saw before, the POST actions get special treatment too by means of the Form Digest token.

This is because it's almost naturally assumed that any POST request in a SharePoint context will happen from a form. Yeah right. As if there's no such thing as calling WCF services from outside of a web page context, like from an Office app? Think again.

Use case: we needed to allow a file to be uploaded to a predetermined library. We already knew that only authorized people could do these things, both in SharePoint and from our service method, and that was the only security we were going to have on the service method. Since we were not particularly concerned about CSRF inside our intranet, we were not going to make the client explicitly request a form digest to validate (on web pages SharePoint serves it along for you); we were going to disable the check.

This check is also bypassed by web.AllowUnsafeUpdates = true, for some reason. In our case that was sufficient, but if not we'd have had to use the same workaround as explained below.

Security issues workaround

The workaround was to unset the current HttpContext (and with it SPContext.Current), which makes SharePoint think we are executing from an application rather than a web request, so it no longer blocks changes to the SharePoint artifacts.

HttpContext.Current = null;

Keep in mind I don't really approve of this trick; I just don't see a good alternative made available by SharePoint out of the box. For GET requests, the recommended alternative isn't sufficient in some cases; for POST requests, it's just dumb to have to make two WCF calls to get the same thing done.

But if you're aware of the risks and make sure you're prepared for any "malicious" service calls (as in, don't let your service methods do anything irreversible in case they're executed in unintended situations), I believe it's a good alternative.
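As suggested earlier, a small sketch of wrapping the trick in an IDisposable, with a backing variable so the original context always gets put back (the class name is made up):

using System;
using System.Web;

public class HttpContextSuppressor : IDisposable
{
    // Backing variable so the original context can be restored afterwards.
    private readonly HttpContext _original;

    public HttpContextSuppressor()
    {
        _original = HttpContext.Current;
        HttpContext.Current = null; // SharePoint now behaves as if we're not in a web request
    }

    public void Dispose()
    {
        HttpContext.Current = _original;
    }
}

// Usage:
// using (new HttpContextSuppressor())
// {
//     // code that updates SharePoint artifacts
// }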

Using multiple service contracts within the same service codebehind

This title may not be so clear at first. The idea is to reuse the same listenUri endpoint (base URL) to host multiple service methods that have been defined in different service contracts.

What does this look like in the codebehind of your .svc file (the .svc.cs file) ? Like so:

public class MyService : IMyServiceContractForHR, IMyServiceContractForFinance
{

}

Pretty straight forward. Except….

Yeah, SharePoint of course. If you look at the implementation of (at least) the MultipleBaseAddressWebServiceHost you'll find that it has a method for adding the default endpoints. It even has a property especially for holding all the service contracts it detected being implemented by the service.

And then it takes the first one, adds endpoints for it and calls it a day.

I honestly don't know if there might be a technical reason for this, but if vanilla WCF allows you to do this, then why doesn't a SharePoint-specific service host factory allow you to do it? And why in the frigging **** do they have to make it all internal every frigging time. I wasn't gonna bother recreating the endpoints as closely to the real thing as possible (I'm no WCF expert, believe it or not), which may not even be totally necessary. But you know, this whole service was tested and prepared under its current configuration. It was meant to implement multiple contracts. It was gonna implement multiple contracts.

Enter: The interface that inherits from all.

Yeah, apparently this works. Just have a different interface, IMyService, that inherits from all the Service Contracts you want to implement and you’re done. Because, yeah, we just need to make sure there’s only one Service Contract interface on that service class. sigh
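Roughly, that looks like this (hypothetical names), so the service class only directly implements a single interface:

using System.ServiceModel;

[ServiceContract]
public interface IMyServiceContractForHR
{
    // [OperationContract] methods for HR
}

[ServiceContract]
public interface IMyServiceContractForFinance
{
    // [OperationContract] methods for Finance
}

// The interface that inherits from all the contracts we want to expose
public interface IMyService : IMyServiceContractForHR, IMyServiceContractForFinance
{
}

public class MyService : IMyService
{
    // Implement the members of both contracts here
}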

Creating a custom service host factory

If you’re using the SharePoint provided service host factory it means you’re using the service as it’s configured by SharePoint:

  • 3 endpoints
    • /
    • /anon
    • /ntlm
  • no /help page to browse to
  • default 2mb upload limit

If you want to change these properties of your service, remember, placing a web.config near the .svc file will not make a difference. You'll have to subclass the service host factory and the service host, and in the service host's OnOpening event update the settings of the endpoints that have been created by the SharePoint base class.

public class CustomMultipleBaseAddressWebServiceHostFactory : Microsoft.SharePoint.Client.Services.MultipleBaseAddressWebServiceHostFactory
{
    protected override System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        return new CustomMultipleBaseServiceHost(serviceType, baseAddresses);
    }
}

public class CustomMultipleBaseServiceHost : Microsoft.SharePoint.Client.Services.MultipleBaseAddressWebServiceHost
{
    public CustomMultipleBaseServiceHost(Type serviceType, params Uri[] baseAddresses)
        : base(serviceType, baseAddresses)
    {
    }
}

This also means you’ll have to change the reference to the service host factory used by your service in the .svc file:

<%@ServiceHost Language="C#" Debug="true"
Service="NameSpaceOfMyService.MyServiceClassName, $SharePoint.Project.AssemblyFullName$"
Factory="NameSpaceOfMyServiceHostFactory.MyServiceHostFactoryClassName, $SharePoint.Project.AssemblyFullName$" %>

Updating endpoint configuration

The best way to update the endpoints that SharePoint creates by default is to do it in the OnOpening event. I prefer doing this to stay as close as possible to the default SharePoint setup, as opposed to clearing the list of default endpoints and recreating them manually with your preferred configuration.

protected override void OnOpening()
{
    base.OnOpening();

    foreach (ServiceEndpoint endpoint in this.Description.Endpoints)
    {
        EnableHelpPageOnWebHttpBehavior(endpoint);
        IncreaseFileUploadSize(endpoint);

    }
}

You need to call the base method first because that one will take care of creating the default endpoints.

Enabling the service help page

private static void EnableHelpPageOnWebHttpBehavior(ServiceEndpoint endpoint)
{
    foreach (var webHttpBehavior in endpoint.EndpointBehaviors.OfType<WebHttpBehavior>())
    {
        webHttpBehavior.HelpEnabled = true;
    }
}

Increasing the upload limit

private static void IncreaseFileUploadSize(ServiceEndpoint endpoint) {
    var customBinding = endpoint.Binding as WebHttpBinding;
    if (customBinding != null)
    {
      customBinding.MaxBufferSize = Int32.MaxValue;
      customBinding.MaxReceivedMessageSize = Int32.MaxValue;
    }
}

Common Error

If you forgot to add the [ServiceContract] attribute to your service contract interface, you'll get an error when creating the service host (the first time you call the service) saying something like: no service contract.

Yeah, that’s actually a very accurate message, for once. Still it took me a while to figure out the attribute was missing. So consider this an addition mostly for my own benefit :).

Tutoro - Automatic Chain Oiler

My SV 650N in France

Introduction

Ever since I bought my SV 650 in October 2013, I have researched a lot about how to properly maintain a motorcycle. Cleaning and lubricating the chain is a big part of that.

Then came the time to apply chain lube on my motorcycle. Guh, that was a big mess. I’ll admit I didn’t have a rear wheel paddock stand at the time to lift it so I could move the chain while applying it, but damn, it just didn’t seem very efficient either.

Because, basically, what you do is apply a large enough amount at once so that you won't have to redo it too soon, causing most of it to just fling off and make your rim and swing arm all black.

I had to find something better. Then I found chain oilers, and more specifically, I found the Tutoro Automatic Chain Oiler.

Tutoro Automatic Chain Oiler

Tutoro Automatic Chain Oiler installed on SV 650N

The reason I went for the Tutoro and not say a Scottoiler (arguably a lot more popular) is that it’s just so simple.

The automatic part means you don’t have to manually close and open the valve to flow oil when riding. When the bike is not moving, it doesn’t flow oil, when it is moving and hits a bump, it does flow oil. This is regulated by a weight in the reservoir that moves up & down because of bumps in the road (so there is a benefit to Belgian roads after all..) and the valve determines how much oil you flow each time it is released by the weight. Or at least, that’s my understanding. With the manual oiler you would have to open and close the valve after you’re done riding, I think.

Notice that there's no electrical wiring involved, or anything else that's directly connected to your motorcycle.

And it does flow, a lot. I almost have it fully closed most of the time because, I guess, there’s just a whole lot of bumps in the road in Belgium. In the picture, you can see I’ve got the valve open just a few degrees (counter clockwise). Fully closed means having the dot at 12 o’clock.

Valve during normal operation

After cleaning my chain I’ll do 1 ride with the valve open a few more degrees for added lubrication, but when I did this, I ended up with the specks on my rim again, so it may not even be really necessary :).

Rim with oil specks from excess chain oil

Looking at the chain itself, honestly, I couldn't tell if it's too much or just enough, only that it looks wet and when you swipe it with your finger, your finger comes off all black :).

Chain with chain oiler

The oil that Tutoro uses is a little different from the lube you spray from a can. It's a lot more fluid, which from what I gather is better for the chain as opposed to more solid lube, but it has the tendency to fling off more easily, which is also why you need something like a chain oiler to make it worthwhile.

Ideally, the chain oiler applies barely enough oil while on the road (with a warmed-up chain, which is a plus) to counter the fling (and wash-off due to rain) and keep the oil on the chain at a constant level. This is opposed to what you do when you lube it from a can, i.e. apply a lot at once so that after x amount of kms you'll still have enough on it.

Also, it’s arguably better for the life of the chain if it has a constant level of lubrication which attracts less dirt etc than the a-lot-almost-nothing cycle that you get with chain spraying (and a lazy habit of not cleaning and lubricating after every ride). And god, it’s just so much easier.

Installation

It's a fully independent system. Installation just means fixing the reservoir (upright), running a tube to your swingarm and pointing the nozzle next to the rear sprocket (preferably at the center as shown in the picture). And in case you were wondering, yes, the zip ties holding the reservoir to the passenger foot peg are strong enough. Holding the reservoir mount I can move my bike…

Tubes running from reservoir to swing arm

I attached the tubes and helix (which has more metal wiring to more accurately point the nozzle at the sprocket) using only the provided zip ties, which work great. But in the future I'd like a more permanent solution, either welding or gluing something to the swing arm to slide the tube into.

Nozzle alignment to sprocket

The thing you have to think about most is where to put the reservoir. I attached mine to the passenger foot peg and it's kind of in the path the swing arm would take should the rear shock absorber be fully compressed (does the swing arm really go up that far?). But I haven't seen any marks of the swing arm touching or moving the reservoir yet, so I'm guessing all's good. Either way, I try not to worry about it too much :)

Reservoir alignment

Economy

I’ve also found this is a lot more economic on your oil/lube. The reservoir has a 45mm diameter and is 100mm tall which translates to roughly 150ml volume. The tube running from the reservoir to the chain in my case took about as much as well. I’ve had to refill it for the first time now and I gotta say I’m quite impressed.

It took about 125ml for 3500 kms (2174 miles) over a 7-month period (occasional rain), or 75ml for just 3000 kms (1864 miles) over a 3-week period (very little rain). The first is the refill before the tour in France and the latter is what you can see is gone after the 2400 km tour in France (not a lot of bumpy roads there), as in the picture showing the reservoir (half empty). As the pictures show, I think my chain is lubricated enough, and maybe even a little bit too much.

Tutoro Automatic Chain Oiler installed on SV 650N

The 500ml refill costs just £6.5, but you don’t have to use theirs. The kit itself is also pretty cheap at £65 (€ 82) for the deluxe (full) package.

Experience

After the tour in France, my twin nozzle did lose one leg/arm. I’ll admit I didn’t check whether it was still aligned properly each day, or not at all actually, I only noticed when I got home :). Which brings me to another major plus for a chain oiler system. You don’t have to worry about lubricating your chain while on tour / vacation!

I have fitted the single nozzle now and we’ll see how long that will hold up.

Ow, apparently you also have to specifically mention to your garage that they DON’T have to lube the chain for you after servicing. In that case I still ended up with a black rear wheel rim –_–.

So all in all, great stuff!

SharePoint 2013 - Search - Content Enrichment - Basics

Content Enrichment Service – Output Properties

We’ve implemented a SP 2013 Content Enrichment service at a client in the last week and I’d like to share some things you need to watch out for when creating your own.

Especially since there's a lot of documentation out there, and also a lot of documentation that's missing; more specifically, on how the search engine deals with edge cases when calling your service and processing its result.

Optional or required ?

For instance, the OutputProperties of your service: do you have to return them? Are they merely a guideline, and can you still return others (which wouldn't be logical, but still)? What about returning managed properties that already exist on the record?

A colleague of mine experimented a little and noticed the following things:

  • The specified output properties are optional
    • You are not required to return all of the properties listed.
  • The specified output properties are limiting
    • If the managed property is not listed, you are not allowed to return it
      • I believe an error will be logged in the ULS

Returning a managed property that already has a value on the item ?

Microsoft.Ceres.Evaluation.DataModel.Types.SchemaException: Cannot add field MyManagedPropertyName to bucket, it already exists.

That would mean a solid "no". The disassembly does have an if case that might allow you to override it, but I can't make enough sense of the code to say when it would work. It seems to depend on TypeConversions.IsCompatible.

Output property type

As for the types of the managed properties themselves, I had to dig into the DLLs to figure those out. The AbstractProperty class has a static method that lists the supported property types (the actual property types are generic Property&lt;T&gt; types). These are the supported properties:

  Property<string>
  Property<int>
  Property<long>
  Property<bool>
  Property<double>
  Property<Decimal>
  Property<DateTime>
  Property<Guid>
  Property<byte[]>
  And their List<T> versions (Property<List<string>>,...)

Using any other type in your Content Enrichment service will compile, but will throw errors on the Search Engine side.

You cannot use just any type for a specific managed property either. If your managed property is registered as type bool, your enriched property will also have to be of type bool (Property&lt;bool&gt;). Again, an appropriate error will be logged in ULS saying that the managed property was expected to be of type T while the type returned was type Z.
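To make this concrete, here's a minimal sketch of a ProcessItem implementation that only returns properties of a supported type. It follows the shape of the MSDN sample service; the managed property name is made up (it's the same kind of marker property I mention under "Which items were enriched ?" below):

using System.Collections.Generic;
using Microsoft.Office.Server.Search.ContentProcessingEnrichment;
using Microsoft.Office.Server.Search.ContentProcessingEnrichment.PropertyTypes;

public class MyContentEnrichmentService : IContentProcessingEnrichmentService
{
    public ProcessedItem ProcessItem(Item item)
    {
        var processedItem = new ProcessedItem
        {
            ErrorCode = 0,
            ItemProperties = new List<AbstractProperty>()
        };

        // The managed property must be listed in the OutputProperties of the
        // configuration and must match the registered type (a bool here).
        processedItem.ItemProperties.Add(new Property<bool>
        {
            Name = "IsEnrichedByMyService",
            Value = true
        });

        return processedItem;
    }
}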

Registering the web service

Some errors are thrown at the time of registering the service with the PowerShell cmdlets, like "Managed Property X does not exist", but the service also logs its configuration in ULS:

  • Product: SharePoint Server Search
  • Category: Administration

Example:

AddProperty: Adding property 'cp_ContentProcessingEnrichmentServiceOutputFields_0' as 'MyManagedPropertyName'.

SetStringProperty: Changing property 'cp_ContentProcessingEnrichmentServiceEndpoint' from '' to 'MyContentEnrichmentServiceUrl'.

SetStringProperty: Changing property 'cp_ContentProcessingEnrichmentServiceTrigger' from '' to 'MyTriggerExpression'.

Debugging

This actually applies to anything to do with your custom Content Enrichment Service.

If you want to find CEWS-related errors in the ULS logs, they are logged as medium/high (as far as I could tell) and you can see them by filtering on

  • Message contains “ContentEnrichmentClient”
  • The errors are thrown by ContentProcessingEnrichmentClientEvaluator
  • The errors are of type Microsoft.Ceres.Evaluation.DataModel.EvaluationException

Your service is called by an instance of Microsoft.Ceres.ContentProcessing.Evaluators.ContentEnrichmentClientProducer.

None of the errors will be logged as descriptively in the Crawl Log. They will merely say “Failed to process the results returned by the content processing enrichment service” or some such.

Which items were enriched ?

There's no easy way to just get all the records that were touched by your Content Enrichment Service, as far as I know. We've added a managed property along the lines of "IsEnrichedByMyService" of type bool and update that. This way you can also find the amount of successfully enriched items, as they don't get a separate tab in your Crawl Log like the errors do.

Performance

To quickly evaluate the performance of search calling your Content Enrichment Service you can filter ULS on:

  • EventId: b4ly
  • Message contains "Path to your Content Enrichment Service"

That will show you all the "Leaving monitored scope" statements that the Search Engine outputs when calling your Content Enrichment Service.

Remember you want to keep these as low as possible so as not to add too much time to the crawl. These calls are synchronous, so the search engine blocks until you return from your service or the timeout is reached, for every single item that passed the Content Enrichment Service trigger.

PowerShell - Search Crawl History

I’ve been working with Search in SP2013 lately and maintaining search involves looking in the Search Administration pages, that admittedly give you a lot of information. Sometimes I’d like an easier way of getting it, though. As did my colleague, for whom I wrote this particular script.

Sadly, some of the things that the Search Administration UI shows you, like the Crawl History, do not have a direct PowerShell equivalent.

After reflecting on the logic behind the page, I came up with this:

$numberOfResults = 10
$contentSourceName = "MyContentSource"

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server.Search.Administration")

$searchServiceApplication = Get-SPEnterpriseSearchServiceApplication
$contentSources = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $searchServiceApplication
$contentSource = $contentSources | ? { $_.Name -eq $contentSourceName }

$crawlLog = new-object Microsoft.Office.Server.Search.Administration.CrawlLog($searchServiceApplication)
$crawlHistory = $crawlLog.GetCrawlHistory($numberOfResults, $contentSource.Id)
$crawlHistory.Columns.Add("CrawlTypeName", [String]::Empty.GetType()) | Out-Null

# Label the crawl type
$labeledCrawlHistory = $crawlHistory | % {
 $_.CrawlTypeName = [Microsoft.Office.Server.Search.Administration.CrawlType]::Parse([Microsoft.Office.Server.Search.Administration.CrawlType], $_.CrawlType).ToString()
 return $_
}

$labeledCrawlHistory | Out-GridView

Ow yes folks, Microsoft.Office.Server.Search.Administration.CrawlLog.GetCrawlHistory is a public method (I was almost sure I’d run into an internal one when reflecting the dll). You give it the number of results you want to get and the contentsource id (which is a simple int number actually).

The first two variables are parameters used further down in the script. You show the crawl log for one content source specifically, and you have to specify how many results you want to return (0 returns no results, and -1 doesn't work, as it gets parameterized directly into the SQL statement, giving you a nice error that SELECT TOP N or some such can't be negative).

I’m assuming you’re running in a SharePoint Management Shell. I load the assembly through the deprecated Reflection method, because frankly, it’s the only one that works consistently (I’m looking at you Add-Type).

There's some logic in there to show the crawl type by its string representation, instead of the int that is inside the DataTable of results.

At the end, I pipe the results to Out-GridView which you need to have the PS ISE installed for I think. And you should, as it’s a nice way to display table results. And the editor gives good intellisense too :).

Enjoy!

KnockoutJS - AutoComplete

I’ve been looking into Knockout JS recently and wanted to see how it could be integrated with JQuery (and JQuery.UI) to have an autocomplete field.

Some of the examples I found were doing what I wanted, but too complicated for me to understand with my limited JavaScript experience or were just not very generic at all.

I also wanted to still have the original object supplying the label after a selection was made. This can be helpful to supply values to other fields afterwards, when you simply can’t regenerate it from the label alone.

I did find one example that I managed to change to be quite “simple” and generic.

JSFiddle: Full code & working example

I'll run you through the source code step by step:

Original Data

We have a datasource that will supply the option list that will be used for the autocomplete functionality:

// Array with original data
var remoteData = [{
    name: 'Ernie',
    id: 1
}, {
    name: 'Bert',
    id: 2
}, {
    name: 'Germaine',
    id: 3
}, {
    name: 'Sally',
    id: 4
}, {
    name: 'Daisy',
    id: 5
}, {
    name: 'Peaches',
    id: 6
}];

Currently it's just an array containing objects with a name and an id property.

JQuery.UI Autocomplete widget

The original data array itself can be of any structure, but JQuery.UI's autocomplete widget expects, at a minimum, an array of strings, which will be used for both the label and the value. Alternatively, you can supply an array of objects that have a label and a value property. Since we want a different value for an option's label and value, we will use the object array. The label and value properties are mandatory, but we are free to add our own properties, which we will do using the following function that converts our initial data array to a proper source array for the JQuery.UI autocomplete widget:

function (element) {
    // JQuery.UI.AutoComplete expects label & value properties, but we can add our own
    return {
        label: element.name,
        value: element.id,
        // This way we still have acess to the original object
        object: element
    };
};

As you can see, we've added an object property to hold our original object.

ViewModel

As you may know, KnockoutJS is a MVVM framework, and here is our ViewModel that will be used for the autocomplete widget:

function ViewModel() {
    var self = this;

    self.users = remoteData;

    self.selectedOption = ko.observable('');
    self.options = self.users.map(function (element) {
        // JQuery.UI.AutoComplete expects label & value properties, but we can add our own
        return {
            label: element.name,
            value: element.id,
            // This way we still have acess to the original object
            object: element
        };
    });
}

As you can see, it uses our previous function to convert the original data array to an options list. It also uses the KnockoutJS Observable to hold the selected value. We use the observable because we may want to know if it updates.

KnockoutJS's observables are an implementation of the Observer pattern: an observable will automatically let any instances that depend on it know when its value is updated.

We use the self variable for a few reasons; it's best explained in this stackoverflow answer. In short: it allows us to access the ViewModel from inside function scopes where this would refer to the function being executed instead of the parent object (the ViewModel).

KnockoutJS – Custom binding

To allow us to generically pass in the data for the Autocomplete Widget in the correct KnockoutJS manner, we will implement a custom binding.

View binding

I think it's easier to understand its functionality when you see how it's being used:

<input type="text" data-bind="autoComplete: { selected: selectedOption, options: options }" />

<!-- Debugging -->
<p data-bind="text: selectedOption().object.name"></p>

The input textbox will be converted into the AutoComplete widget by JavaScript code later.

The data-bind attribute is KnockoutJS's declarative way of binding the ViewModel to the View (the HTML tags).

In the data-bind we can specify a binding handler which in this case is the autocomplete binding handler. Built-in handlers are for example text, which will just put the ViewModel’s value inside the bound HTML tag as text.

Our custom binding handler will be a bit more complex. It takes a parameter that is an object with 2 properties: selected and options. The selected property must be a KnockoutJS Observable that will be updated with the option that was selected. The options property will be the JQuery.UI AutoComplete’s source array with the options labels and values.

The properties passed are properties on the ViewModel, being selectedOption and options.

The binding handler

ko.bindingHandlers.autoComplete = {
    // Only using init event because the 
  // Jquery.UI.AutoComplete widget will take care of the update callbacks
    init: function (element, valueAccessor, allBindings, viewModel, bindingContext) {
        // valueAccessor = { selected: mySelectedOptionObservable, options: myArrayOfLabelValuePairs }
        var settings = valueAccessor();

        var selectedOption = settings.selected;
        var options = settings.options;

        var updateElementValueWithLabel = function (event, ui) {
            // Stop the default behavior
            event.preventDefault();

            // Update the value of the html element with the label 
            // of the activated option in the list (ui.item)
            $(element).val(ui.item.label);

            // Update our SelectedOption observable
            if(typeof ui.item !== "undefined") {
                // ui.item - id|label|...
                selectedOption(ui.item);
            }
        };

        $(element).autocomplete({
            source: options,
            select: function (event, ui) {
                updateElementValueWithLabel(event, ui);
            },
            focus: function (event, ui) {
                updateElementValueWithLabel(event, ui);
            },
            change: function (event, ui) {
                updateElementValueWithLabel(event, ui);
            }
        });
    }
};

This is a lot to take in at once. You should focus on the following:

  • valueAccessor
    • Represents the passed-in argument, being our object containing the observable for the selected option and the options array
  • updateElementValueWithLabel
  • $(element).autoComplete(…)
    • This is how we convert the textbox to the JQuery.UI Autocomplete widget.
    • We override the default functionality for when an option is selected, the focus in the options list changes or the textbox value is changed.
    • The default functionality is to place the option's value in the textbox (which is strange; you'd expect it to use the label).

Value accessor

// valueAccessor = { selected: mySelectedOptionObservable, options: myArrayOfLabelValuePairs }
var settings = valueAccessor();

The valueAccessor parameter of the binding deserves some explanation. I think typically it's not a complex object like in my case. So far I've seen people use multiple bindings to pass extra values to their binding handler. I don't think that's very clean, so I just pass one object which has multiple properties representing all the parameters. Nothing is statically typed, so using this approach or multiple bindings is practically the same, in my opinion.

So now that we have our input, we read our individual parameters from it.

var selectedOption = settings.selected;
var options = settings.options;

Autocomplete widget

$(element).autocomplete({
    source: options,
    select: function (event, ui) {
        updateElementValueWithLabel(event, ui);
    },
    focus: function (event, ui) {
        updateElementValueWithLabel(event, ui);
    },
    change: function (event, ui) {
        updateElementValueWithLabel(event, ui);
    }
});

This is pretty straightforward: the source property takes the list of options and then we override the events on the widget.

Update Element Value With Label

var updateElementValueWithLabel = function (event, ui) {
    // Stop the default behavior
    event.preventDefault();

    // Update the value of the html element with the label 
    // of the activated option in the list (ui.item)
    $(element).val(ui.item.label);

    // Update our SelectedOption observable
    if(typeof ui.item !== "undefined") {
        // ui.item - label|value|...
        selectedOption(ui.item);
    }
};

This function stops the default behavior of updating the textbox with the option's value; from the ui.item object, we want to use its label instead.

Finally, we update the selectedOption Observable with the whole item from the option array, containing the mandatory label & value properties, as well as our own object property containing the original data item.

KnockoutJS

Don’t forget the mandatory KnockoutJS initialization code:

$(function () {
    ko.applyBindings(new ViewModel());
});

Full Code

HTML

<input type="text" data-bind="autoComplete: { selected: selectedOption, options: options }" />

<!-- Debugging -->
<p data-bind="text: selectedOption().object.name"></p>

JavaScript

ko.bindingHandlers.autoComplete = {
    // Only using init event because the Jquery.UI.AutoComplete widget will take care of the update callbacks
    init: function (element, valueAccessor, allBindings, viewModel, bindingContext) {
        // { selected: mySelectedOptionObservable, options: myArrayOfLabelValuePairs }
        var settings = valueAccessor();

        var selectedOption = settings.selected;
        var options = settings.options;

        var updateElementValueWithLabel = function (event, ui) {
            // Stop the default behavior
            event.preventDefault();

            // Update the value of the html element with the label 
            // of the activated option in the list (ui.item)
            $(element).val(ui.item.label);

            // Update our SelectedOption observable
            if(typeof ui.item !== "undefined") {
                // ui.item - label|value|...
                selectedOption(ui.item);
            }
        };

        $(element).autocomplete({
            source: options,
            select: function (event, ui) {
                updateElementValueWithLabel(event, ui);
            },
            focus: function (event, ui) {
                updateElementValueWithLabel(event, ui);
            },
            change: function (event, ui) {
                updateElementValueWithLabel(event, ui);
            }
        });
    }
};

// Array with original data
var remoteData = [{
    name: 'Ernie',
    id: 1
}, {
    name: 'Bert',
    id: 2
}, {
    name: 'Germaine',
    id: 3
}, {
    name: 'Sally',
    id: 4
}, {
    name: 'Daisy',
    id: 5
}, {
    name: 'Peaches',
    id: 6
}];

function ViewModel() {
    var self = this;

    self.users = remoteData;

    self.selectedOption = ko.observable('');
    self.options = self.users.map(function (element) {
        // JQuery.UI.AutoComplete expects label & value properties, but we can add our own
        return {
            label: element.name,
            value: element.id,
            // This way we still have acess to the original object
            object: element
        };
    });
}

$(function () {
    ko.applyBindings(new ViewModel());
});

SharePoint 2010 - SPField and SchemaXML Changes

Backstory

So, a long time ago we ran into an issue where, if we updated the site content type and pushed the changes to the list, we would lose our field title translations. It would just display the English translation, for every language.

Likewise, when updating a site column, and pushing down, we got our translations (resources) back, but lost any additional field modifications (required, showindisplayform,…).

Not fun. At all.

We investigated the issue but couldn’t come up with anything better (deadlines, deadlines, deadlines!) than making our changes to the site content type (without pushing down) and then going over each content type usage and making the change there as well. This still broke the resources but we kept our modifications.

Fixing the resources involved updating every field instance (on the list) that was part of the content type we were updating. Also very much NOT fun. In our case, we had a lot of subwebs all setup the same which would result in 23 webs all needing to have their list fields updated. This takes a while, apparently.

Cause

Fast forward to now and we've finally figured out what the root cause of this issue is (and how to permanently fix it).

The reason our translations failed is that during the life-cycle of the SharePoint application deployed at the customer, the SPFields had their SchemaXml updated manually. As in: reading out the SchemaXmlWithResourceTokens property, updating the DisplayName, Description and Group properties with a resource token (which it didn't originally have), and updating the field (which is actually unnecessary).

I don’t mean to point fingers but this is legacy code and we’ve always wondered if it wasn’t fishy. Now we know.

Running some tests of our own, we managed to reproduce the above symptoms by making changes to the SchemaXml of fields.
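To give an idea, the kind of legacy code that gets you into this state looks roughly like this (a reconstruction, not the actual code; field and resource names are made up):

using System.Xml;
using Microsoft.SharePoint;

public static class FieldSchemaTweaks
{
    public static void AddResourceTokensToField(SPWeb rootWeb)
    {
        SPField field = rootWeb.Fields.GetFieldByInternalName("MyFieldInternalName");

        // Read the schema with resource tokens and rewrite some attributes...
        var schema = new XmlDocument();
        schema.LoadXml(field.SchemaXmlWithResourceTokens);
        schema.DocumentElement.SetAttribute("DisplayName", "$Resources:MyResources,MyFieldTitle;");
        schema.DocumentElement.SetAttribute("Group", "$Resources:MyResources,MyGroup;");

        // ...and write the whole SchemaXml back. From this point on, the field no longer
        // behaves like a normal site column when changes are pushed down.
        field.SchemaXml = schema.OuterXml;
        field.Update(true);
    }
}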

This brings you in a situation where:

  • After the SPField SchemaXml has been updated, none of the parent SPField (the site column) changes are pushed to their children (list fields) any longer.
  • Updating the parent Content Type and pushing the changes down breaks these translations (giving you the english translations, all the time). This however, does fix the parent SPField not pushing down its changes.
  • Updating the parent SPField does push down its changes, but removes any list level changes that were made before.

The last 2 points can be alternated and they'll keep giving the same effect (like a loop, the one breaks the fixes of the other). That's the situation we were in.

Solution

To get out of this situation you need to remove all the fieldLinks on the parent Content Type for the affected fields, add them again and push the changes down. All your data is still there, no worries. What you do lose is all the local list level changes (if you made any). Updating your parent SPField with the correct configuration and pushing down puts an end to that.
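A rough sketch of that repair (names are made up; try this on a test environment first):

using Microsoft.SharePoint;

public static class ContentTypeRepair
{
    public static void RepairFieldLinks(SPWeb rootWeb)
    {
        SPContentType contentType = rootWeb.ContentTypes["MyContentType"];
        SPField field = rootWeb.Fields.GetFieldByInternalName("MyFieldInternalName");

        // Remove the field link for the affected field and push the change down...
        contentType.FieldLinks.Delete(field.Id);
        contentType.Update(true);

        // ...add it again and push down once more...
        contentType.FieldLinks.Add(new SPFieldLink(field));
        contentType.Update(true);

        // ...then re-apply the intended configuration on the site column and push that down too.
        field.Required = true; // whatever your intended settings are
        field.Update(true);
    }
}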

Your translations are there again and you can make changes to the parent field/content type without breaking anything.

Conclusion

Stay the **** out of the SPField.SchemaXml if you want to have any hope of retaining your sanity while performing maintenance to the application afterwards. No. Seriously.

I’ll add a SharePoint Visual Studio solution that you can deploy to check this for yourself (it has all the required code in features that you can just activate to see the effect for yourself).

Investigation

Here are the steps we used to test this behavior:

Base

  • Given:
    • A field
      • Provisioned from CAML with resource keys
    • A contentType
      • Provisioned from CAML to include the field as a fieldLink
    • A list
      • Based on the previous contentType
  • The situation is as expected:
    • Field titles are translated in the forms of the list.

Change Set 1

  • Update:
    • A new field is added through code without resource keys
    • This field is also added as a field link to the content type
  • This situation is as expected:
    • Field exists in the list
    • Field title is not translated across UI display language changes.

Change Set 2

  • Update:
    • Add resource keys to the field
      • We add the resource keys by changing the SchemaXmlWithResourceTokens in code.
  • The situation is as expected:
    • The field title is translated.

Change Set 3

  • Update:
    • Make the field in the rootWeb required
      • We update the site column field and update with pushing the changes to the list
  • The situation is NOT as expected
    • The field is not required in the list

Change Set 4

  • Update:
    • Make the fieldLink on the content Type required
  • The situation is as expected
    • The field in the form is required

Change Set 5

  • Update:
    • We call update on the rootWeb contentType and push the changes to the list (regardless if any changes were made).
  • The situation is NOT as expected
    • The field is no longer required
    • The field title is no longer translated

Change Set 6

  • Update:
    • We call update on the rootWeb field and push the changes to the list (regardless if any changes were made).
  • The situation is NOT as expected
    • The field is no longer required
    • The field titles are translated again

Change Set 7

  • Update:
    • Make the field in the rootWeb required again
  • The situation is as expected
    • The field in the list is also required
    • The field titles are translated again

Change Set 8

  • Update:
    • We call update on the rootWeb field and push the changes to the list (regardless if any changes were made).
  • The situation is not as expected
    • The field is still required
    • The field titles have lost their translations again

SharePoint 2010 - Maintenance

How do you handle changes to your installed SharePoint environment ? What approach do you take to deploying these changes ? How do you best implement them ? What pitfalls may you encounter ? How can you avoid issues later on ?

All of these questions I hope to answer here, to guide any SharePoint developer facing these challenges.

First deploy

You’ve had the opportunity to live the dream and built a SharePoint application from scratch, clean.

Your server is set up, your environment is installed along with your solutions and the application is ready to go.

This is the base you will be maintaining, with bug fixes, change requests or just simple improvements. Maybe you’re just adding features that are in the pipeline already. Let the fun begin.

All about changes

There are several different ways to look at the changes you can make to your SharePoint environment, if you categorize them by how they are picked up by the system or by what part of the SharePoint environment they affect, i.e. their impact.

Impact

The impact of a change to the existing environment will prove to be a good factor in assessing the risk of deploying the release. It will also be necessary to know the impact so you can decide on how to implement the change.

Let’s start with the beginning.

Primer

How do you interact with SharePoint from a SharePoint solution package (wsp)?

SharePoint is big and you are allowed to do things in several ways. Some things only have one way for you to do it. In the end it comes down to this:

  • CAML (declaratively through XML)
  • Code

I’ll go in the specifics of implementing changes later, but for now, let’s talk about the differences between these two approaches.

CAML

CAML doesn't let you do everything, but the things it can do are easier and cleaner. It almost feels like configuration, which basically it is.

CAML is unfortunately the trickiest part. Some of the changes you make here on an existing application are picked up on the fly, but for some of them you have to tell the system to update itself.

For instance, what happens when you delete a fieldLink from a content types’ elements.xml ? Does it get applied to your SharePoint environment right away ?

The answer here is: No, it doesn’t get applied to the SharePoint environment automatically. In fact, you shouldn’t even be making any changes to it after first release according to Microsoft themselves and this is backed up by someone else’s excellent investigation. Similarly this can be extended to a fields’ elements.xml.

On the other hand, CAML that deploys Custom Actions can be updated on the fly.

Code

Code on the other hand is a lot easier to deploy. You do an Update-SPSolution with your new WSP and SharePoint will use the new dll and therefore the new code. This is what life should be like! But, as always, there are some exceptions to this; Timer Jobs are one of them. They have to be re-instantiated before the timer job instances will run the updated code.

State

Now that we’ve established that some changes will be recognized automatically and some will not, we can look into the underlying reason. When you look closer, you’ll notice that all the things that cause you trouble in making SharePoint recognize your changes are nearly always because the way they live in the SharePoint environment is in the form of an Instance. They are living objects.

  • Timer Jobs
  • Content Types
  • Fields
  • Sites
  • Webs
  • Lists
  • Views
  • WebParts

All of these items are examples of SharePoint artifacts that are instantiated when used in the SharePoint environment and they have to be either modified or recreated before any changes you’ve made will be manifested.

Changing State

All of the aforementioned artifacts live in the SharePoint environment as instances. This implies they have a state, and this is the underlying reason they are tougher to change and why any changes to these instances have a bigger impact on your upgrade. That is, if you want to actually manifest the changes right away. As I mentioned before, some changes can be made but won't actually be picked up before you explicitly make it so.

Provisioning Changes

Changes to these instances are the clumsiest. You will have to write code to make that specific change. This can be either CAML or actual C# code. Either way, it’s overhead.

Why do I call it overhead ? Well, CAML allows you to declaratively deploy Content Types, Fields and so on. But it doesn’t allow you to make a change in this same CAML that SharePoint will apply to the existing artifacts.

Some things you can alter through Feature Upgrade CAML; for others you'll have to write additional C# code that you trigger through a Feature Upgrade CustomUpgradeAction.
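As a sketch of the code side of that: the feature's event receiver gets the FeatureUpgrading callback with the action name and parameters you declared in the CustomUpgradeAction element of the upgrade CAML (the action and parameter names below are made up, and the example assumes a web-scoped feature):

using System.Collections.Generic;
using Microsoft.SharePoint;

public class MyFeatureEventReceiver : SPFeatureReceiver
{
    public override void FeatureUpgrading(SPFeatureReceiverProperties properties,
        string upgradeActionName, IDictionary<string, string> parameters)
    {
        if (upgradeActionName == "MakeFieldRequired")
        {
            var web = (SPWeb)properties.Feature.Parent;
            SPField field = web.Fields.GetFieldByInternalName(parameters["FieldInternalName"]);
            field.Required = true;
            field.Update(true); // push the change down to the lists
        }
    }
}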

After you’ve made a dozen or so of these changes to your SharePoint Solution you’ll start to see the difficulty in maintaining this. If you want the CAML way for the initial deploy, you’ll have a big list describing all your fields and your content types and their properties (which field is required, which field is added to which content type, which field is shown in which form). When you start having changes, implemented through Feature Upgrades, you’ll have to make a “merged” view of these CAML files and any changes you made in code before you can see the actual value of each property of each artifact.

One guy I know of seems to feel the same way and he made a project called SPGenesis that will allow you to manage Fields, Content Types and even List Instances all from their own code file, providing you with only one location where you have to make all your adjustments. Use at your own risk, though. But essentially this is a genius solution to the problem. You'll see a resemblance to the next point in this article, but this framework basically allows you to set your properties, provision them, make changes, and provision them again, all with the exact same code. No more overhead. You change a field's required attribute, re-activate a feature and it's done.

I can’t overstate the importance of State, or can I?

Although most of the changes to instances are of the nature I described earlier, there are exceptions.

Some of these can be unburdened of the consequences state brings along.

WebParts and Timer Jobs for instance. These two are a bit deceptive. Although they live as instances, they actually never change, now do they? When you deploy them, do they grow larger because of added content? Do they change in any way over time? If they do in your case, then skip ahead, but in most cases they will not. They are faking their statefulness, as the only reason they are instantiated is to perform their job, nothing else.

So be smart and plan ahead. Put them in their own separate feature where you put all the code that builds and adds a certain webpart to a certain page. Probably you already have this code somewhere. If that's the case, the missing step is to add some code that removes the webpart again on feature deactivation and you're there. Since WebParts almost never contain any content themselves, but merely provide functionality or content to the viewer, they can be discarded and rebuilt without any problems. This basically does away with all the nasty consequences of state you have to deal with, like with Content Types.

When a change needs to happen to this WebPart, you update the code that constructs it and you re-activate the feature. The WebPart is now deployed with the latest changes, no matter what state it was in before. Views are another example. They are essentially stateless, so do yourself a favor and treat them that way :).
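A sketch of that recycle pattern for a web part feature (web-scoped; the page URL, zone and titles are made up, and I'm using a ContentEditorWebPart just to keep the example self-contained):

using System.Linq;
using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebPartPages;

public class MyWebPartFeatureReceiver : SPFeatureReceiver
{
    private const string PageUrl = "SitePages/Dashboard.aspx";
    private const string WebPartTitle = "My Dashboard Web Part";

    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        var web = (SPWeb)properties.Feature.Parent;
        using (var manager = web.GetLimitedWebPartManager(PageUrl, PersonalizationScope.Shared))
        {
            var webPart = new ContentEditorWebPart { Title = WebPartTitle };
            manager.AddWebPart(webPart, "Main", 0);
        }
    }

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
    {
        var web = (SPWeb)properties.Feature.Parent;
        using (var manager = web.GetLimitedWebPartManager(PageUrl, PersonalizationScope.Shared))
        {
            // Remove our web part(s) again so the feature can be cleanly re-activated.
            var ours = manager.WebParts
                .Cast<System.Web.UI.WebControls.WebParts.WebPart>()
                .Where(wp => wp.Title == WebPartTitle)
                .ToList();

            foreach (var webPart in ours)
            {
                manager.DeleteWebPart(webPart);
            }
        }
    }
}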

A good reason that you want to extend this to a whole page for WebParts is that you may have more of them on one page and you want to connect them, so it’s easier to have them recycled all at once. Hell, if you want to go crazy you can maybe even include the list view they use.

Stateless changes

The provisional change stands very much in contrast to the functional change. Arguably the easiest one to deal with. What I mean with a functional change is merely any change in functionality that is stateless, like a function. You have code that calculates some number from several other input fields, but now needs to change the format it presents it in ? Functional change. You adapt the code, you update the deployed solution and the change is immediately visible afterwards. It is picked up automatically by SharePoint. No overhead.

Some CAML things can also be like this. For example, adding custom actions to a list item’s menu through CAML. Let’s say you have a typo in the title of the Modal Dialog that’s shown when invoking the custom action. No problem, update the CAML (you declare the title to be used in the JavaScript function) and update the deployed solution and done.

SharePoint Application Life cycle

Your SharePoint artifacts and, generally, your environment, will go from one state to another. That’s why you have the versioning in your Feature Upgrades, to determine what state your SharePoint Feature is in and how to go from that state to the latest. This is the crucial part. Sometimes it matters what state you were in before you go to the latest state.

Sometimes, it matters because of a technical reason. Sometimes it’s just because you feel it’s silly to make a field required when it was already required. But all that matters is that each release with changes to the state of SharePoint artifacts will take the SharePoint environment into a new state you have to account for.

How many states will be live at the same time? Ideally, only 2. The latest, which is what your developers are working on, and the last stable release, which is what is deployed in production. With each release that changes the state of your SharePoint artifacts, you will have code that moves the previously stable release into the latest release.

Implementing changes

Now that we have established what impact each kind of change has on your environment, let’s look at the options available to you for implementing them.

  • CAML
    • Updating an existing artifact (limited)
  • Code
    • Updating an existing artifact (full)

This doesn’t really tell us anything new. CAML is cool the first time, and then you’ll wonder why you didn’t use a code approach in the first place :).

This list doesn’t even sum up all the options accurately. We left out PowerShell!

So essentially, you’ll be using any of these approaches:

  • CAML
    • Feature Upgrade
  • Code
    • Add a Feature
    • Feature Upgrade
  • PowerShell

Notice you can just add a feature that will run code that makes changes to existing artifacts. I wouldn’t advise this unless it’s for the sake of reducing state, like with the WebParts I mentioned, although you’d want to have it set up like that from the start.

Also notice that these approaches only apply to provisioning changes; functional ones merely need an update to the existing code base (unless you’re dealing with the inner workings of a custom WebPart or Timer Job, which you’ll need to redeploy somehow).

What to choose?

  • Adding a new module does get picked up by SharePoint, but it will most likely not be provisioned on existing feature instances, which means it’s not a complete solution in itself. You’ll still need some Feature Upgrade CAML on an existing feature, or an entirely new feature, to deploy the module. For me, this depends on whether the artifact is related to anything existing or whether it deserves a completely new feature.
  • Use feature upgrade CAML for the simpler things
    • Adding fields
    • Adding a file
    • Maybe even adding fields to a content type or removing some
  • Use feature upgrade Code for the more complex stuff
    • Reconfiguring certain field and content type properties, the order of a field in a content type, etc.
  • Use PowerShell when it seems more convenient
    • Uploading files
    • Configuration changes to the highest level site collection
    • Making changes to very specific artifacts
    • To make unforeseen changes, stuff that went wrong and needs to be fixed in this specific instance but will likely not happen again in other deploys
    • But do keep in mind that it will now be part of your Upgrade Process.
    • IMPORTANT: To automatically perform the solution updates and the feature upgrades. (automate as much as you can)

In the case of PowerShell, you can simply ask yourself the question: is this something I might otherwise do manually? If not, definitely do it in Feature Upgrade code instead of PowerShell; otherwise you have a good case for doing it in PowerShell.

Feature Upgrades

Why do you need Feature Upgrades? Well, not all situations allow you to merely switch off an existing feature and turn it back on again. Think of all the provisioned artifacts that already exist. You can’t just recreate all of them. Take features that deploy Lists, for example: you certainly don’t want to lose the content of your list. Ditto with Content Types. If some particular code created an artifact that you cannot throw away and recreate, you’ll have to use a Feature Upgrade. This is true for almost all provisioning changes.

So that’s why you have Feature Upgrades. These allow you to upgrade existing features, and the artifacts they provisioned.

Only for very specific changes will you consider using PowerShell instead.

Feature Upgrades have the benefit of having access to your existing code base, so you can reuse functions. On the other hand, you have the limitation, and the associated risk, of only being able to go through a particular upgrade action once. Upgrading a feature decisively makes that feature the latest version; there’s no upgrading twice if you made a mistake and need the feature upgrade to run again. This is in stark contrast to using a Feature or PowerShell.

Versions

Another possible pitfall of Feature Upgrades is their versions.

You really have to be sure about how you want to use the BeginVersion and EndVersion attributes of a particular Upgrade Action for a Feature.

Let’s summarize how it works:

  • A Feature has a version
  • A Feature can have multiple Version Ranges, with multiple UpgradeActions associated with each Version Range element.
  • Each Version Range has a BeginVersion and an EndVersion indicating on which Feature Version it will run the Upgrade Actions.
  • BeginVersion is inclusive and EndVersion is exclusive
    • If a Feature is deployed at version 1, and the latest version is 2
    • and it has a VersionRange element with BeginVersion 1.0.0.0 and an EndVersion of 1.8.0.0
    • it will be upgraded, because the VersionRange matches any Feature version from 1.0.0.0 up to, but not including, 1.8.0.0
  • If the installed Feature definition has a newer version than the deployed Feature instance, that instance will go through the Upgrade Process only once, leaving the deployed Feature at the latest version

This last point has an important implication for the use of your BeginVersion and EndVersion attributes, and for which WSP you deploy containing which version of that particular Feature.

Imagine you have an UpgradeAction called AddFieldXToContentTypeY with BeginVersion 0.0.0.0 and EndVersion 1.0.0.0 and the Feature is now versioned at 1.0.0.0.

Your deployed Feature is at version 0.0.0.0 (which is the default when it isn’t specified). You deploy a WSP containing that same Feature with version 1.0.0.0. You perform a Feature Upgrade and your feature is now at Version 1.0.0.0 and the UpgradeAction was applied, the field is added to the content type.

In your next change, you add an UpgradeAction called MoveFieldXToPosition1InContentTypeY with BeginVersion 1.0.0.0 and EndVersion 2.0.0.0 and the Feature is now versioned at 2.0.0.0.

You do the same as before, deploying the WSP containing the Feature at version 2.0.0.0, and you perform the Feature Upgrade. Cool, your deployed Feature has now been upgraded to version 2.0.0.0 and the field was moved to its new position in the order of fields in the content type.

All well and good, but what happens when you do a clean install of the WSP containing the original Feature definition with version 0.0.0.0, then deploy the WSP containing the Feature at version 2.0.0.0 and upgrade this feature? Ha! You’ll see that field X got added to the content type, but it won’t be in position 1 of the ordering of the fields of the content type. Why’s that? Because the deployed Feature at version 0.0.0.0 did not match the VersionRange of the second UpgradeAction, which runs from 1.0.0.0 to 2.0.0.0.
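
For reference, the receiver behind these two UpgradeActions would look roughly like this; a minimal sketch where the lookups of field X and content type Y (by the names “FieldX” and “ContentTypeY”) are purely illustrative:

    using System.Collections.Generic;
    using Microsoft.SharePoint;

    public class ContentTypeYFeatureReceiver : SPFeatureReceiver
    {
        public override void FeatureUpgrading(SPFeatureReceiverProperties properties,
            string upgradeActionName, IDictionary<string, string> parameters)
        {
            SPWeb web = properties.Feature.Parent as SPWeb; // assuming a Web scoped feature
            if (web == null) return;

            // The names match the CustomUpgradeAction elements in the feature's upgrade CAML.
            switch (upgradeActionName)
            {
                case "AddFieldXToContentTypeY":
                    AddFieldXToContentTypeY(web);
                    break;
                case "MoveFieldXToPosition1InContentTypeY":
                    MoveFieldXToPosition1InContentTypeY(web);
                    break;
            }
        }

        private static void AddFieldXToContentTypeY(SPWeb web)
        {
            SPField fieldX = web.Fields.GetFieldByInternalName("FieldX");
            SPContentType contentTypeY = web.ContentTypes["ContentTypeY"];

            if (contentTypeY.FieldLinks[fieldX.Id] == null)
            {
                contentTypeY.FieldLinks.Add(new SPFieldLink(fieldX));
                contentTypeY.Update(true); // push the change down to child content types
            }
        }

        private static void MoveFieldXToPosition1InContentTypeY(SPWeb web)
        {
            SPContentType contentTypeY = web.ContentTypes["ContentTypeY"];

            // Reorder moves the listed internal names to the front of the field order.
            contentTypeY.FieldLinks.Reorder(new[] { "FieldX" });
            contentTypeY.Update(true);
        }
    }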

This is obvious now, but when you’re deciding on the Version Ranges it may seem more natural to match each BeginVersion to the previous upgrade’s EndVersion, no? After all, you’re going from version 0 to version 1 to version 2, right? Well, this is wholly up to you: do you want to be forced to go through each version’s deploy or not? It may seem like an easy choice in this case, with only one feature to upgrade and no dependencies, but when you have Feature Upgrades spread out over 3 or more Features that you need to upgrade in a certain order, maybe even with a few PowerShell scripts to execute in between, you won’t be so keen on allowing your environment to go from version 0 to version 2 in one go. What’s more, after you’ve done the Feature Upgrade, that Feature is at version 2.0.0.0, whether it performed that second UpgradeAction or not. You won’t even be able to execute it without redeploying and manipulating the versions again.

I would advise making BeginVersion 0.0.0.0 the default and only changing it to a specific version in very rare cases. Likewise, make the EndVersion match a generic release version; don’t make it too specific.

Deploying these changes

Naturally, all this talk about changes to state, and about carrying your SharePoint application from one version into another, will have some implications for the deployment aspect of it all, right? Right.

Initial Deploy & Upgrade Scripts

Remember when we talked about the state of a SharePoint environment earlier? Ideally you only want 2 states: the deployed state and the latest developed state.

This would indicate you don’t want to know about any past versions that might have existed, you just want to be able to go immediately to your latest deployed state and start working on upgrading it to the latest developed state.

You must realise that this is a very “ideal-world” way of thinking. This can work perfectly if you only have your WSP to think about, and the Features inside it. When you have to factor in the PowerShell scripts that are needed to go from one version to another, you’re in a whole different situation.

PowerShell, our savior

When I say PowerShell scripts, I don’t mean the script you use to deploy the WSPs with Update-SPSolution and trigger the Feature Upgrades, or install and activate any new features. That’s perfectly fine. What I’m talking about are the scripts you use to make very granular changes, like setting Web Properties or Web Application Properties, re-activating Web Application Features (necessary when dealing with Timer Job code upgrades), or perhaps adding a search center and configuring search in your application. These kinds of changes are more difficult to think of as mere additions to your “clean install”, and easier to think of as an upgrade from one state to another.

These scripts have the “downside” (if you’re so inclined to call it that) of living outside of your SharePoint Solution. They are not integrated into your deploy (integrating them might be a cool project, though!). Like I said before, you might use PowerShell to deploy your solutions, but I’m talking about the provisioning scripts (those making changes to the state of your environment). These might have a specific order in which they need to run and are probably very “version specific”. They may depend on a field being there (one that got deployed in release X) and stuff like that. They demand their moment in the spotlight in between the states of your releases.

That’s why it’s difficult to justify deploying the latest release code straight up as a valid “Release State” (I’m looking at you, developers). There’s really no way around going through all the proper deployment steps from start to finish. Except restores from production.

This really comes down to how much you trust your production environment. Manual changes to this environment have a big impact on the next release, as in, those changes are gone. And maybe some stuff went wrong during the deploy and corrupted some data… (it happens, don’t laugh).

There’s nothing that trumps doing the real thing, so developing on a production restore (on your local machine) really is the only way to make sure you are going through the same scenario as you would on production.

Example:

Scenario: you have a version 1 SharePoint environment deployed. You make a large-impact change (for example, any of the things I mentioned in the previous paragraph) to get it to version 2. You want to stay in the “ideal-world” where you only have 2 states to worry about (don’t forget about version 3 that’s on its way, making it a total of 3 possible states now). What you’ll have to do is the following:

  • Make an upgrade scenario, where you call any of these new scripts that perform the big impact changes
  • Integrate these scripts back into the “clean install” scenario (let’s call this state version 0)

Now you can safely go from version 0 (nothing) to version 2 or from version 1 to version 2.

When deploying the changes of version 3, you have to worry about upgrading from both a version 1 and a version 2 environment (don’t think of this as impossible, there are multiple reasons you might want or need to have an older-version environment around) as well as a clean install from version 0.

  • You’ll have to add another upgrade scenario script
  • Update the clean install script with these new changes

Cool, you’ve managed to stay in your “ideal-world”. But can you really trust this “clean-install” scenario compared to the state your production environment is in? After all, production went from version 0 to version 1, upgraded to version 2 and later to version 3, whereas your clean install will go from version 0 to version 3 straight up. But what about the WSPs!? Well, in this case (large-impact changes mandating special PowerShell scripts) you’ll have to keep the WSP of each version around and perform an Update-SPSolution between each version change.

Your clean install script will look something like this over the course of these upgrades:

First deploy

Install.ps1

    # Deploy version 1 (first deploy, so the solution is added and installed rather than updated;
    # add -GACDeployment / -WebApplication switches as your WSP requires)
    Add-SPSolution -LiteralPath (gci "R1\MySolution.wsp").FullName
    Install-SPSolution -Identity "MySolution.wsp"
    .\CustomConfigurationForVersion1.ps1

Upgrade to version 2

Install.ps1

    # Deploy version 1
    Add-SPSolution -LiteralPath (gci "R1\MySolution.wsp").FullName
    Install-SPSolution -Identity "MySolution.wsp"
    .\CustomConfigurationForVersion1.ps1

UpgradeToVersion2.ps1

    Update-SPSolution -Identity "MySolution.wsp" -LiteralPath (gci "R2\MySolution.wsp").FullName
    .\FeatureUpgradesForVersion2.ps1
    .\CustomConfigurationForVersion2.ps1

Upgrade to version 3

Install.ps1

    # Deploy version 1
    Add-SPSolution -LiteralPath (gci "R1\MySolution.wsp").FullName
    Install-SPSolution -Identity "MySolution.wsp"
    .\CustomConfigurationForVersion1.ps1

    # Upgrade to version 2
    .\UpgradeToVersion2.ps1

UpgradeToVersion2.ps1

    Update-SPSolution -Identity "MySolution.wsp" -LiteralPath (gci "R2\MySolution.wsp").FullName
    .\FeatureUpgradesForVersion2.ps1
    .\CustomConfigurationForVersion2.ps1

UpgradeToVersion3.ps1

    Update-SPSolution -Identity "MySolution.wsp" -LiteralPath (gci "R3\MySolution.wsp").FullName
    .\FeatureUpgradesForVersion3.ps1
    .\CustomConfigurationForVersion3.ps1

Like I said, you have to keep the WSPs of previous versions around because of the Feature Upgrades, where you might need to do some PowerShell stuff in between, or need to follow a specific order of upgrading (Feature X to version 2, Feature Y to version 2 before upgrading Feature X to version 3, I dunno man, this stuff happens more quickly than you’d think).

Keeping the WSPs around and upgrading from one version to another, following all the in-between steps, gives you the exact same upgrade process as your deployed production environment, which is always what you should aim to be developing against. Not some shortcut-deployed environment that doesn’t have the same history as your production environment. You’ll be sorry when you upgrade your production environment and discover it suddenly behaves differently from your development environment.

Feature Instances

Another important fact to keep in mind is that Features have instances. They come into existence based on their scope: if you have a Site Collection with some Site Features and 10 SubWebs with some Web Features, you’ll have 1 instance of each Site Feature (on the Site Collection) and 10 instances of each Web Feature (one on each SubWeb). This matters for your Feature Upgrade code as well, as this is basically the reach it has control over. If you have a Web Feature that deploys a List in a Web, and you have 10 Webs with this Feature activated, you’ll have 10 instances of the Feature that deployed this List to each Web, and you’ll have 10 Feature Upgrades to execute, albeit with identical Feature Upgrade code.

I can highly recommend the Feature Upgrade Kit from Chris O’Brien, but if you need tighter control over the order of upgrading the Features, you’ll want to script it yourself. This is pretty easy, as you can just query the Site object for any features that need to be upgraded (it can return features from subwebs as well, when you query for web-scoped features).

$Site = Get-SPSite "http://mysite"
# Use [Microsoft.SharePoint.SPFeatureScope]::Web instead to get web-scoped features from all subwebs
$FeatureScope = [Microsoft.SharePoint.SPFeatureScope]::Site
$OnlyRequiringFeatureUpgrade = $true
$FeaturesRequiringUpgrade = $Site.QueryFeatures($FeatureScope, $OnlyRequiringFeatureUpgrade)

Here it becomes important to remember you have an instance of a Feature for each Site or Web it is deployed on:

$ForceUpgrade = $false
$FeaturesRequiringUpgrade | % { 
    $_.Upgrade($ForceUpgrade)
}

I prefer to be absolutely certain: I create a list of Feature names I want to see upgraded, query the Site for Features requiring an upgrade, filter out the ones I want to upgrade, remember each Feature’s current version, perform the upgrade and compare the versions; if they’re still the same, I give feedback about it. Ditto when a feature I wanted to upgrade cannot actually be upgraded (because it is already at its latest version, or so SharePoint thinks).

This approach is particularly useful when you need tight control over the order of Feature Upgrading. This might matter when Features of different scopes need to be upgraded.

Feature Activate & Feature Upgrade Events

If you’re required to make provisioning changes that you implement through Feature Upgrades to upgrade existing artifacts, you also have to remember how the new artifacts are created.

This is why it’s advised to separate the code making the specific change out of the Feature Upgrade event and to also call it from the Feature Activated event. This way, existing Feature instances run exactly the same code as new activations do, and they stay in perfect sync, which is the whole point after all.
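
A minimal sketch of that pattern, with a purely hypothetical “Status” field as the artifact being changed; the actual change lives in one shared, idempotent method and both events call it:

    using System.Collections.Generic;
    using Microsoft.SharePoint;

    public class StatusFieldFeatureReceiver : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            // New Feature instances get the artifact in its latest shape straight away.
            EnsureStatusField(properties.Feature.Parent as SPWeb);
        }

        public override void FeatureUpgrading(SPFeatureReceiverProperties properties,
            string upgradeActionName, IDictionary<string, string> parameters)
        {
            // Existing Feature instances are brought to that same shape by the upgrade.
            if (upgradeActionName == "EnsureStatusField")
            {
                EnsureStatusField(properties.Feature.Parent as SPWeb);
            }
        }

        // One shared method, safe to run more than once, so new and upgraded instances end up identical.
        private static void EnsureStatusField(SPWeb web)
        {
            if (web == null) return;

            if (!web.Fields.ContainsField("Status"))
            {
                web.Fields.Add("Status", SPFieldType.Text, false);
            }
        }
    }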

This brings us to the next point I wanted to touch on.

Rise, my children (aka Artifacts – New vs. Existing)

After you’ve successfully deployed your changes to the SharePoint environment, there’s a distinct difference to keep in mind between your SharePoint artifacts: those that already existed, and those that were created after the deploy.

This may not seem like an important difference, but when you’ve made mistakes and are seeing strange things happen in your SharePoint environment, be sure to check which artifacts are affected: are the old artifacts experiencing issues, or is it happening with the newly created ones? That way, at least, you’ll know where to look. It’s a clear indication of inconsistencies between the code triggered by the Feature Upgrade and the Feature Activated events.

Summary

To sum it all up, if you stick to the following, you may just be fine :) :

  • Be aware of the nature of a change: provisioning or functional?
  • Make a clear distinction between the install/upgrade script of each version and the WSPs / other files needed for each version. Keep integrating each new version back into the “clean install” script.
  • Be wary of any differences between the code that executes for creating new artifacts and the code that executes on existing artifacts to bring them up to speed.
  • Give preference to code contained in the WSP over added PowerShell scripts, and be aware of the nature of Feature Upgrades versus adding a new Feature (run just once vs being able to run again).
  • Make BeginVersion in Feature Upgrades 0.0.0.0 by default and only change it to a specific version in very rare cases. Make the EndVersion match the generic release version; don’t make it too specific.
  • Don’t unnecessarily bother with state
    • Timer Jobs, Web Parts and Views can be treated as stateless if you separate them from any other code into their own feature, so you can more easily reinstantiate them.

Miscellaneous

  • I prefer Update-SPSolution over the complete Retract/Remove/Add/Deploy cycle. Features get reactivated in the latter and that scares me. I know for sure that letting Visual Studio retract features breaks my whole environment…
  • When using just Update-SPSolution to upgrade the WSP, remember to run Install-SPFeature on any newly created Features in your solution. It merely “installs” the Feature in the farm so you can activate it where you like. This doesn’t happen automatically when using Update-SPSolution, as opposed to the full Retract/Remove/Add/Deploy cycle.
  • When you have issues with locked .dlls because they’re still loaded in your PowerShell shell (so annoying), be aware that you can launch new shells inside your existing shell. These shells start afresh and won’t have old DLLs loaded. Be aware that you’ll have to close this shell as well, using either exit, or the following approach, which closes the shell right after the last line in the code block is executed.

      PowerShell -Command {
          # your SharePoint cmdlets go here, running against freshly loaded assemblies
      }
    
  • Script everything :)

SharePoint 2010 - Export Built-In ListViewWebPart

In SharePoint 2010 you can export WebParts by editing the page they are on and, in their Edit Menu, selecting “Export…”. However, this does not work for the built-in ListViewWebParts of lists.

But there is a way of enabling it. Every WebPart has a property called ExportMode that determines whether a WebPart is exportable or not. Its value is of the enum type [System.Web.UI.WebControls.WebParts.WebPartExportMode] and can have values like ‘None’ and ‘All’.

Setting it on the ListViewWebPart with PowerShell is rather easy:

$SiteUrl = "http://mysite"
$ListName = "My List"

$Web = Get-SPWeb $SiteUrl

# Grab the page hosting the list view web part (here a specific view page of the list)
$Page = $Web.GetFile("/Lists/" + $ListName + "/LastModified.aspx")
$WPM = $Page.GetLimitedWebPartManager([System.Web.UI.WebControls.WebParts.PersonalizationScope]::Shared)

# Assumes the list view web part is the first (and only) web part on the page
$WP = $WPM.WebParts[0]
$WP.ExportMode = [System.Web.UI.WebControls.WebParts.WebPartExportMode]::All

$WPM.SaveChanges($WP)

$Page.Update()

Enjoy!

SharePoint 2010 - Calendar View on XSLTListViewWebPart

Recently I tried changing the view of an XsltListViewWebPart to a calendar view from code, but it didn’t work.

I tried it the same way I do it for every other webpart, setting the ViewGuid property, but it kept using a normal list view of its events, which wasn’t even its default (the default was the calendar). When I investigated the issue further, I noticed something strange.

When setting the webpart’s view to the calendar view from the GUI, it worked, but it also changed the webpart from an XsltListViewWebPart to an ordinary ListViewWebPart.

SPList list = ...   // the calendar list
SPView view = ...   // the calendar view on that list

ListViewWebPart wp = new ListViewWebPart();
...                 // e.g. point the web part at the list and add it to the page
wp.ViewGuid = view.ID.ToString("B").ToUpper();

When I created a ListViewWebPart instance in code and set its view to the calendar view, it worked!