There is an article in the Guardian today about procurement, and in particular how universities are handling litigation risks by using procurement frameworks and agreements, and by moving procurement decisions more centrally. To be fair, I think the article is more concerned with buying paperclips than with making enterprise IT decisions, but it did prompt me to reflect on my experiences to date with procurement frameworks for IT solutions.
One issue I’ve seen with procurement frameworks is inbuilt bias; for example, it’s quite common to use compliance checkmarks and feature sets for product comparisons. This works reasonably well for most kinds of procurement, but can be highly misleading when procuring software and services. For example, a typical response to this kind of framework in the IT world is to pile on features to fill out the procurement matrix, regardless of whether they are usable, useful, actually work, or even make sense.
Some years ago I was involved as a consultant in creating something called the Learning Platform Framework for BECTA, and it certainly suffered from these kinds of problems. Although the framework wasn’t intended to be used as a checklist, that was certainly how it was most often used, prompting an MP to table an Early Day Motion in the Commons criticising the LPF, particularly as it disadvantaged open source solutions such as Moodle. I don’t think the LPF itself had this kind of bias in its requirements, but applying it, unmodified, as an RFP checklist did create this effect.
For example, a skewed procurement framework would be a simple way for a public procurement to sidestep policies promoting open source and open standards, and keep going back to the same big IT suppliers as before. (More commentary on this particular topic over at Computer Weekly).
I think it’s also fair to say that a poorly-considered procurement framework can end up creating as many litigation problems as it attempts to solve.
It’s interesting that IMS CEO Rob Abel’s keynote at the CETIS conference last year also made this point about “checkmark interoperability”, with this quote from Charles F. Leonhardt, Principal Technologist at Georgetown University:
“Vendors know the difference between clients’ asking for standards “compatibility” in order to check off a box on an RFP… That is why we must include true, tested, guaranteed interoperability as a priority in our purchasing decisions, and why we must pressure our current support vendors to provide it as a condition of their continuing good business relations with us.”
Of course procurement frameworks don’t just consider standards compliance but also feature requirements; however, I think the same argument often applies. There is a difference between, for example, supporting cloud deployment in theory, and having a system that will actually work when you try to do it. So a procurement framework without a rich evaluation process can simply end up benefitting the supplier willing to stretch the truth the furthest.
(I hated it when salespeople at the company I used to work for would pimp features we’d only added to get into the RFP, and which we all knew didn’t really work, rather than emphasise the features we’d put lots of effort into making work really well because customers actually used them. I started to suspect they couldn’t actually tell the difference.)
So, given all of that, what can be done in practice?
At the University of Bolton a few years back we were faced with the situation of our WebCT VLE coming to its end-of-life, and had to decide whether to purchase Blackboard, or to go with the open source Moodle platform.
The approach we used was to conduct structured user evaluation. Rather than compare feature lists between products, we identified the set of most common tasks performed by users of the current system, and had a number of staff from across different departments perform them using both the systems under evaluation. We could then compare performance and user feedback to inform the purchasing decision.
To support the users, we had some staff from Blackboard along, and we also hired a Moodle consultant for a day (he never had to actually help staff figure out how to do any of the tasks, which was instructive in itself).
This meant there was no inbuilt bias towards products with long lists of features that might never be used in practice, although it did mean we had a bias against speculative use; this would have been a problem if we were procuring a completely new system for a new purpose, but it seems a reasonable tradeoff when procuring a more-or-less drop-in replacement for an existing one.
(On the other hand, looking back at the LPF, I’m amazed at how many things were mandated in it that are now largely obsolete, and that was only written in 2006.)
Of course it wasn’t just down to user evaluation, as we also had to look at the options for services, hosting, training, and integration with our student records system, but these were aspects that could be evaluated directly by central IT services staff.
Overall, there were few significant differences in user experience and performance of tasks between systems, so the decision was made principally on these other factors (which is where Moodle won out).
Interestingly enough, if we were to conduct this process again, we would now have the option of inviting Blackboard to tender for Moodle services and hosting, which perhaps underscores the point. In the current environment, the product decision is just one of several linked considerations, and not even the most important one. Many suppliers are now happy to provide a range of products including open-source offerings, instead promoting their integration, hosting and support services – or brokering and bundling services offered by third parties.
So, if I were to do it all again…? Well, I think I’d steer well clear of the compliance/requirements checklist model we had for the LPF, and focus more on an open evaluation process. This is where I think the Guardian article gets it right: tracking communications and interactions so that suppliers have confidence there isn’t some sort of backroom deal going on that they aren’t party to. I think the user evaluation approach we used at Bolton was a good example, but it only fits some types of procurement.
For requirements I’d want to make sure there was sufficient explanation of the background and context – so that suppliers could answer to the business need rather than just tick a feature or comply with a standard. And for standards I’d make sure there was an open process to ensure real working interoperability in the context of where the systems were to be used rather than just compliance assertions and checkmarks.
However, while I agree with the Guardian article that procurement frameworks are certainly going to be even more prevalent for general procurement in Universities, I think in the area of IT solutions and services there are no easy answers, and I can’t see an LPF-type exercise being repeated any time soon.