By: Simon Levin, The Skills Connection / IIAR UK Co-Chapter Lead
We saw an interesting blogging spat last week between Stanton Jones of ISG and Lydia Leong at Gartner, with the flames fanned by tweeted comments from Phil Fersht of HfS. The row was centred on some research published recently by Lydia on managed hosting providers, but its ramifications are much wider.
For those who haven’t yet followed the Twitter feeds and blog links, let me try to summarise what’s going on.
Stanton’s charge is that Magic Quadrants serve a purpose by offering insight into vendors and products, but that the high-level nature of the analysis means they are poor primary tools for making choices. He emphasises the lack of nuance possible in a written article, compared with the detailed, customised insight that can be provided via a consulting engagement. And, of course, he is right.
Lydia Leong’s argument is that written research may be Gartner’s calling card, but it’s not Gartner’s service. Her perspective – not all that different from Stanton’s, really – is that the Magic Quadrant is just one part of the selection process. After all, no company is going to make its purchase decisions solely based on reading one piece of research, is it? Clients need the generic research to be put in the context of their specific needs. That is achieved not by reading the note but by engaging the analyst in an inquiry, where the analyst can be far more specific. And, of course, Lydia is right, too.
Always the bridesmaid, never the bride
So can both Stanton and Lydia really be right? Well, yes and no. Life, as usual, is not quite as simple as each of them would like to make out.
Let’s start with Lydia’s position. Can a client get effective, customised purchasing advice through speaking to the analyst in the course of an inquiry? The answer to this varies from analyst to analyst – and that assumes, first of all, that the client company has actually paid for inquiry privileges. The reason for the variation is that few analysts have ever implemented anything in anger, and even if they have, it was usually long ago. Their knowledge is always one step removed from the front line.
Lydia refers to potentially handling a thousand inquiries a year (though few analysts ever reach that total – or, at least, not without gaming the system). Experience like this can obviously teach the analyst a lot about the buying centre’s issues and concerns. The analyst’s perspective certainly provides valid input to the buying process and contributes an aggregated view that no one working on a succession of individual projects could ever achieve. Against that, the analyst is never tied into the day-to-day grind of having to live with a purchasing decision and make the technology work.
A client that lacked experience of deploying such a solution would be left exposed to many risk factors if the analyst came to be relied on as the only source of advice. The informed view of someone who is working at the coalface would provide a more valuable perspective on the detailed pitfalls around any given product or supplier, though, again, these opinions could be biased and would necessarily be formed on a limited base of data.
In other words, both approaches are less than perfect. You pay your money, and you take your choice. Or, as many companies do when it’s a vital decision, you get your answers from both sources.
You’ve got to be in it to win it
Lastly, there’s one other point to make in support of Stanton’s argument: in practice, the way advisory research is used is often badly flawed.
It goes without saying that a graphic device like a Magic Quadrant or a Wave is not intended to depict right and wrong choices.
Gartner and Forrester both go out of their way to stress the point. As Gartner comments: “It is crucial to look beyond Magic Quadrant Leaders when selecting a vendor. The vendor that is perfect for your needs may be a Niche player.”
That hardly needs to be said, but it is. Yet even though the point is spelled out clearly, it is often ignored.
The truth is that Niche players often fail to get the consideration they deserve because many companies create their shortlists just by picking out the Leaders.
If that’s the case, it hardly matters who the buyer turns to next for advice. If all buyers look at is a comparison of the players in the Leaders quadrant, there is no chance they will ever discover the best solution that may be hiding in the Visionaries or Niche quadrants. Worse still, the very best solution may not actually appear anywhere at all in the research note – because this solution comes from a local player, rather than a global one, or because the vendor is a newcomer and not yet large enough to meet the inclusion criteria.
So where does this leave IT suppliers? Well, it leaves them exactly where they’ve always been.
The advice is just what it always was. If you want to appear on shortlists, first get into the Magic Quadrant and then, ideally, work your way into the Leaders quadrant. Beyond that, whether buyers turn to a sourcing specialist or a research analyst for detailed advice is likely to be irrelevant. If you’re not in the game, you’re out of it – and in that case, which type of analyst or consultant the buyer consults will hardly matter to you.
4 thoughts on “[Guest Post] Does the consulting approach beat published research?”
Simon, why do you say “yes and no” to agreeing with them? There’s no contradiction between their viewpoints as you present them, but perhaps they are talking at cross-purposes. What do you disagree with? No-one at Gartner presents the MQ as the primary tool for product selection.
For me, the “no” is that neither is right in absolute terms. Is either alternative definitely correct? I don’t think so. As for whether anyone at Gartner presents the MQ as the primary tool for product selection – well, there I would have to say: a primary tool, yes. Do they say buy only from Leaders? Absolutely not. Do we all know companies who use the dot position to drive shortlisting nevertheless? Well, I can only speak for myself – but for me the answer is most definitely yes.
A quick note: You write, “Few analysts have ever implemented anything in anger, and even if they have, it was usually long ago.”
This depends on the analyst firm, really. Some analysts are pretty much entirely theoreticians, especially the ones who come right out of school into an analyst role; many of them aren’t even really technical people.
Gartner tends to hire people who have been practitioners, typically with 10+ years of experience by the time they join us. Some of the folks, like the ones who work in Gartner’s IT Professionals group (formed out of the Burton acquisition) are intensely technical and generally remain so with some degree of hands-on work.
But I will concede the “long ago” piece. Some analysts like myself try to keep their hands in technology in our spare time, but it’s very different from actually being enveloped in implementation on a day-to-day basis. (On the other hand, it’s not really any farther removed than most director or VP-level roles in organizations large enough that people in mid-to-senior-level IT management are no longer hands-on.)
Lydia – thanks for joining the conversation. The 10+ years is a slightly misleading stat, as in many cases that is supplier rather than user experience, which again puts the analyst one step removed from the “heat of battle”.
I am very interested to know how Gartner may be institutionalizing the flow of knowledge from the Burton resources into research processes such as the MQ. That certainly sounds like an interesting way to inject some direct experience – if it’s practical.