Gartner has recently announced that an enhanced version of the Magic Quadrant will be released on 29 July. So what’s driving this change, what is it, and what does it mean to you as an AR professional?
Here comes MQ 2.0
The Gartner MQ has not really changed its physical appearance since its original introduction. The famous two-by-two matrix and dots started life on paper and were effectively shifted onto the web with no real change. Over the years, the MQ has been industrialized at the back end with a structured measurement methodology. The front end moved from a static, locked-in-PDF view to a mildly interactive view several years ago, where users could mouse over a position to read vendor-specific strengths and challenges. The degree of interactivity, however, is about to increase dramatically.
So what’s driving this minor revolution?
According to Gartner, it’s customer demand.
Users and suppliers alike have asked for greater customizability of the results, to allow adaptation to specific needs, but without any change to the core methodology. At the same time, of course, Forrester has long offered a spreadsheet version of its Forrester Wave with criteria, sub-criteria, weightings, and scores that are all visible to users and with the option to adjust weightings to create custom views. Ovum also offers an interactive capability within its Decision Matrix research, allowing for user customization of the analysis. Gartner is either playing catch-up or surging forward to leapfrog its competitors’ capabilities. So which is it?
It’s about interactivity and customization
The newly interactive MQ offers five new capabilities that take advantage of the electronic interface. They are:
- an MQ home page (providing easy access to all MQ research, with browsing by vendor, topic, or industry sector);
- the ability to highlight selected vendors in an MQ (and have others retreat into a “grayed” background);
- zooming, to allow users to focus on a single quadrant (dramatically improving readability where dots are closely packed);
- historical comparisons (showing up to three MQs side by side – the latest and two previous versions); and, most significantly,
- criteria weighting customizability (allowing users to create their own bespoke analysis).
For AR, the new home page provides a useful and easy access point for this analysis, but it has no real impact. The important new elements are the historical comparison and weightings customization functions.
History view: Users can see the latest MQ and either one or two earlier MQs, shown side by side. The display does not specifically indicate movement between years, and historical views cannot be layered to allow users to see movement clearly. Users must simply eyeball the grids and draw their own conclusions. This, of course, implies that year-on-year movement is necessarily relevant – a contentious topic among analysts, especially in cases where changes in market structure, criteria for analysis, or research personnel can lead to markedly different results. Recognizing this issue, at least in part, Gartner plans to highlight any major changes that have occurred in the basis of measurement between the current MQ and its immediate predecessor. Changes between the two earlier versions are not highlighted, and users will only be made aware of these if they delve into the research and read the notes in full. The obvious risk is that users may not bother to seek reasons for changes in the structure, but simply assume the apparent movement is based on comparing like with like.
Weighting Customization: The customization function uses slider bars to change overall weightings for the major criteria assessed under “Ability to execute” and “Vision” – elements like product and service, viability, market understanding, marketing execution, and operations. Dots on the chart then move in response to these changes. No information is provided about the sub-criteria for any category (beyond the standard descriptions provided as text in each MQ document). Nor is there any direct visibility of actual scores – a valued feature of the Forrester Wave. Gartner defends its decision not to share these details by referring to the complexity of the model and the difficulty of creating true standardization across MQs and across MQ-writing analysts. This is a complex issue. But by not addressing it, even in part, Gartner upholds the mystery of the MQ’s underlying structure. As we have all seen, this can sometimes be used to blur or explain away oddities or variations in assessments that vendors may find puzzling. The ultimate fig leaf for this potentially self-serving opacity remains the disclaimer on every MQ that reads: “This publication consists of the opinions of Gartner’s research organization and should not be construed as statements of fact.”
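Although Gartner does not publish its scoring model, the slider mechanics can be pictured as a weighted average of criterion scores on each axis. The sketch below is purely illustrative: the criterion names echo those mentioned above, but the scores, weights, and normalization are invented for the example and do not reflect Gartner’s actual, unpublished model.

```python
# Hypothetical illustration of how reweighting criteria could move an MQ dot.
# All scores and weights are invented; Gartner's real model is not public.

def axis_position(scores, weights):
    """Weighted average of criterion scores, normalized by the total weight."""
    total = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total

# Hypothetical 1-5 scores for one vendor on "Ability to execute" criteria.
execute_scores = {"product_service": 4.2, "viability": 3.8, "operations": 3.1}

# Analyst's default weightings versus a user's customized weightings.
default_weights = {"product_service": 0.5, "viability": 0.3, "operations": 0.2}
custom_weights = {"product_service": 0.2, "viability": 0.3, "operations": 0.5}

print(axis_position(execute_scores, default_weights))  # default view: 3.86
print(axis_position(execute_scores, custom_weights))   # customized view: 3.53
```

Note that the dot moves even though the underlying scores are unchanged, which is precisely why a customized view can flatter (or punish) a vendor whose strengths are concentrated in particular criteria.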
In practical terms, as soon as a user changes the weighting of a criterion from the analyst’s selected default, that particular criterion is highlighted in orange (versus the blue used for all the standard elements). The quadrant itself is shown with orange dots, to signify a customized variation, and the MQ also includes a watermark as a reminder that this is a customized view. Customized quadrants may be printed or exported for internal use, but the rules state that they cannot be used for marketing (e.g. in reprints) in any form other than the default standard. Gartner has watermarked and color-coded printed output to avoid any confusion between the “customized” and the “official Gartner” view.
Lastly, the online nature of the new functionality makes users’ interactions with an MQ a new data point that Gartner research analysts can use in developing their insights and enhancing future iterations of an MQ. Analysts will be able to see what customized views users save, at an aggregated level, and hence how often others weight the assessment differently from their own view. This should also provide useful insight for suppliers keen to understand the true evaluation focus that drives their prospects’ decisions.
Lead us not into temptation
The new format changes nothing in terms of the information that needs to be communicated to an analyst in preparation for an MQ, and the black box remains largely impenetrable. Clients, however, will now have more information, via the historical and custom views, to help achieve a better focus in relation to both MQ cycle responses and general planning of year-round analyst communications.
The customized views, no doubt, will constitute a temptation for many. Gartner will certainly be watching out for abuse of this capability and AR pros should do likewise. In particular, they will need to look out for presentations or collateral including charts derived from custom views depicting, for example, the MQ weighted as one vendor believes it should be to reflect the market’s values and priorities!
What will certainly be “street legal” will be the opportunity for AR to provide sales with new and subtle guidance. AR may, for example, suggest the sales team points out to potential buyers who are Gartner clients that they can adjust the weightings to reflect a particular concern with, say, “marketing execution” – and that if they do, lo and behold, the company’s true strengths in this area will be reflected in the customized MQ.
The historical view is also likely to create some confusion. Moving up and right always looks like progress, while down and left looks negative. But criteria often change from year to year, and MQ dot positions are all relative, anyway. A company that has performed well may still appear to be going backwards if the line-up of players in the grid changes, perhaps because of mergers and industry consolidation. Without a good deal of due diligence, movement from one year to the next is highly likely to be misinterpreted. AR is going to have an important role to play here, in ensuring that the relative positions in the published MQ are understood and the right messages are included in sales communications.
All in all, the new interactive MQ offers a much improved look-and-feel and an approach to interpretation that truly utilizes the electronic access method for the first time. As ever, though, the devil will be in the detail, in how this analysis is used and – perhaps more critically – abused.
12 thoughts on “Examining The New Gartner Interactive Magic Quadrant”
The MQ’s biggest strength and biggest weakness is that it has been based on purely subjective rather than objective analysis, as it has taken no account of customer context.
This interactivity appears on the surface to be a step towards objectivity, but I am not convinced that you can be properly objective if you constrain your analysis to a single category of solution.
To be truly objective, you need to look across categories. For example, if you are trying to solve an information overload problem, your first question should not be which storage array to invest in, but whether to focus on boosting storage capacity or implementing better information management software.
Still, taking customer needs and preferences into account within a category is a welcome step forward, though I can’t wait to see how vendors play the weighting manipulation game 🙂
There will always be an element of subjectivity in analysts; otherwise they’d be real bores. More seriously, bias can be insidious and induced via taxonomies (like the infamous ‘cloud market’), background (past employers, experience as an end-user, etc.), and more.
On the MQ, I personally rate Gartner’s ethics rather higher than those of some other firms, especially vendor-facing ones, that produce MQ-lookalike vendor evaluations.
I however agree with Simon and yourself on the black box and have made this point recently to Jenni Lehman, Gartner’s GVP research methodologies: we expect better from Gartner than leaving us with even the trace of a doubt that their Magic Quadrant could be hexed with black magic.
On the positive side, one point missed by Simon is that this customisation makes the MQ a better short-listing tool for end-users, which in turn means its impact isn’t likely to diminish any time soon. AR Pros take note.
Finally, elaborating on the detailed description by Simon, the history view is the most important IMHO. Since all ratings are relative, I predict some companies are in for a nasty surprise. Again, AR pros make sure you understand this.
I think you misunderstand. Subjective is not good or bad; it just means that the focus is on comparing the attributes of ‘things’ without worrying too much about the different purposes those things could be put to – essential to come up with a generic assessment. I therefore wasn’t having a knock at MQs, which are really valuable, and I certainly wasn’t implying anything about ethics or quality. The point is that historically MQs have not attempted to take into account how customer needs, constraints and scenarios vary, so you knew what they were for (comparing apples with apples according to Gartner’s declared criteria once you had decided that an apple was what you needed). Now that the criteria/weightings have become variable, the nature of MQs changes, which could introduce an element of confusion. It really depends how well it is done, but to do it well and meaningfully is really hard. Let’s see how it turns out in practice.
Isn’t that a non sequitur? I agree with both propositions, but surely the change is positive if you go along the one quadrant doesn’t fit all argument?
Sorry Ludo – our brains are working quite differently as usual 🙂
I think perhaps it might be down to differences in perspective. Remember that our core research at Freeform is pretty much totally focused on investigating the demand side of the equation – customer needs, culture, preferences, constraints, history, buying behaviour and so on, and how similar types of investment play out differently depending on the customer scenario.
We are therefore very suspicious of universal judgements on solutions and vendors, because what’s good for a small plumbers’ merchant in Florence is unlikely to be good for an investment bank in New York, an import/distribution company in Dubai, or a charity in Stockholm. What’s most appropriate for an organisation’s needs is also massively determined by the skills, resources and investments already in place. Two organisations might look the same, but if one has a strategic commitment to Oracle and the other is a Big Blue shop through and through, then what’s right for them will often be different. Geographic location is another obvious factor. That great North American solution might be a total non-starter if you are based in Italy, because of lack of localisation and local support.
To me, subjective solution/supplier-centric tools like MQs that articulate generalised conclusions have therefore always looked like quite blunt instruments to support real-world decision making, as they take no account of context. They are clearly useful, but you need to be aware that they are comparing options based on someone else’s criteria, not yours. They are therefore just one of many necessary inputs.
I guess when it comes right down to it, my fundamental concern is that MQs are the wrong starting point for developing more objective decision-making tools, and the danger is that the interactive versions just pay lip service to the ‘needs’ side of the equation, which could create confusion. When I look at the Gartner material published on the new interactive MQs, for example, I find it hard to map the sliders shown under the ‘Customize’ tab onto the needs-based decision criteria that are likely to matter to individual customers. Vendors, on the other hand, can have a field day with those sliders, hence my previous comment about sitting back and watching the antics they get up to 🙂
Anyway, I have now written far more on this than I was intending when I made my original comment – you’ve drawn me in again Ludo 🙂
Bottom line for me is the changes look good, but the fundamental subjective nature of the beast appears to remain the same.
Dale, the question you ask and how it’s formulated IS inducing a subjective bias… Surveying people doesn’t in itself make the data objective. And that’s before we even discuss the sample population.
Hah – interesting – one person’s objective view is another one’s subjective opinion. From where I am sitting (notionally shoulder to shoulder with the client), an MQ is subjective because it is based on Gartner’s view of what’s important, not the client’s. Your English is better than mine though Ludo, so perhaps I am using the wrong words. In theory, taking customer weightings on board could make MQs more objective, or perhaps ‘purposeful’ is a better term, but that only works in practice if the customisation sliders relate to things that help to define the customer requirement more precisely.
Your use of the term ‘subjective bias’ suggests you believe that every Gartner MQ is underpinned by a universal and absolute definition of what’s important, and that if a customer wants to take a different view, then that’s a bad thing. I have to say that subject to the limitations of the customisation parameters I mentioned, I am with Gartner rather than you on this one in believing that client differences should be accommodated.
On a specific point, not sure what any of this has to do with survey data and sampling – I mentioned our buyer behaviour research so people know I am not making shit up when I refer to customer requirements and ‘care abouts’ varying so widely out there, even in the context of a decision within a single MQ category.
Perhaps we need to continue this discussion over a nice bottle of wine at some point 🙂
I agree Dale, and was only making the point that truth is in the eye of the beholder. In the end, we’re manipulating intellectual constructs, and subjectivity is the norm rather than the exception. We have to accept this, and analysts have to be able to defend their research, which they are brilliant at, usually.
However, some things are better presented as opinions and should be labelled as such. They would only gain credibility. This applies to the Gartner weightings and ratings, to some surveys, and to other reports presented as ‘facts’ when they’re sometimes only a ‘selection of relevant facts.’
Ludo – you nailed it with your last paragraph!