Gartner has recently announced that an enhanced version of the Magic Quadrant will be released on 29 July. So what’s driving this change, what is it, and what does it mean to you as an AR professional?
Here comes MQ 2.0
The Gartner MQ has not really changed its physical appearance since its original introduction. The famous two-by-two matrix and dots started life on paper and were effectively shifted onto the web with no real change. Over the years, the MQ has been industrialized at the back end with a structured measurement methodology. The front end moved from a static, locked-in-PDF view to a mildly interactive view several years ago, where users could mouse over a position to read vendor-specific strengths and challenges. The degree of interactivity, however, is about to increase dramatically.
So what’s driving this minor revolution?
According to Gartner, it’s customer demand.
Users and suppliers alike have asked for greater customizability of the results, to allow adaptation to specific needs, but without any change to the core methodology. At the same time, of course, Forrester has long offered a spreadsheet version of its Forrester Wave with criteria, sub-criteria, weightings, and scores that are all visible to users, along with the option to adjust weightings to create custom views. Ovum also offers an interactive capability within its Decision Matrix research, allowing for user customization of the analysis. Gartner is either playing catch-up or surging forward to leapfrog its competitors’ capabilities. So which is it?
It’s about interactivity and customization
The newly interactive MQ offers five new capabilities that take advantage of the electronic interface. They are:
- an MQ home page (providing easy access to all MQ research, with browsing by vendor, topic, or industry sector);
- the ability to highlight selected vendors in an MQ (and have others retreat into a “grayed” background);
- zooming, to allow users to focus on a single quadrant (dramatically improving readability where dots are closely packed);
- historical comparisons (showing up to three MQs side by side – the latest and two previous versions); and, most significantly,
- criteria weighting customizability (allowing users to create their own bespoke analysis).
For AR, the new home page provides a useful and easy access point for this analysis, but it has no real impact. The important new elements are the historical comparison and weightings customization functions.
History view: Users can see the latest MQ and either one or two earlier MQs, shown side by side. The display does not specifically indicate movement between years and historical views cannot be layered to allow users to see movement clearly. They must simply eyeball the three grids and draw their own conclusions. This, of course, implies that year-on-year movement is necessarily relevant – a contentious topic among analysts, especially in cases where changes in market structure, criteria for analysis, or research personnel can lead to markedly different results. Recognizing this issue, at least in part, Gartner plans to highlight any major changes that have occurred in the basis of measurement between the current MQ and its immediate predecessor. Changes between the two earlier versions are not highlighted, and users will only be made aware of these if they delve into the research and read the notes in full. The obvious risk is that users may not bother to seek reasons for changes in the structure, but just assume the apparent movement is based on comparing like with like.
Weighting customization: The customization function uses slider bars to change the overall weightings for the major criteria assessed under “Ability to Execute” and “Completeness of Vision” – elements like product and service, viability, market understanding, marketing execution, and operations. Dots on the chart then move in response to these changes. No information is provided about the sub-criteria for any category (beyond the standard descriptions provided as text in each MQ document), nor is there any direct visibility of actual scores – a valued feature of the Forrester Wave. Gartner defends its decision not to share these details by pointing to the complexity of the model and the difficulty of creating true standardization across MQs and across MQ-writing analysts. This is a complex issue. But by not addressing it, even in part, Gartner preserves the mystery of the MQ’s underlying structure. As we have all seen, this opacity can sometimes be used to blur or explain away oddities or variations in assessments that vendors find puzzling. The ultimate fig leaf for this potentially self-serving opacity remains the disclaimer on every MQ that reads: “This publication consists of the opinions of Gartner’s research organization and should not be construed as statements of fact.”
In practical terms, as soon as a user changes the weighting of a criterion from the analyst’s selected default, that particular criterion is highlighted in orange (versus the blue used for all the standard elements). The quadrant itself is shown with orange dots to signify a customized variation, and the MQ also includes a watermark as a reminder that this is a customized view. Customized quadrants may be printed or exported for internal use, but the rules state that they cannot be used for marketing (e.g. in reprints) in any form other than the default standard. Gartner has watermarked and color-coded printed output to avoid any confusion between the “customized” and the “official Gartner” view.
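Gartner does not publish its scoring model, but the basic mechanics of slider-driven repositioning can be sketched as a weighted average of per-criterion scores. In this minimal sketch, every vendor name, criterion, score, and weight is invented for illustration; it simply shows why moving a weight moves a dot along one axis:

```python
# Hypothetical sketch of weighted-criteria positioning. Gartner's actual
# model, criteria, and scores are not public; all values below are invented.

# Per-vendor scores (0-5) on illustrative "Ability to Execute" criteria.
scores = {
    "VendorA": {"product": 4.5, "viability": 3.0, "operations": 4.0},
    "VendorB": {"product": 3.0, "viability": 4.5, "operations": 3.5},
}

def axis_position(vendor_scores, weights):
    """Weighted average of criterion scores, normalized by total weight."""
    total = sum(weights.values())
    return sum(vendor_scores[c] * w for c, w in weights.items()) / total

# Analyst's default weighting versus a user's custom slider settings.
default_weights = {"product": 3, "viability": 2, "operations": 1}
custom_weights = {"product": 1, "viability": 2, "operations": 3}

for name, vendor_scores in scores.items():
    before = axis_position(vendor_scores, default_weights)
    after = axis_position(vendor_scores, custom_weights)
    print(f"{name}: {before:.2f} -> {after:.2f}")
```

Under the default weighting the product-strong vendor sits higher; shift the sliders toward operations and the positions converge – the same relative-movement effect users will see when they customize an MQ.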
Lastly, the online nature of the new functionality makes users’ interactions with an MQ a new data point that Gartner research analysts can use in developing their insights and enhancing future iterations of an MQ. Analysts will be able to see what customized views users save, at an aggregated level, and hence how often others weight the assessment differently from their own view. This should also provide useful insight for suppliers keen to understand the true evaluation focus that drives their prospects’ decisions.
Lead us not into temptation
The new format changes nothing in terms of the information that needs to be communicated to an analyst in preparation for an MQ, and the black box remains largely impenetrable. Clients, however, will now have more information, via the historical and custom views, to help achieve a better focus in relation to both MQ cycle responses and general planning of year-round analyst communications.
The customized views, no doubt, will constitute a temptation for many. Gartner will certainly be watching out for abuse of this capability, and AR pros should do likewise. In particular, AR pros will need to look out for presentations or collateral that include charts derived from custom views – depicting, for example, the MQ weighted as one vendor believes it should be to reflect the market’s values and priorities!
What will certainly be “street legal” is the opportunity for AR to provide sales with new and subtle guidance. AR may, for example, suggest that the sales team point out to potential buyers who are Gartner clients that they can adjust the weightings to reflect a particular concern with, say, “marketing execution” – and that if they do, lo and behold, the company’s true strengths in this area will be reflected in the customized MQ.
The historical view is also likely to create some confusion. Moving up and right always looks like progress, while down and left looks negative. But criteria often change from year to year, and MQ dot positions are all relative, anyway. A company that has performed well may still appear to be going backwards if the line-up of players in the grid changes, perhaps because of mergers and industry consolidation. Without a good deal of due diligence, movement from one year to the next is highly likely to be misinterpreted. AR is going to have an important role to play here, in ensuring that the relative positions in the published MQ are understood and the right messages are included in sales communications.
All in all, the new interactive MQ offers a much improved look-and-feel and an approach to interpretation that truly utilizes the electronic access method for the first time. As ever, though, the devil will be in the detail, in how this analysis is used and – perhaps more critically – abused.
Written by Simon Levin, IIAR Board Member