[GUEST POST] Big Dogs don’t yap: the secret ingredient for MQ success

Blog courtesy of: Simon Levin (IIAR Board Member)

What is it that makes the difference when it comes to making the step up into the Leaders section of Gartner’s Magic Quadrant? Ever wondered what companies who gain recognition as Leaders have in common? Having seen four of our MQ Tune-Up clients gain Leaders status for the first time last quarter, I thought it might be interesting to go looking for some common themes or attributes.

And as it turned out, the exercise was well worth the effort, because it highlighted one key factor I’d never consciously identified before.

We’re calling it the Big Dog syndrome, and it’s all about looking the part, acting like a Leader right from the start, and, above all, believing that the top-right quadrant is your rightful home.

There’s more about this idea on The Skills Connection’s blog but the essence of it is blindingly simple. For a company to be perceived as a Leader, it has to have a leaderly air about it. It has to radiate conviction, as well as competence. It needs to put its case across well, but without the yapping, snapping desperation that marks out those that try too hard.

In other words, alongside great products and strong business fundamentals, the right spokespeople, and a real commitment to the Gartner assessment process, the soon-to-be-Leaders just had that Big Dog style and self-belief.

It was almost as if the assessment process was just there to confirm what everybody, inside and outside these companies, already knew. They were ripe and ready for their new status. It’s not that they were being appointed or anointed as Leaders; they were Leaders by right, and now was the hour.

You know what they say about ducks. I think it’s the same with Big Dogs and Leaders. If you look like a Leader, swim like a Leader and quack like a Leader, then you stand a much greater chance of being assessed as a Leader.

The key question, of course, is how we, as analyst relations professionals, can help our own companies or clients with this.

That’s a big ask, as it’s as much to do with coaching the company to bring out what’s already inside as it is about getting the style and content of presentations right. But if Big Dog attitudes and behavior really are important in breaking into the Leaders quadrant, this is an area I think we need to take very seriously.

Do you agree? Or have I been getting too close to the glue pot again?


14 thoughts on “[GUEST POST] Big Dogs don’t yap: the secret ingredient for MQ success”

  1. The inference here is that MQ position is significantly influenced by presentation as well as genuine substance and value. Doesn’t effective analysis stem from focusing on the book rather than the cover? Unless I am misunderstanding something, this post appears to raise an interesting question about the objectivity of the MQ assessment process.

  2. Interesting hypothesis, Simon. However, we do granular due diligence, including software demos, conversations with customers and end-user surveys. I’m with Dale on this one. We like attractive covers on books as much as anyone, but since our Galaxy evaluation chart is generated by a spreadsheet with weighted modeling, our scores are composed of “genuine substance and value” to the end-user customer/client.

    1. Leslie, it’s great to hear that Hypatia has a strong methodology. I would love to learn more. One question is how you get the Galaxy spreadsheet completed. Do the vendors provide the data for this? If so, are all the questions quantitative, or are some qualitative, implying that the quality of response has an impact? Also, how do you deal with variability in the quality of demos and briefings? For example, one analyst told me recently how, in one formal assessment process, one vendor had a deck and demo that addressed exactly what they had requested and were seeking, while another showed up at the meeting asking what the analyst wanted to know! How do you prevent this huge divergence in engagement strategy from affecting the data you compile to assess, and your resulting assessment? I’m not suggesting this is a Hypatia issue, or indeed a Gartner (or any other firm) issue specifically; I think this is a tough issue for everyone. As hard as people try to get a fair apples-to-apples comparison, the fact is that everyone is dependent on the quality of the input to be able to create the right output.

      Lastly, the question of style: does it have any relevance? Again, I would say that it has for all human beings. It’s not a conscious factor, but it’s the reason why more people prefer the taste of Pepsi when blindfolded but buy Coke. It’s why people choose to buy BMW and Mercedes when the “feature set” is no different from many others and the price is higher. At some point my style of engagement will influence your thinking at one level or another: you are human.

  3. Pingback: Big Dogs don’t yap: the secret ingredient for MQ success? « Hypatia Research Group Blog

  4. Dale, a fine question. Is a good assessment all about what’s in the book alone, or does the cover matter too, and if so, how much? In a perfect world there is no doubt that you would want every analyst to have the time and ability to assess a company purely on merit. They should be able to look past poor presentation or lack of detail in assessment responses; they should dig deep themselves and apply insight and judgement when companies choose to play their cards close to their chest; they should look past ineffective demos and poor company representatives who can only answer “what” but not “why” questions, and see beyond fuzzy vision statements. But we find that in reality most analysts just don’t have the time, or often the inclination. Working with a myriad of clients, many of whom have had very solid business stories to tell, it is clear that those who can tell their story most effectively and clearly, who can demonstrate evidence of the success of their strategy and plans, who can explain and prove the efficacy of their plans and the reasoning behind them, those who look like the real thing: these are the companies that win in the assessment stakes. That is, the book alone is not enough; you need to know how to structure it, package it and put the right cover on it.

    1. Agreed with both Dale and Simon here. I’ve seen analysts who don’t even bother doing a scan of the market to decide whom to include in evaluations, so just by getting in front of them an AR pro can score a few quick wins. Of course it shouldn’t be like that, and analysts should do their homework. Note that the picture is even worse for analysts who don’t do “landmark reports”, as then the issue is one of selective coverage, i.e. the analyst will only cover those who pay them.

      Those are some of the reasons why having an AR function at a vendor is a must have.

      1. Ludo, this statement is complete and utter rubbish. Do you have any idea how many small vendors with whom I, alone, take briefings? And fewer than 1 percent of these ever turn into paying clients. You have to retract this statement.

        1. Tony, I did not cite names. And I shall add that I quite agree with Dale. There’s a lot to be said for what I call the “old-fashioned analysts”, you know the ones: those with a proper education (the humanists out there, not necessarily acquired at universities) and long experience actually practising “in the field”. The ones who actually analyse.

          I also agree that some vendors behave badly (and I have a few examples in mind; probably the worst I’ve seen was the Get The Facts campaign organised against Linux), but I maintain that I’ve seen many analysts not scanning the market to make sure they list all players, and others who cover selectively. Bandwidth and unresponsiveness from vendors are definitely not helping.

  5. Leslie/Simon – I guess you could argue that an ability to present your strategy, solution and GTM well is an integral part of ‘ability to execute’.

    However, something that I have come across a lot with vendors over the years (and I think it is getting worse, not better) is product marketing and other official spokespeople being muddled, vague or ignorant of important specifics, while those working in the field on a daily basis are absolutely crystal clear on what their solution does and why (at least once the solution has been in the field for a while).

    We obviously do a lot of end user research, so get impressions of solutions first hand from customers, but I also find it useful to speak with project managers and architects from either the vendor’s professional services group and/or key consulting/integration partners, and this usually occurs independently of AR as we all use our network of contacts.

    I think AR people and product managers often overlook the fact that formal briefings are just one input for many of us, and that better analysts always seek to corroborate stories, especially those told by young product managers with little real-world experience or execs who know that whatever they say on the record can affect the company’s share price.

    My own view is that the Gartner MQ is an important tool for many, and it makes sense for vendors to ensure they are accurately represented, acknowledging that no analyst or process is perfect; so the easier you can make it for the analyst to assimilate key information, the better (which is where I think Simon’s services come in).

    However, I would like to think that the Gartner MQ process is generally immune to posturing and spin.

  6. Ludo

    Re your comment “Note that the picture is even worse for analysts who don’t do “landmark reports”, as then the issue is that of selective coverage -i.e. the analyst will only cover whom pays them. ”

    I think you highlight a real issue with bandwidth, but I only know of one firm that has linked coverage to commercial engagement, so I think you are a bit out of order with that sweeping generalisation.

    It is hard prioritising coverage with a small team, but most of the guys I know running smaller firms try their best to do it objectively. We, for example, focus on a relatively small number of large and influential vendors who collectively account for the bulk of the activity in the IT space, then try to identify interesting niche players that help us to understand the way in which the bleeding edge is developing.

    The fact that most tier 1 vendors are clients of Freeform Dynamics is not the reason we cover them. Our only problem historically has been with vendors that make it very hard for smaller firms to engage as analysts (because of unsophisticated tiering within AR), even though ironically there are examples of where such firms have engaged us as suppliers for bespoke research and consulting work (which is the reverse of your argument, Ludo). Either way, though, we can always form a view from customers, partners and so on.

    The real danger is when you get the odd smaller client who you get to know intimately as part of an engagement. You then need to be extremely disciplined about not giving them a disproportionate amount of air time in your commentary. I think we are pretty good at managing this as we are very aware of the danger, but it is an easy trap to fall into.

    1. Dale, you and I have the subtitles… Yes, some vendors are unnecessarily obnoxious and bullying. Nevertheless, some analysts either blatantly avoid covering non-clients or don’t do a basic market scan when writing up landscape reports or landmark evaluations. The causes are, of course, bandwidth but also sloppiness, complacency or questionable ethics.

      1. Interesting take, Ludo, but I don’t deal with enough other analyst firms on a day to day basis to comment in definitive terms.

        Having said this, I have to agree that a focus on integrity, quality and objectivity is increasingly becoming an impediment to doing business in the IT industry.

        Five years ago, for example, we would get quizzed a lot on our primary research methodology, our approach to sampling, quality control, etc., as people really valued getting good solid insights from their research investment. Today, far fewer care, and most of the primary research budgets have shifted to funding dumbed-down email-based polls with horribly skewed samples made up of junior people who have filled out the survey for the chance to win an iPad or get a free Amazon voucher.

        The sad thing is that IT vendors then try to use this stuff for planning and message development, and end up spinning their wheels or failing to connect with buyers because the ‘research’ doesn’t truly represent the views and needs of real decision makers and influencers in their target market.

        And when we challenge some of the ludicrous claims made off the back of cheap and nasty polls (a lot of this stuff even gets presented to us in briefings as serious research), the less scrupulous hold up their hands and point to the press coverage (“gotta play the game”), those who don’t know any better look confused (“but surely surveys must reflect reality!”), and the more experienced guys who should know better simply avoid eye contact and try to change the subject quickly (“we just put that up to stimulate discussion”). Hmmm.

        Meanwhile, we are having to ask “Which bit of ‘we will not be mentioning or endorsing your product’ didn’t you understand?” much more frequently, on the basis that it was just assumed we were in the product promotion game based on experience with other firms.

        I could rant on about papers failing to declare sponsorship, research reports failing to disclose methodology (or even the sample composition), big analyst firms making PR statements off the back of research sitting behind a paywall, etc, but I think I have ranted enough 🙂 Suffice it to say that I do get the impression that standards are generally slipping. Result of economic pressure, perhaps?

        Now I am not claiming that anyone is perfect; we all have to make judgements on where to draw the line and we all get it wrong sometimes, but we should not give up caring about a lot of this stuff.

        And one thing I would come back to you on, Ludo, is that I see large firms doing at least as many questionable things as smaller ones from an integrity and objectivity perspective, and some vendors with deep pockets putting a lot of pressure on firms of all sizes to fall in line with their market-making activities.

  7. Pingback: Wrap-up: Netscout vs. Gartner re. Magic Quadrant positioning | The IIAR Blog

