What Makes a Good Equity Research Report?

Eight criteria professional investors use to evaluate research quality — and why most coverage fails at least three of them.

Most equity research is descriptive. It summarizes what a company does, reports what the financials show, and anchors to a price target that sits 10–15% above the current price. It tells you what you can find in the 10-K and what the Street already thinks. That is not research — it is documentation.

Good equity research is adversarial. It starts with a question the market is getting wrong and builds a structured argument for why. It doesn't just value the company — it explains the specific driver of variant return and maps the path to being proven right.

Here are the eight criteria that separate institutional-quality research from generic coverage. These are also the criteria a PM or analyst should use when evaluating third-party research.

1. Stated Variant Perception

This is the most important criterion and the one most often missing. Variant perception is a specific, material way the analyst's view differs from market consensus — and an explicit argument for why that difference is correct.

It is not enough to be bullish on a stock the Street is also bullish on. If your thesis is "the market is underestimating long-term revenue growth," you need to explain precisely why — what data supports a different view, what the consensus is missing, and what would change your mind.

A research report without variant perception is a description of a company, not an investment thesis. It has no information value for a portfolio manager who already reads SEC filings.

Test: Can you state the variant perception in one sentence? "The market is underestimating X because Y, and Z data supports a different conclusion." If you can't, the thesis isn't developed enough.

2. A Falsifiable Thesis

A good investment thesis must be capable of being proven wrong. If no evidence could change your conclusion, it isn't a thesis — it's a narrative.

Institutional-quality research defines the conditions under which the thesis fails. This forces intellectual honesty and gives the reader a clear framework for monitoring the position. It also serves as a discipline mechanism: if the bear case conditions materialize and you haven't updated the thesis, something is wrong with your process.

What this looks like in practice: "The bull case breaks if Azure quarterly growth falls below 25% for two consecutive quarters or gross margin on cloud services decelerates more than 200bps year-over-year."

Specific, observable, actionable. Not "if the business gets worse."
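Break conditions phrased this way can be encoded as an automated monitor. A minimal sketch, using the Azure example above; the thresholds, function name, and data shapes are illustrative, not a real monitoring system:

```python
# Sketch: encode the example break conditions ("Azure growth below 25% for two
# consecutive quarters, or cloud gross margin down more than 200bps YoY")
# as a simple check that can run against each new quarter of data.

def thesis_broken(quarterly_growth, margin_delta_bps):
    """quarterly_growth: most-recent-last list of YoY growth rates (0.27 = 27%).
    margin_delta_bps: YoY change in cloud gross margin, in basis points."""
    two_weak_quarters = (
        len(quarterly_growth) >= 2
        and quarterly_growth[-1] < 0.25
        and quarterly_growth[-2] < 0.25
    )
    margin_deceleration = margin_delta_bps < -200
    return two_weak_quarters or margin_deceleration

thesis_broken([0.31, 0.28, 0.24], -150)  # one weak quarter only -> False
thesis_broken([0.31, 0.24, 0.23], -150)  # two consecutive weak quarters -> True
```

The point is not automation for its own sake: if the break condition cannot be written as a test against observable data, it was never specific enough.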

3. Multi-Method Valuation, Triangulated

A single valuation method is a single point of failure. A DCF alone is only as good as your terminal value assumptions. Comps alone are only as good as your peer selection. Any single method can be manipulated — consciously or unconsciously — to produce a desired conclusion.

Rigorous research uses at least three methods and triangulates them: typically an intrinsic method such as a DCF, a relative method such as trading comparables, and a third check such as precedent transactions or a sum-of-the-parts.

When the three methods agree, you have high confidence in the range. When they disagree materially, the report should explain why — which method is most appropriate for this company and this moment in the business cycle, and why the others are less reliable in this specific case.

A price target without a supporting valuation framework is an opinion, not analysis.
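The triangulation step itself is mechanical once the per-method values exist. A minimal sketch, assuming three per-share estimates; the method names and the 15% dispersion cutoff are illustrative assumptions:

```python
# Sketch: triangulate three valuation methods into a range and flag
# disagreement that the report must explain.

def triangulate(estimates):
    """estimates: dict of method name -> per-share value."""
    values = sorted(estimates.values())
    low, high = values[0], values[-1]
    midpoint = sum(values) / len(values)
    # Wide dispersion means the methods are telling different stories;
    # the report must argue which one fits this company best.
    needs_reconciliation = (high - low) / midpoint > 0.15
    return {"low": low, "high": high, "mid": round(midpoint, 2),
            "needs_reconciliation": needs_reconciliation}

triangulate({"dcf": 112.0, "comps": 104.0, "precedent": 120.0})
```

When `needs_reconciliation` is true, the output is not a price target yet; it is an open question the analysis has to close.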

4. Explicit Scenario Modeling with Probability Weights

Bull/base/bear scenarios are only useful if they are (a) genuinely different from each other, (b) built on distinct, explicitly stated assumptions, and (c) assigned probability weights that sum to 100%.

The worst version of scenario modeling: three cases where the assumptions differ only in growth rates, the bear case price target is still above the current price, and no probability is assigned to any of them. This is theater, not analysis.

Good scenario modeling forces the analyst to think concretely about what actually has to be true for the bull case to materialize — and to honestly assess how likely that is. If the bull case requires a specific regulatory outcome, a competitor misstep, and margin expansion beyond anything in the company's history, the probability weight should reflect that.

The probability-weighted expected value of the three scenarios is the actual price target. Not the base case price target. Not the midpoint. The probability-weighted blend.
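The arithmetic is simple and worth making explicit. A sketch with illustrative numbers; note that the blended target (137.5) sits below the base case (140) because the bear case drags harder than the bull case lifts:

```python
# Sketch: the price target is the probability-weighted blend of the three
# scenarios, not the base case. All figures are illustrative.

scenarios = {
    "bull": {"prob": 0.25, "target": 180.0},
    "base": {"prob": 0.50, "target": 140.0},
    "bear": {"prob": 0.25, "target": 90.0},
}

# Weights must sum to 100% or the scenarios are not a probability distribution.
assert abs(sum(s["prob"] for s in scenarios.values()) - 1.0) < 1e-9

price_target = sum(s["prob"] * s["target"] for s in scenarios.values())
# 0.25 * 180 + 0.50 * 140 + 0.25 * 90 = 45 + 70 + 22.5 = 137.5
```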

5. Substantive Risk Analysis

A risk section that lists "competition," "regulatory change," and "macro conditions" as risks is not risk analysis. Every company faces those. A list of generic categories with no probability or magnitude assessment adds nothing.

Good risk analysis does three things:

  1. Identifies specific risks — not "regulatory risk" but "the FTC's ongoing investigation into [specific practice] which has a precedent in [specific case] and could result in [specific outcome]"
  2. Estimates probability and magnitude — what is the probability this materializes in the next 12 months? If it does, what is the impact on intrinsic value?
  3. Identifies which risks are in the price — the market has already priced some risks. Only the risks the market is underweighting or mis-modeling create opportunity (in either direction).

A risk is only analytically useful if it affects the valuation. If the risk scenario doesn't flow through to a price target, the risk section is decorative.
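One way to force risks through to the valuation is to express each as a probability times a per-share impact. A sketch under illustrative assumptions; the risk names, probabilities, and impacts are made up, not estimates for any real company:

```python
# Sketch: each identified risk carries a 12-month probability and an impact
# on intrinsic value per share, so the risk section flows into the target.

intrinsic_value = 150.0
risks = [
    {"name": "Regulatory remedy on [practice]", "prob": 0.30, "impact": -25.0},
    {"name": "Loss of a concentrated customer", "prob": 0.10, "impact": -40.0},
]

# Expected haircut to intrinsic value from the identified risks.
expected_impact = sum(r["prob"] * r["impact"] for r in risks)
risk_adjusted_value = intrinsic_value + expected_impact
# 150 + (0.30 * -25 + 0.10 * -40) = 150 - 11.5 = 138.5
```

A risk that cannot be given even a rough probability and magnitude belongs in the monitoring plan, not the risk section.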

6. Quantitative Peer Benchmarking

Competitive positioning analysis must be quantitative. "The company has strong competitive advantages" is an observation. A structured peer matrix showing EV/EBITDA at 18x vs. a peer median of 14x, gross margins of 72% vs. a peer median of 58%, and revenue growth of 24% vs. a peer median of 11% — that is analysis.

The peer set matters as much as the metrics. Gaming the peer group to make the subject company look cheap (or expensive) is a common failure mode. Peer selection should be documented and defensible: companies in the same industry, serving similar end markets, at a comparable stage of maturity.

Relative valuation without a clearly defined peer group is not relative valuation. It is cherry-picking.
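A peer matrix of this kind reduces to medians over a documented peer set. A minimal sketch using the figures from the example above; the tickers are placeholders and the numbers illustrative:

```python
# Sketch: subject company vs. peer medians across a few standard metrics.
from statistics import median

peers = {
    "PEER1": {"ev_ebitda": 14.0, "gross_margin": 0.55, "rev_growth": 0.10},
    "PEER2": {"ev_ebitda": 13.5, "gross_margin": 0.58, "rev_growth": 0.11},
    "PEER3": {"ev_ebitda": 16.0, "gross_margin": 0.61, "rev_growth": 0.14},
}
subject = {"ev_ebitda": 18.0, "gross_margin": 0.72, "rev_growth": 0.24}

comparison = {
    metric: {
        "subject": subject[metric],
        "peer_median": median(p[metric] for p in peers.values()),
    }
    for metric in subject
}
# e.g. subject trades at 18.0x EV/EBITDA vs. a peer median of 14.0x
```

The dictionary of peers is the documentation: if the peer set changes between reports, the diff is visible.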

7. Internal Logical Consistency

The single most common quality failure in equity research: the thesis, the valuation, and the risk section contradict each other.

Examples of internal inconsistency:

  1. A bullish thesis paired with a DCF that implies only 5% upside
  2. A risk section describing material threats that never flow through to the scenarios or the price target
  3. A bear case whose probability weight ignores risks the report itself calls likely

Internal consistency is the easiest quality check to run and the most revealing. A well-constructed research report should be able to withstand the question: "Do these sections tell the same story?"

8. A Catalyst Map

An investment thesis without a catalyst map is a valuation exercise, not a trade. Markets can remain mispriced for a long time. A good research report identifies the specific events that will cause the market to reprice — and provides an approximate timeline.

Catalysts should be specific and time-bounded:

  1. An earnings date on which a disputed metric (e.g., segment-level margins) is first disclosed
  2. A regulatory or legal decision expected within a stated quarter
  3. A product launch, contract renewal, or capital-return announcement with a committed date

Without a catalyst map, a portfolio manager cannot size the position appropriately, set a monitoring schedule, or know when to exit.
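A catalyst map is naturally structured data: event, expected window, and the direction it should move the thesis. A sketch with hypothetical entries and dates:

```python
# Sketch: a catalyst map as structured data. All events and dates are
# illustrative placeholders, not real company events.
from datetime import date

catalysts = [
    {"event": "Q3 earnings: first segment-level margin disclosure",
     "expected": date(2025, 10, 28), "direction": "bull"},
    {"event": "Regulatory ruling on [pending case]",
     "expected": date(2026, 1, 15), "direction": "bear"},
    {"event": "Analyst day: updated long-term margin guidance",
     "expected": date(2026, 3, 5), "direction": "bull"},
]

# The monitoring schedule is just the map sorted by date.
schedule = sorted(catalysts, key=lambda c: c["expected"])
next_catalyst = schedule[0]["event"]
```

Sorting by date gives the PM the monitoring schedule directly, and an empty list is itself a finding: a valuation gap with nothing to close it.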


The Quick Evaluation Framework

When assessing any equity research report — from a sell-side analyst, a third-party provider, or your own team — run these eight questions:

| # | Question | What Failure Looks Like |
|---|----------|-------------------------|
| 1 | Is the variant perception stated explicitly? | Report agrees with the Street consensus; no differentiated view |
| 2 | Is the thesis falsifiable? | No conditions specified for being wrong |
| 3 | Are three valuation methods used and triangulated? | Single price target with no supporting framework |
| 4 | Are scenario probability weights assigned? | Three scenarios, no probabilities, bear case > current price |
| 5 | Are risks specific, with probability and magnitude? | Generic list: competition, regulation, macro |
| 6 | Is peer benchmarking quantitative with a documented peer set? | Narrative description of competitive position, no data |
| 7 | Are the thesis, valuation, and risk sections consistent? | Bullish thesis; DCF implies 5% upside; high-risk section |
| 8 | Is there a catalyst map with dates? | No specific upcoming events identified |

A report that passes all eight is institutional quality. A report that fails four or more is documentation, not research.
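The pass/fail rubric above can be expressed as a scorecard. A sketch; the criterion keys are shorthand labels for the eight questions, and the thresholds come straight from the rubric (zero failures is institutional quality, four or more is documentation):

```python
# Sketch: the eight-question screen as a scorecard.

CRITERIA = [
    "variant_perception", "falsifiable_thesis", "triangulated_valuation",
    "probability_weights", "substantive_risks", "quant_peer_benchmarking",
    "internal_consistency", "catalyst_map",
]

def classify(report):
    """report: dict mapping each criterion to True (pass) or False (fail).
    Missing criteria count as failures."""
    failures = sum(not report.get(c, False) for c in CRITERIA)
    if failures == 0:
        return "institutional quality"
    if failures >= 4:
        return "documentation, not research"
    return "mixed quality"

classify({c: True for c in CRITERIA})  # -> 'institutional quality'
classify({c: c == "variant_perception" for c in CRITERIA})  # 7 failures
```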

Why Consistency of Methodology Matters

One underappreciated dimension of research quality is consistency across the coverage universe. A PM who uses different research providers for different names faces a problem: the "Strong Buy" on Company A and the "Strong Buy" on Company B mean completely different things if the analysts used different methodologies, different scenario frameworks, and different conviction thresholds.

Institutional research desks solve this with house style guides — standardized section structure, defined rating criteria, mandated scenario formats. The result is that a rating can be compared across analysts and sectors, because the underlying process was the same.

This is also why systematic, methodology-driven research has an advantage over ad-hoc coverage: when the framework is fixed, every output is directly comparable to every other output. The conviction rating on NVIDIA and the conviction rating on Pfizer were produced by the same eight-criterion process, weighted identically.

See the methodology applied. Semper Signum reports are built on the same 22-section framework described in this post — variant perception, three-method valuation, explicit scenario modeling, and catalyst maps on every covered name. Request a sample report →
