What the Score Is — and Isn't
The El Dorado Score is a diagnostic instrument designed to estimate the share of recoverable revenue embedded within an independent hospitality operation's existing data. It measures reach: how far an operator can extend the value of what they already generate — messages, inquiries, bookings, reviews, calls — into measurable operational improvement.
The score is not a measure of operator quality, not a ranking mechanism, and not a forecast of revenue. It does not attempt to predict future performance. It is a structured estimate of present opportunity. It answers a narrower question: given the shape of this operation today, how much of its existing operational data can realistically be turned into improved conversion, sharper pricing, faster response, better language coverage, and recovery of currently-missed inquiries?
The score is diagnostic, not predictive, and operator-level, not comparative. Its output is a diagnosis, not a forecast.
Where the underlying calibration is a matter of judgement against qualitative evidence rather than precise statistical modelling, this document says so explicitly. Methodological transparency is part of the score's credibility, and the absence of false precision is part of the methodology, not a gap in it.
The Two Components

The El Dorado Score is composed of two equally weighted components, each contributing up to fifty points and summing to a total between zero and one hundred. The two components are deliberately separate because they capture different recovery levers, respond to different kinds of operator action, and unfold on different time horizons.
Automation Reach (up to 50 pts)
Automation Reach measures how much of an operator's day-to-day workload can be improved through faster response, better language coverage, clearer communication, and reduced manual handling of repeated questions. It reflects operational pressure — the volume, complexity, and fragmentation of guest interaction the operator currently absorbs.
Inputs that drive Automation Reach
High message volume relative to available time · Number of distinct guest languages · Number of active booking channels · Hours per week on guest communication
Automation Reach is the component most responsive to short-term action. Improvements in response timing, language coverage, and listing clarity translate into measurable gains within weeks. The curve is front-loaded by design — modest changes produce visible results quickly, after which gains plateau as core processes stabilise.
Analytical Reach (up to 50 pts)
Analytical Reach measures the depth of usable insight that exists in the operator's accumulated first-party data and the gap between that depth and the operator's current ability to read it. Where Automation Reach is about operational pressure, Analytical Reach is about data maturity — how much information has accumulated over the operator's history and how much of it is currently being exercised by tooling or analysis.
An operator with five years of operating history carries high Analytical Reach if that history sits unanalysed, and a more modest score if they already use a revenue management system that surfaces some of it. Analytical Reach grows with time. It compounds as data accumulates. Unlike Automation Reach, it responds slowly to change — but it represents the longer game and is the component most directly aligned with the El Dorado thesis.
Why separate them?
An operator scoring high on Automation Reach but lower on Analytical Reach is being told something specific: the immediate opportunity is in process improvements rather than deeper data analysis. An operator scoring high on Analytical Reach but lower on Automation Reach is being told the opposite. The diagnosis attached to each band is shaped by which component is the larger source of opportunity for the specific operator.
The Five Required Inputs
The score is calculated from five required inputs that an operator provides directly. Each was selected because it captures a structural aspect of the operation that materially shapes either Automation Reach, Analytical Reach, or both. The inputs are categorical rather than continuous — operator self-reporting is more reliable at the band level, and the underlying calibration does not justify continuous-input precision.
| Input | Bands | Primary contribution |
|---|---|---|
| Portfolio size | 1 / 2–5 / 6–20 / 21+ | Analytical Reach (cross-property data), Automation Reach (secondary) |
| Years operating | <1 / 1–2 / 3–4 / 5+ | Analytical Reach (data accumulation over time) |
| Revenue per property | Low / Medium / High (market-adjusted) | Both: demand intensity + marginal value of insight |
| Primary market | Country / sub-region | Calibrates sensitivity of other inputs; not scored directly |
| Booking channel mix | OTA-heavy / Balanced / Direct-heavy | Automation Reach (inquiry volume, language diversity) |
*All inputs are categorical bands — the underlying calibration does not support point-level precision.*
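For concreteness, the five required inputs can be encoded as categorical bands. This is a hypothetical encoding, not a published schema; the type and field names are invented, and the band labels simply mirror the table above.

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical encoding of the five required inputs as categorical bands.
# Names are illustrative, not an official schema.
PortfolioBand = Literal["1", "2-5", "6-20", "21+"]
YearsBand = Literal["<1", "1-2", "3-4", "5+"]
RevenueBand = Literal["low", "medium", "high"]  # market-adjusted
ChannelMix = Literal["ota_heavy", "balanced", "direct_heavy"]

@dataclass(frozen=True)
class RequiredInputs:
    portfolio_size: PortfolioBand
    years_operating: YearsBand
    revenue_per_property: RevenueBand
    primary_market: str  # e.g. "ES", "US"; calibrates other inputs, not scored
    channel_mix: ChannelMix

# María, from the worked profiles below: one property, four years, OTA-heavy.
# Her revenue band is not stated in the profile; "medium" is assumed here.
maria = RequiredInputs("1", "3-4", "medium", "ES", "ota_heavy")
```

The frozen dataclass keeps an input set immutable once captured, which suits a diagnostic snapshot taken at a single point in time.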
Portfolio size
Portfolio size is a proxy for both data volume and operational complexity. Larger portfolios generate richer datasets for cross-property comparison — the operator with twenty units can compare conversion patterns, language mix, and review themes across the portfolio in ways the single-property host structurally cannot.
Years operating
Years operating is one of the strongest single drivers of Analytical Reach because data accumulation is, in essence, a function of time. An operator in their first year does not yet have multi-year seasonal patterns to read. An operator at five-plus years has enough operational history that pattern recognition becomes the dominant available improvement lever.
Average annual revenue per property
Revenue per property acts as a proxy for demand intensity and booking value, feeding both components. Higher-revenue properties typically attract more frequent and higher-stakes inquiries (Automation Reach) and make the marginal value of any pattern recognition larger (Analytical Reach).
Primary market
Primary market does not directly add points to either component. It calibrates the sensitivity of the other inputs to the underlying market. The same number of distinct languages encountered means something different in Spain (high international volume, structurally constrained operator language coverage) than in the United States (dominant market language matches operator language for the bulk of demand).
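Structurally, a calibrating input acts as a multiplier on other contributions rather than adding points of its own. A minimal sketch, with multiplier values invented purely for illustration (the real calibration is not published):

```python
# Hypothetical market sensitivity multipliers: markets with high
# international volume amplify the language-diversity contribution;
# markets where the dominant language matches the operator's dampen it.
# The specific values here are illustrative only.
LANGUAGE_SENSITIVITY = {"ES": 1.3, "PT": 1.2, "US": 0.8}

def calibrated_language_points(base_points: float, market: str) -> float:
    """Scale the language-diversity contribution by market, keeping the
    result within a 12-point ceiling. Unknown markets pass through."""
    return min(base_points * LANGUAGE_SENSITIVITY.get(market, 1.0), 12.0)
```

Note that the ceiling is applied after the multiplier, so calibration can redistribute points within a component but never push it past its cap.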
Booking channel mix
OTA-heavy operations typically receive higher inquiry volume relative to booking volume, more language diversity in inquiries, and more fragmented conversation history across platforms — all feeding Automation Reach. They are also more likely to have data fragmented in ways that obscure patterns, feeding Analytical Reach as a secondary contribution.
The Three Optional Inputs
Three further inputs are surfaced behind a deeper-analysis toggle in the calculator interface. They are not required because operators can produce a meaningful score without them — but where provided, they materially refine the diagnosis. Skipping them replaces a direct measurement with a market-and-portfolio average.
Languages encountered (→ Automation Reach)
Higher language diversity increases communication friction, raises the share of inquiries handled outside the operator's primary working language, and amplifies the recoverable wedge from improved language coverage. An operator who reports six languages, in a market where their primary coverage is two, is being told something specific: a meaningful share of inquiry volume arrives in languages they cannot handle to the same standard as their primary one.
Current data tooling (→ Analytical Reach)
The more advanced the tooling, the more of the operator's accumulated data is already being exercised, and the smaller the reach the score should attribute to currently unread insight. An operator with five years of history and no tooling carries a much higher Analytical Reach than an operator running a mature RMS. This is the most counterintuitive input in the framework: better tooling produces a lower component score — because the score measures unrealised opportunity, not absolute capability.
Hours per week on guest communication (→ Automation Reach)
The strongest single driver of Automation Reach when present. Skipping this input replaces a direct measure with an estimate based on portfolio size, channel mix, and revenue per property — which captures the broad shape of communication pressure but loses the operator-specific signal. The marginal contribution of each additional reported hour is largest in the lower range and diminishes at the upper end, where additional hours indicate severity but do not proportionally raise the recoverable opportunity.
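The shape described here, with the largest marginal contribution at low hours and diminishing contribution at the high end, can be sketched as a saturating curve. The exponential form and the `halfway` parameter are assumptions; the document commits only to concavity and the roughly 20-point ceiling.

```python
import math

def hours_contribution(hours_per_week: float, ceiling: float = 20.0,
                       halfway: float = 8.0) -> float:
    """Illustrative concave mapping from reported communication hours to
    Automation Reach points. `halfway` is the hour count that earns half
    the ceiling; both defaults are assumed, not published, values."""
    rate = math.log(2) / halfway
    return ceiling * (1.0 - math.exp(-rate * max(hours_per_week, 0.0)))

# Marginal value of an hour shrinks as hours grow: the step from 0 to 4
# hours is worth more than the step from 36 to 40.
early_gain = hours_contribution(4) - hours_contribution(0)
late_gain = hours_contribution(40) - hours_contribution(36)
assert early_gain > late_gain
```

Any monotone concave curve with the same ceiling would serve the same diagnostic purpose; the exponential is chosen only for its simplicity.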
How the Score Is Calculated
Each input contributes to Automation Reach and/or Analytical Reach through a weighted range. The weights are calibrated by judgement against operational-lever evidence — not by regression on a proprietary outcome dataset — and the document is explicit about that throughout. What follows are contribution ceilings rather than precise coefficients, because the underlying calibration justifies the ceilings but not the precision a regression-derived coefficient would imply.
| Component | Input | Contribution ceiling |
|---|---|---|
| Automation Reach | Hours/week on guest communication | ~20 pts (concave curve; largest marginal gain in lower range) |
| Automation Reach | Language diversity | ~12 pts |
| Automation Reach | Channel mix (OTA-heavy) | ~10 pts |
| Automation Reach | Portfolio size (secondary) | ~8 pts |
| Analytical Reach | Years operating | ~15 pts (largest marginal gain between years 1–3) |
| Analytical Reach | Portfolio size | ~12 pts |
| Analytical Reach | Tooling gap | ~12 pts (no tooling = upper end; mature RMS = lower end) |
| Analytical Reach | Revenue per property | ~8 pts |
| Analytical Reach | Channel mix (secondary) | ~3 pts |
*Total = Automation Reach + Analytical Reach, capped at 100. Both components are equally weighted at 50 points each.*
The total score is the simple sum of the two components. The two are equally weighted at 50 points each rather than asymmetrically because the operator-level evidence does not justify weighting one type of recoverable opportunity above the other across the population.
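The summation itself is simple enough to state directly. This sketch restates the rule above (equal 50-point component caps, total capped at 100) and checks it against the three worked profiles later in the section; it is not the official implementation.

```python
def el_dorado_score(automation_reach: float, analytical_reach: float) -> int:
    """Equal-weighted sum of the two components, each clamped to 0-50,
    with the total capped at 100."""
    ar = min(max(automation_reach, 0.0), 50.0)
    an = min(max(analytical_reach, 0.0), 50.0)
    return round(min(ar + an, 100.0))

assert el_dorado_score(32, 24) == 56  # María
assert el_dorado_score(26, 36) == 62  # Carlos
assert el_dorado_score(38, 18) == 56  # Ana
```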
The Five Interpretive Bands
The combined score is presented in five bands, each labelled to align with the El Dorado thesis and each carrying distinct diagnostic content shaped by which component contributes more to the total.
| Score | Band | What it means |
|---|---|---|
| 80 – 100 | Extraction-ready | Strong signals, clear opportunities. Small improvements translate quickly into measurable gains. |
| 60 – 79 | High-yield | Substantial opportunity; focus on the dominant component first. |
| 40 – 59 | Untapped | Useful data has accumulated but is not being used effectively. |
| 20 – 39 | Surface-level | Limited data usage; most decisions rest on instinct. |
| 0 – 19 | Unmined | Data not yet deep enough to support pattern recognition. |
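The band lookup is mechanical; a minimal sketch matching the table above:

```python
def band(score: int) -> str:
    """Map a 0-100 El Dorado Score to its interpretive band."""
    if score >= 80:
        return "Extraction-ready"
    if score >= 60:
        return "High-yield"
    if score >= 40:
        return "Untapped"
    if score >= 20:
        return "Surface-level"
    return "Unmined"

assert band(56) == "Untapped"    # María and Ana
assert band(62) == "High-yield"  # Carlos
```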
Extraction-ready (80+)
These operators already carry strong data signals and clear opportunities to act on them. The diagnosis emphasises specificity: pick the highest-frequency unanswered guest question and resolve it at the source within the next thirty days; identify the response-time cluster that aligns with the operator's highest-value inquiry hours; choose the most-frequent non-primary language and bring response quality up to primary-language standard.
High-yield (60–79)
Substantial recoverable opportunity sits in the operation, with focused changes available to unlock it. The diagnosis emphasises prioritisation: operators in this band typically have more visible opportunities than capacity to pursue. Start with whichever component scores higher; marginal gains are largest when effort targets the dominant source of improvement.
Untapped (40–59)
Useful data has accumulated but is not being used effectively. The diagnosis emphasises pattern recognition as the entry point: operators in this band typically benefit more from establishing a weekly practice of asking questions of their own data than from any single targeted intervention. The structural constraint is the absence of a habit rather than the absence of opportunity.
Surface-level (20–39)
The operation is active, data usage is limited, and most decisions still rest on instinct rather than pattern recognition. The diagnosis is the most foundational of the five: establish baseline visibility before attempting deeper analytical work. Start with recurring-question identification and response-timing measurement.
Unmined (0–19)
The operator is generating data but it is not yet usable in a meaningful way — the typical position for genuinely early-stage operations. Establish basic data hygiene and revisit the score after six to twelve additional months of operation.
Three Operator Profiles
Three operator profiles illustrate how the framework behaves across different shapes of independent hospitality operation.
María — single property, Málaga
María has been operating for four years, working primarily in Spanish and English, with around 20% of her guest inquiries arriving in German, French, or Italian. Her bookings come predominantly through Airbnb and Booking.com. She uses no formal tooling beyond spreadsheets and reports spending roughly twelve hours per week on guest communication.
Automation Reach: 32 · Analytical Reach: 24 · Total: 56
Driven by high communication hours, meaningful language diversity, and an OTA-heavy channel mix. The single-property scale limits comparative insight, but four years of history is meaningful. María's diagnosis: language coverage and listing clarity on her most-asked questions are the highest-yield moves.
Carlos — twelve properties, Barcelona & Costa Brava
Carlos has six years of operating history, runs a balanced channel mix, uses a property management system supplemented by spreadsheets, and reports moderate weekly communication hours because much of the routine response work is handled by his small team.
Automation Reach: 26 · Analytical Reach: 36 · Total: 62
Six years of history and twelve properties produce a rich dataset. The PMS exercises some of it but not the deepest analytical layer. Carlos's diagnosis: comparative analysis across his portfolio is likely to reveal property-level insights his current tooling does not surface.
Ana — three properties, Lisbon
Ana has been running for two years, works in a heavily multilingual market with five common guest languages, runs an OTA-heavy mix, uses no formal tooling, and reports high communication hours given her relatively small portfolio.
Automation Reach: 38 · Analytical Reach: 18 · Total: 56
Two years of history is thin for cross-period pattern recognition. Ana's recoverable opportunity is concentrated almost entirely in Automation Reach — communication and language coverage are the immediate levers. Analytical Reach becomes the dominant lever once she has another 12–24 months of history.
All three profiles land in similar bands but with very different underlying component shapes, and the diagnosis attached to each is correspondingly different. This is the framework working as intended.
What the Score Does Not Do
The El Dorado Score does not predict revenue outcomes for the operator who runs it. It is a diagnostic estimate of opportunity, not a forecast of results. It does not rank operators against one another — there is no leaderboard, no percentile placement, no comparative claim about performance.
It does not measure operational quality. An operator with strong fundamentals and a high-converting product can score modestly if their data sits unread; an operator with a smaller, more recently established business can score well if their data is organised and their habit of asking questions of it is strong. It does not replace operator judgement or strategy.
Methodological notes
- Calibration is grounded in published operational-lever findings, not regression against a proprietary outcome dataset.
- Validated for independent operators in European and US markets only. Outside these markets, use directionally.
- Bands are deliberately wide — the choice between a 56 and 58 score is not the kind of distinction the framework is designed to support.
- Optional inputs improve precision but the score functions meaningfully without them.
