Measures and constructs
The estimator measures one construct: accumulated cognitive reserve, defined as the lifetime cognitive resilience built through education, occupational complexity, cognitive leisure activity, social engagement, multilingualism, and physical activity with cognitive demand. The construct is operationalized through six domains, each scored 0–100, then combined into a global Cognitive Reserve Index (CRI) on a 0–100 scale.
The construct is treated as a proxy-based reserve measure — distinct from residual-based reserve measures that require neuroimaging or biomarkers. Proxy-based measures, including the Cognitive Reserve Index questionnaire (Nucci 2012, Mondini 2023), the Lifetime of Experiences Questionnaire (Valenzuela & Sachdev 2007), and the Cognitive Reserve Assessment Scale in Health (Lavrencic 2022), capture self-reported behavior across reserve-building domains. The 2022 Kartschmit systematic review of cognitive-reserve questionnaires identified 17 instruments in this category, with substantial overlap in core constructs but variation in domain coverage and weighting.
What this estimator does not measure: current cognitive function (no memory, attention, processing-speed, or executive-function testing), brain pathology (no biomarkers or imaging), individual genetic risk factors (no APOE genotype or polygenic risk score), and brain maintenance (a related but distinct construct that pure aerobic exercise primarily contributes to). The Stern 2020 consensus whitepaper formally separates these constructs, and we treat them as out of scope for this estimator.
Instrument structure
The estimator is a 27-item self-report instrument administered as a 7-step web wizard. Items are distributed across six domains as follows:
| Domain | Item count | Item types | Weight |
|---|---|---|---|
| Education | 2 | 1 single-select (10 levels), 1 multi-select (4 options) | 0.28 |
| Occupational complexity | 1–8 | Up to 4 occupation entries × 2 sub-items each (single-select + numeric) | 0.22 |
| Cognitive leisure | 7 | 7 single-select frequency ratings (5-tier) | 0.20 |
| Social engagement | 3 | 1 single-select (5 options), 1 multi-select (6 options), 1 single-select (5 tiers) | 0.14 |
| Multilingualism | 1–3 | 1 single-select count, 2 conditional single-selects (use, age of acquisition) | 0.09 |
| Physical with cognitive demand | 1–2 | 1 multi-select (6 options), 1 conditional single-select (4 tiers) | 0.07 |
Items are distributed across 6 wizard screens (after the interstitial). Conditional sub-items appear only when the parent item meets specific criteria — multilingualism use and age questions appear only when ≥2 languages are reported; physical activity frequency appears only when at least one qualifying activity is selected.
Estimated administration time: 4–6 minutes for typical users. Variance is largely driven by the number of occupation entries provided (1 occupation: ~4 min; 4 occupations: ~6 min).
Scoring algorithm — full pseudocode
The scoring algorithm is fully documented below. Each domain produces a score in the 0–100 range, then domain scores are combined via a weighted average to produce the global CRI.
Domain 1 — Education
// Item 1.1 lookup table (points for highest level of education completed)
EDU_LEVEL_POINTS = {
no_schooling: 0, some_primary: 12, primary_complete: 25,
some_secondary: 40, secondary_complete: 55,
some_post_secondary: 65, associates: 70,
bachelors: 78, masters: 88, doctoral: 100
}
// Item 1.2 — additional structured learning (multi-select; point values below, total capped at +20)
EDU_EXTRA_POINTS = {
certifications: 15, // "substantial certifications"; the largest single bonus
self_directed: 5,
apprenticeship: 5,
postdoc: 5
}
domain_1_education = min(
EDU_LEVEL_POINTS[user_level] + min(sum(EDU_EXTRA_POINTS[selected]), 20),
100
)
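As a concrete sketch, the Domain 1 logic above translates directly to Python (the function name `score_education` is illustrative, not part of the instrument; the lookup tables are copied from the pseudocode):

```python
EDU_LEVEL_POINTS = {
    "no_schooling": 0, "some_primary": 12, "primary_complete": 25,
    "some_secondary": 40, "secondary_complete": 55,
    "some_post_secondary": 65, "associates": 70,
    "bachelors": 78, "masters": 88, "doctoral": 100,
}
EDU_EXTRA_POINTS = {"certifications": 15, "self_directed": 5,
                    "apprenticeship": 5, "postdoc": 5}

def score_education(level: str, extras: list[str]) -> int:
    # Extra-learning bonus is capped at +20 before the overall cap at 100.
    bonus = min(sum(EDU_EXTRA_POINTS[e] for e in extras), 20)
    return min(EDU_LEVEL_POINTS[level] + bonus, 100)
```

Note that a doctoral degree alone already reaches 100, so the extras only move the score below that level.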
Domain 2 — Occupational complexity
// Each occupation entry: class score × years
OCC_CLASS_POINTS = { class_1: 20, class_2: 40, class_3: 60, class_4: 80, class_5: 100 }
total_years = sum(years across all occupation entries)
total_contribution = sum(class_score × years for each occupation)
if total_years > 0:
    domain_2_occupation = round(total_contribution / total_years)
else:
    domain_2_occupation = 0 // no work history; legitimate, no penalty in framing
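A minimal Python rendering of the years-weighted average (the `(class, years)` tuple interface is an assumption for illustration):

```python
OCC_CLASS_POINTS = {"class_1": 20, "class_2": 40, "class_3": 60,
                    "class_4": 80, "class_5": 100}

def score_occupation(entries: list[tuple[str, float]]) -> int:
    # entries: (occupation class, years held) pairs, up to 4 on the wizard
    total_years = sum(years for _, years in entries)
    if total_years == 0:
        return 0  # no work history: legitimate, framed without penalty
    contribution = sum(OCC_CLASS_POINTS[c] * y for c, y in entries)
    return round(contribution / total_years)
```

A mixed career of 28 years at Class 2 plus 2 years at Class 5 averages to round(1320 / 30) = 44, so a brief high-class stint cannot dominate the career-long average.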
Domain 3 — Cognitive leisure
// 7 activities, each scored on 5-tier frequency, then summed and capped
LEISURE_FREQ_POINTS = {
never: 0, occasionally: 5, monthly: 10, weekly: 16, daily: 20
}
domain_3_leisure = min(
sum(LEISURE_FREQ_POINTS[freq_response] for each of 7 activities),
100
)
// Maximum theoretical sum without cap = 7 × 20 = 140
// Cap at 100 means: 5+ activities at high frequency reaches the maximum
// This rewards diversity over single-activity gaming
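The same logic in Python, assuming one frequency response per activity:

```python
LEISURE_FREQ_POINTS = {"never": 0, "occasionally": 5, "monthly": 10,
                       "weekly": 16, "daily": 20}

def score_leisure(freqs: list[str]) -> int:
    # Sum across the 7 activities, capped at 100 (uncapped maximum would be 140)
    return min(sum(LEISURE_FREQ_POINTS[f] for f in freqs), 100)
```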
Domain 4 — Social engagement
// Three sub-items with internal weights (0.40, 0.30, 0.30)
CONFIDANTS_POINTS = { 0: 0, 1: 25, '2-3': 50, '4-6': 75, '7+': 100 }
GROUP_POINTS_PER = 20 // each membership selected, capped at 100
SOCIAL_FREQ_POINTS = { rarely: 0, monthly: 30, weekly: 60, several_per_week: 85, daily: 100 }
confidants_score = CONFIDANTS_POINTS[item_4_1]
groups_score = min(count_of_selected_groups × 20, 100) // 'none' selection clears all
freq_score = SOCIAL_FREQ_POINTS[item_4_3]
domain_4_social = round(
confidants_score × 0.40 + groups_score × 0.30 + freq_score × 0.30
)
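A sketch of the weighted sub-item combination (lookup keys normalized to strings here; `score_social` is an illustrative name):

```python
CONFIDANTS_POINTS = {"0": 0, "1": 25, "2-3": 50, "4-6": 75, "7+": 100}
SOCIAL_FREQ_POINTS = {"rarely": 0, "monthly": 30, "weekly": 60,
                      "several_per_week": 85, "daily": 100}

def score_social(confidants: str, group_count: int, freq: str) -> int:
    groups_score = min(group_count * 20, 100)  # 20 points per membership
    return round(CONFIDANTS_POINTS[confidants] * 0.40
                 + groups_score * 0.30
                 + SOCIAL_FREQ_POINTS[freq] * 0.30)
```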
Domain 5 — Multilingualism
// Three-factor multiplicative: count × use × age
LANG_COUNT_POINTS = { 1: 0, 2: 50, 3: 80, '4+': 100 }
LANG_USE_MULTIPLIER = { rare: 0.30, occasional: 0.60, regular: 0.85, daily: 1.00 }
LANG_AGE_MULTIPLIER = { childhood: 1.00, adolescence: 0.85, early_adult: 0.70, mid_late: 0.60 }
if lang_count_points == 0:
    domain_5_multilingual = 0
else:
    domain_5_multilingual = round(
        lang_count_points × use_multiplier × age_multiplier
    )
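The multiplicative structure in Python (the defaults exist only to make the monolingual call valid; on the wizard the use and age sub-items are conditional):

```python
LANG_COUNT_POINTS = {"1": 0, "2": 50, "3": 80, "4+": 100}
LANG_USE_MULTIPLIER = {"rare": 0.30, "occasional": 0.60,
                       "regular": 0.85, "daily": 1.00}
LANG_AGE_MULTIPLIER = {"childhood": 1.00, "adolescence": 0.85,
                       "early_adult": 0.70, "mid_late": 0.60}

def score_multilingual(count: str, use: str = "daily",
                       age: str = "childhood") -> int:
    points = LANG_COUNT_POINTS[count]
    if points == 0:
        return 0  # monolingual: conditional sub-items never appear
    return round(points * LANG_USE_MULTIPLIER[use] * LANG_AGE_MULTIPLIER[age])
```

Because the factors multiply, a second language acquired in mid-life and used rarely scores round(50 × 0.30 × 0.60) = 9, versus 50 for childhood bilingualism with daily use.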
Domain 6 — Physical activity with cognitive demand
// Multi-select activity points + conditional frequency multiplier
PHYS_ACTIVITY_POINTS = {
dance: 30, martial_arts: 30, team_sports: 25, fine_motor: 20, mindful: 15
}
PHYS_FREQ_MULTIPLIER = { rare: 0.30, occasional: 0.50, regular: 0.75, frequent: 1.00 }
if 'none' selected:
    domain_6_physical = 0
else:
    raw = min(sum(PHYS_ACTIVITY_POINTS[selected]), 100)
    domain_6_physical = round(raw × frequency_multiplier)
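And the physical-domain logic, with the 'none' selection modeled as an empty list (the default frequency is an illustration convenience, since the sub-item is conditional):

```python
PHYS_ACTIVITY_POINTS = {"dance": 30, "martial_arts": 30, "team_sports": 25,
                        "fine_motor": 20, "mindful": 15}
PHYS_FREQ_MULTIPLIER = {"rare": 0.30, "occasional": 0.50,
                        "regular": 0.75, "frequent": 1.00}

def score_physical(activities: list[str], freq: str = "frequent") -> int:
    if not activities:  # 'none' selected; frequency sub-item never appears
        return 0
    raw = min(sum(PHYS_ACTIVITY_POINTS[a] for a in activities), 100)
    return round(raw * PHYS_FREQ_MULTIPLIER[freq])
```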
Global Cognitive Reserve Index
WEIGHTS = {
education: 0.28, occupation: 0.22, leisure: 0.20,
social: 0.14, multilingual: 0.09, physical: 0.07
}
// Weights sum to exactly 1.00
global_CRI = round(
domain_1_education × 0.28 +
domain_2_occupation × 0.22 +
domain_3_leisure × 0.20 +
domain_4_social × 0.14 +
domain_5_multilingual × 0.09 +
domain_6_physical × 0.07
)
// Result: integer in range 0-100
Score-band assignment
function get_band(cri):
    if cri <= 24: return 'Foundational'
    if cri <= 44: return 'Developing'
    if cri <= 64: return 'Established'
    if cri <= 79: return 'Strong'
    return 'Exceptional'
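Putting the pieces together, the global CRI and band assignment can be sketched end-to-end (domain scores passed as a dict keyed by the weight names; this interface is an illustration, not the production code):

```python
WEIGHTS = {"education": 0.28, "occupation": 0.22, "leisure": 0.20,
           "social": 0.14, "multilingual": 0.09, "physical": 0.07}

def global_cri(domain_scores: dict[str, float]) -> int:
    # Weighted average; weights sum to 1.00, so the result stays in 0-100
    return round(sum(domain_scores[k] * w for k, w in WEIGHTS.items()))

def get_band(cri: int) -> str:
    if cri <= 24: return "Foundational"
    if cri <= 44: return "Developing"
    if cri <= 64: return "Established"
    if cri <= 79: return "Strong"
    return "Exceptional"
```

For example, a profile scoring 55, 60, 50, 50, 0, 0 across the six domains yields a CRI of 46, landing in Established.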
Modifiable-factors callout selection
// Surface 2-3 lowest-scoring modifiable domains
// Education and occupation excluded (largely fixed in adulthood)
modifiable_keys = ['leisure', 'social', 'multilingual', 'physical']
sorted = sort_ascending(modifiable_keys by domain_score)
if any modifiable score < 70:
    surface = sorted[0:3] // 3 lowest
else:
    surface = sorted[0:2] // 2 lowest
// Each surfaced domain shows evidence-anchored guidance text
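The selection rule is small enough to show whole (dictionary interface assumed for illustration):

```python
def select_callouts(domain_scores: dict[str, int]) -> list[str]:
    # Education and occupation are excluded: largely fixed in adulthood
    modifiable = ["leisure", "social", "multilingual", "physical"]
    ranked = sorted(modifiable, key=lambda k: domain_scores[k])
    n = 3 if any(domain_scores[k] < 70 for k in modifiable) else 2
    return ranked[:n]
```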
Validation strategy
This is an original synthesis instrument rather than an implementation of an externally-validated questionnaire. Validation is staged across four levels:
Level 1: Construct validity through evidence-base derivation
Each domain weight is derived from pooled effect sizes in the meta-analytic literature. Education's 0.28 weight reflects the largest effect sizes in cognitive reserve research (OR 2.61 prevalence in Meng & D'Arcy 2012, n=437,477; PAF 9.3% weighted in Mukadam 2024). Each subsequent weight is derived analogously. The full weight derivation is documented in Section 2 of the tool page. Construct validity rests on the proposition that reserve-building factors with larger effect sizes in the underlying research should carry larger weights in the composite — the alternative (equal weighting) would be empirically inconsistent with the evidence.
Level 2: Face validity through synthetic-profile sanity testing
Four synthetic user profiles spanning the engagement spectrum were tested through the scoring algorithm during development. Profile A (recent immigrant, limited formal education, strong community and multilingualism) → CRI 39, Developing. Profile B (US-typical mid-career professional, monolingual, moderate engagement) → CRI 59, Established. Profile C (high-engagement retired professor, broad domain coverage) → CRI 92, Exceptional. Profile D (minimal engagement across all domains) → CRI 20, Foundational. The distribution check confirmed expected ordering (D < A < B < C), full 0–100 range was reachable, and no scoring pathology emerged.
Level 3: Convergent validity (planned, future)
Convergent validity assessment is planned for v2.0 of the instrument. The plan involves administering both the LBL-CRE and the s-CRIq (Mondini 2023) to a sample of approximately 400 participants and computing rank-order correlation. Hypothesized correlation: r = 0.65–0.80 on the basis of overlapping domain coverage (education, occupation, leisure are common to both; multilingualism, social, physical are unique to LBL-CRE). The s-CRIq is the most-validated proxy reserve instrument and serves as an appropriate convergent benchmark. This validation is not yet conducted and is not claimed.
Level 4: Predictive validity (planned, long-term)
Predictive validity — whether LBL-CRE scores predict cognitive trajectory — would require a longitudinal study with cognitive outcomes at 5+ year follow-up. This is a research-grade undertaking outside the scope of an open-access consumer tool. It is documented here as a future direction, not a current claim.
What this means for users: The tool's scoring is internally consistent and evidence-derived but has not been validated against the gold-standard s-CRIq or against longitudinal cognitive outcomes. Users should treat the score as a structured self-assessment of accumulated reserve-building behavior, not as a clinically validated test.
Score-band derivation
The five score bands (Foundational 0–24, Developing 25–44, Established 45–64, Strong 65–79, Exceptional 80–100) are author-chosen for accessibility. They are not empirically-derived percentile cuts in a reference population, and they are not clinical risk strata.
Why these specific cutoffs
The cutoffs are anchored on CRI 55, the conceptual center of "typical accumulation". The Foundational cut at 24 captures scores reachable only with very low scores across all six domains. The Established band (45–64) straddles CRI 55, providing a 20-point band that captures average accumulated reserve. Strong (65–79) reflects above-average accumulation requiring deliberate domain engagement. Exceptional (80–100) requires sustained high engagement across multiple domains; in the synthetic profile testing, only Profile C (high-engagement retired professor with PhD, sustained Class 5 work, daily leisure across multiple activities) reached this band.
What the bands do not represent
The bands are not population percentiles. We do not have norm-referenced data for the LBL-CRE in any reference population. Users in the "Strong" band should not infer that 80% of the general population scores below them — that inference would require population calibration that has not been conducted. The bands are descriptive shorthand for the score range, calibrated against the synthetic profile testing, not against measured population distributions.
Why the bands have neutral, descriptive labels
Earlier drafts considered numerical labels ("Tier 1, Tier 2, ..."), competitive labels ("Average, Above Average, ..."), or clinical-sounding labels ("Below threshold, Within normal range, ..."). All were rejected. Numerical labels obscure interpretation. Competitive labels imply percentile-norming we have not conducted. Clinical labels misrepresent the buffer concept as a risk stratum. The chosen labels (Foundational, Developing, Established, Strong, Exceptional) describe accumulation state without implying risk or comparison to a norm — matching the buffer-not-risk-score framing throughout the tool.
Limitations
The estimator inherits limitations common to the proxy-based reserve literature, plus several specific to its construction.
Self-report limitations
Like all questionnaire-based reserve instruments, the LBL-CRE relies on self-report. Self-report is subject to recall bias (especially for occupation and leisure questions covering multiple decades), social-desirability bias (over-reporting cognitively-valued activities), and self-classification error (especially in the work-complexity classification). The 2022 Kartschmit systematic review noted these limitations apply to the entire questionnaire-based reserve literature.
Domain-specific limitations (carried forward from tool page)
Education effect sizes vary by region and cohort (Lancet 2024 PAF 5–7% in HIC, up to 15.4% in LMIC); the instrument uses an estimate within this range. Occupational classification is coarsened to 5 classes rather than the s-CRIq's 6,000-job ISCO-08 lookup, trading precision for usability. Leisure self-report has the recall-bias issue noted above. Multilingualism literature has substantial methodological heterogeneity (de Bruin 2015 noted publication bias). Physical-activity scoring deliberately excludes pure aerobic activity, which conflates with brain maintenance rather than reserve.
Construct-level limitations
The estimator does not capture all reserve-relevant factors identified in the literature. Musical training across the lifespan (Hanna-Pladdy 2011), early-life cognitive stimulation prior to formal schooling, premorbid IQ, and certain personality traits (notably conscientiousness and openness to experience) are not directly captured. The instrument operationalizes the most well-evidenced and commonly-measured proxies; it does not claim completeness.
Validity limitations
The estimator has not been validated against the s-CRIq or other gold-standard proxy reserve instruments (planned for v2.0). It has not been longitudinally validated against cognitive outcomes (research-grade undertaking outside the scope of a consumer tool). Score interpretation should be calibrated accordingly: this is a structured self-assessment of reserve-building behavior, not a clinically validated predictor.
Bands are not percentiles
As noted in Section 5, the score bands are author-chosen for accessibility, not derived from a reference population. Users should not infer percentile rank from band membership.
Independent review
The instrument design and methodology have not yet undergone formal external peer review. The development process included internal review against the cognitive reserve literature documented in Section 3 of the tool page (12 primary references and approximately 100 supporting studies cited across the underlying meta-analyses).
We welcome correspondence from researchers in the cognitive reserve, dementia prevention, and cognitive aging fields. Substantive critique can be submitted via the corrections form. Where critique identifies methodological errors or evidence-base updates, the instrument will be revised in subsequent versions, with changes documented in the version log below.
Version log
v1.0 · May 4, 2026 · Initial release
Six-domain instrument with 27 items. Evidence-weighted scoring (education 0.28, occupation 0.22, leisure 0.20, social 0.14, multilingualism 0.09, physical 0.07). Five-band score interpretation (Foundational, Developing, Established, Strong, Exceptional). Modifiable-factors callout surfacing 2–3 lowest-scoring modifiable domains with evidence-anchored guidance. Synthetic-profile sanity testing across four engagement profiles passed.
Key terms
Brain reserve. The static structural capacity of the brain — neuron count, synaptic density, brain volume — that varies on the basis of genetic and developmental factors and is largely fixed in adulthood (Stern 2020).
Brain maintenance. The preservation of brain integrity in older age — slower atrophy, fewer lesions, intact white matter (Stern 2020). Aerobic exercise primarily contributes to this construct.
Cognitive reserve. The dynamic, modifiable resource: the brain's learned capacity to use existing networks more efficiently, or to recruit alternative networks when primary ones fail (Stern 2002, 2009, 2020). This is what the LBL-CRE estimates.
Proxy-based reserve measures. Lifestyle and demographic variables used as indicators of reserve accumulation. The CRIq (Nucci 2012), LEQ (Valenzuela & Sachdev 2007), and the LBL-CRE are proxy-based instruments.
Residual-based reserve measures. Cognitive performance residual after accounting for brain pathology measured by neuroimaging or biomarkers. More precise than proxy-based measures but require imaging data.
Population attributable fraction (PAF). The proportion of dementia cases in a population that could be prevented if a specific risk factor were eliminated. The Lancet 2024 Commission identifies 14 modifiable risk factors with combined PAF approximately 45%.
Hazard ratio (HR). The ratio of the hazard rates corresponding to two groups (e.g., exposed vs unexposed). HR < 1 indicates protection; HR = 1 indicates no effect; HR > 1 indicates increased risk.
Odds ratio (OR). The ratio of the odds of an outcome in two groups. Like HR, OR < 1 indicates protection.
Spanish translation status
A Spanish-language version of the LBL-CRE is planned for late 2026. The translation will follow the Spanish-2 protocol used across LifeByLogic tools: initial translation by a native Spanish-speaking translator with review by a native Spanish-speaking content reviewer, followed by validation against equivalent Spanish-language cognitive reserve research (e.g., the Spanish CRIq validation work by Rami and colleagues). The Spanish version will be hosted at /es/brain-lab/estimador-de-reserva-cognitiva/ when released.
Items requiring careful translation include the 5-class occupation self-classification (where job-class examples need locally-relevant Spanish-speaking-population anchors rather than direct translation of US job titles), the cultural-activities item in the leisure domain (which varies meaningfully across cultures), and the multilingualism domain itself (where the framing of "second language" differs in Spanish-speaking populations where bilingualism is normative).
Methodology FAQ
Recurring questions about the methodology, with concrete answers grounded in the evidence base. These are not the same as the user-facing FAQ on the tool page — these address technical and methodological questions a researcher, clinician, or careful reader would ask.
How were the six domain weights derived?
Each weight is derived from pooled effect sizes in the meta-analytic literature for that domain. Education's 0.28 reflects OR 2.61 (prevalence) and 1.88 (incidence) in Meng & D'Arcy 2012 (n=437,477) and pooled PAF 9.3% in Mukadam 2024. Occupational complexity's 0.22 reflects HR 0.95 in Hyun 2022 plus 28% mediation of education's effect through occupation in COSMIC. Cognitive leisure's 0.20 reflects HR 0.58 in Su 2022 (n=2,154,818, 38 studies) and HR 0.93 per activity-day per week in Verghese 2003 NEJM. Social engagement's 0.14 reflects HR 0.65–0.71 in Sommerlad 2023 ELSA. Multilingualism's 0.09 reflects 4–5 year delay in dementia onset across multiple meta-analyses. Physical activity with cognitive demand's 0.07 reflects evidence specifically distinguishing cognitive-component activity from pure aerobic exercise (Klimova 2020, Verghese 2003). The weights sum to exactly 1.00 with the top three domains (education, occupation, leisure) carrying 70% of the score, matching their dominance in the evidence base.
Why six domains rather than the three used in the CRIq?
The CRIq's three-domain structure (education, work, leisure) is well-validated but does not capture the full set of contributors identified in the cognitive reserve literature. Multilingualism produces a 4–5 year delay in dementia onset across multiple meta-analyses but is not a CRIq domain. Social engagement appears as a distinct Lancet Commission risk factor with its own effect size (HR 0.65–0.71 in joint models that already control for cognitive activity). Physical activity with cognitive demand has evidence distinct from pure aerobic activity (Verghese 2003: cognitive HR 0.93, physical HR 1.00). Capturing each as a distinct domain is more faithful to the underlying evidence than collapsing them into the CRIq's three categories. The six-domain structure is also closer to the Lavrencic 2022 CRASH instrument, which explicitly identified multidimensional structure beyond the CRIq's three domains.
Why are pure aerobic activities like running excluded from Domain 6?
The Stern 2020 consensus whitepaper formally separates cognitive reserve from brain maintenance. Pure aerobic activity contributes substantially to brain maintenance — preservation of brain integrity, reduced atrophy, improved cerebrovascular health (Erickson 2011, Northey 2018) — but the evidence specifically for cognitive reserve favors activities combining physical and cognitive demand. Verghese 2003 NEJM is the cleanest empirical evidence for this distinction: cognitive-activity scores predicted dementia risk reduction (HR 0.93 per activity-day per week, 95%CI 0.90–0.97) while physical-activity scores measured in the same instrument did not (HR 1.00). Klimova 2020 and Bhuachalla 2020 confirmed: dance and learning-intensive moderate-intensity activity outperformed pure aerobic exercise for cognitive outcomes. Including pure aerobic activity in Domain 6 would conflate two mechanisms the field carefully distinguishes. A user who runs 30 miles per week scores 0 on Domain 6, which is correct given the construct definition; their aerobic exercise contributes to brain maintenance, a related but distinct construct outside this estimator's scope.
Are the score bands percentile-norms?
No. The five score bands (Foundational 0–24, Developing 25–44, Established 45–64, Strong 65–79, Exceptional 80–100) are author-chosen for accessibility and centered around CRI 55 as the conceptual midpoint of typical accumulation. They are not empirically-derived population percentiles and not clinical risk strata. We do not have norm-referenced data for the LBL-CRE in any reference population. A user in the "Strong" band should not infer that 80% of the general population scores below them — that inference would require population calibration that has not been conducted. The bands are descriptive shorthand for the score range, calibrated against synthetic profile testing during development, not against measured population distributions.
What is the validation status of the LBL-CRE?
Validation is staged across four levels. Level 1 (construct validity through evidence-base derivation) is complete — every domain weight is derived from cited meta-analytic effect sizes documented in Section 3 of the tool page. Level 2 (face validity through synthetic-profile sanity testing) is complete — four synthetic profiles spanning the engagement spectrum produce expected ordering with full 0–100 range reachable. Level 3 (convergent validity against the s-CRIq) is planned for v2.0 with a proposed n=400 sample and hypothesized r=0.65–0.80 based on overlapping domain coverage with the CRIq. Level 4 (predictive validity through longitudinal follow-up) is documented as a future research direction, not a current claim. Users should treat the score as a structured self-assessment of accumulated reserve-building behavior, not as a clinically validated test.
What is the difference between proxy-based and residual-based reserve measures?
Proxy-based reserve measures use lifestyle and demographic variables as indicators of reserve accumulation. The CRIq (Nucci 2012), LEQ (Valenzuela & Sachdev 2007), CRASH (Lavrencic 2022), and the LBL-CRE are proxy-based instruments. Residual-based reserve measures compute reserve as the cognitive performance residual after accounting for brain pathology measured by neuroimaging or biomarkers. Residual-based measures are more precise: Nelson 2021's meta-analysis found pooled HR 0.62 for residual-based reserve and HR 0.48 for proxy-based reserve in predicting MCI or dementia incidence. Proxy-based measures trade some precision for accessibility — the LBL-CRE is administrable in 4–6 minutes by anyone with a web browser; residual-based measures require MRI imaging.
How does the years-weighted occupational scoring handle edge cases?
The occupation domain computes a years-weighted average of class scores rather than a sum, matching the CRIq's lifetime-accumulation logic. Weighting by years means a brief high-class stint cannot dominate a longer career: 30 years at Class 4 averages to 80, while 28 years at Class 2 plus 2 years at Class 5 averages to round((40×28 + 100×2) / 30) = 44. (A single brief entry is the degenerate case: 2 years at Class 5 with no other entries still averages to 100, since there is nothing else for the weighting to balance against.) Users with no significant work history score 0 on this domain; the result framing notes this without penalty, since many legitimate trajectories (early-career, students, full-time caregivers, retirees with brief work history) produce low occupation scores, and reserve from other domains can compensate. Users currently employed count years to the present.
Why is the leisure domain capped at 100 rather than allowed to sum freely?
Maximum theoretical sum without cap = 7 activities × 20 points (daily frequency) = 140. Cap at 100 means engaging in five or more activities at moderate-to-high frequency reaches the maximum. This rewards diversity of cognitive engagement over single-activity gaming, matching evidence from Sommerlad 2023's ELSA analysis (n=7,917) showing high-engagement profiles across multiple activities had HR 0.58 for dementia, with diminishing marginal returns at very high single-activity frequency. The cap also prevents the leisure domain from disproportionately dominating the global CRI when the user reports very high engagement.
How is the modifiable-factors callout selected?
The callout surfaces the two to three lowest-scoring modifiable domains. Education and occupational complexity are excluded from this surfacing because they are largely fixed in adulthood — surfacing them would suggest action items the user cannot realistically take. The four modifiable domains (cognitive leisure, social engagement, multilingualism, physical activity with cognitive demand) are sorted ascending by score; if any score is below 70, the callout shows the three lowest; otherwise it shows the two lowest. Each surfaced domain receives evidence-anchored guidance text drawing on the meta-analytic literature for that domain. The 2024 Lancet Commission and Liu 2024 meta-analysis (n=27 longitudinal studies) both demonstrate that late-life reserve-building activity has measurable protective effects, supporting the framing of these domains as actionable.
What instruments and frameworks were considered but not adopted?
We considered direct implementation of the s-CRIq (Mondini 2023) but rejected this because the s-CRIq's online tool requires browsing a database of approximately 6,000 occupations to assign each user to one of five cognitive-load classes — intolerable friction for a self-administered consumer tool. We considered the Lifetime of Experiences Questionnaire (LEQ; Valenzuela & Sachdev 2007) which adds time-stratified scoring across young-adult, mid-life, and late-life periods; we incorporated the time-stratified concept conceptually (the modifiable-factors callout reflects mid-life-and-beyond modifiability) but did not adopt the LEQ's specific item structure. We considered the Cognitive Reserve Assessment Scale in Health (CRASH; Lavrencic 2022); the CRASH's identification of multidimensional structure beyond the CRIq's three domains directly supports our six-domain extension. The Kartschmit 2022 systematic review of cognitive reserve questionnaires identified 17 instruments in this family; the LBL-CRE is positioned as an open-access, browser-based synthesis intended for general-public use rather than research administration.
Last reviewed
Last reviewed: May 4, 2026.
Next scheduled review: November 4, 2026.
Triggers for unscheduled review: publication of major meta-analysis updating effect sizes for any of the six domains; identification of methodological errors via corrections; updates to the Lancet Commission on Dementia Prevention.
For corrections, methodological critique, or research collaboration inquiries: submit via the corrections form.
How to cite this methodology
If you reference this methodology page in academic work, journalism, blog posts, or other publications, please cite it. The corporate author is LifeByLogic; the current version is 1.0 (2026-05-04). A BibTeX entry suitable for academic venues follows.
@misc{lbl_cognitive_reserve_estimator_2026,
author = {{LifeByLogic}},
title = {{Cognitive Reserve Estimator Methodology}},
year = {2026},
version = {1.0},
publisher = {{LifeByLogic}},
howpublished = {Interactive web tool},
url = {https://lifebylogic.com/brain-lab/cognitive-reserve-estimator/methodology/},
note = {Accessed: May 4, 2026}
}
Note on authorship: LifeByLogic is the corporate author. Individual contributors are credited on the about page: this methodology was written by Abiot Y. Derbie, PhD, and reviewed by Eskezeia Y. Dessie, PhD. For non-academic citations (journalism, blog posts), citing “LifeByLogic” is appropriate; for academic citations, the BibTeX entry above is the recommended structure.