
Clinical Review & Education

JAMA Psychiatry | Review

The Science of Prognosis in Psychiatry: A Review

Paolo Fusar-Poli, MD, PhD; Ziad Hijazi, MD, PhD; Daniel Stahl, PhD; Ewout W. Steyerberg, PhD

IMPORTANCE Prognosis is a venerable component of medical knowledge introduced by Hippocrates (460-377 BC). This educational review presents a contemporary evidence-based approach for how to incorporate clinical risk prediction models in modern psychiatry. The article is organized around key methodological themes most relevant for the science of prognosis in psychiatry. Within each theme, the article highlights key challenges and makes pragmatic recommendations to improve scientific understanding of prognosis in psychiatry.

OBSERVATIONS The initial step to building clinical risk prediction models that can affect psychiatric care involves designing the model: preparation of the protocol and definition of the outcomes and of the statistical methods (theme 1). Further initial steps involve carefully selecting the predictors, preparing the data, and developing the model in these data. A subsequent step is the validation of the model to accurately test its generalizability (theme 2). The next consideration is that the accuracy of the clinical prediction model is affected by the incidence of the psychiatric condition under investigation (theme 3). Eventually, clinical prediction models need to be implemented in real-world clinical routine, and this is usually the most challenging step (theme 4). Advanced methods such as machine learning approaches can overcome some problems that undermine the previous steps (theme 5). The relevance of each of these themes to current clinical risk prediction modeling in psychiatry is discussed and recommendations are given.

CONCLUSIONS AND RELEVANCE Together, these perspectives intend to contribute to an integrative, evidence-based science of prognosis in psychiatry. By focusing on the outcome of the individuals, rather than on the disease, clinical risk prediction modeling can become the cornerstone for a scientific and personalized psychiatry.

JAMA Psychiatry. doi:10.1001/jamapsychiatry.2018.2530
Published online October 17, 2018.

Author Affiliations: Author affiliations are listed at the end of this article.

Corresponding Author: Paolo Fusar-Poli, MD, PhD, Department of Psychosis Studies, Institute of Psychiatry, Psychology and Neuroscience, Main Building, 5th Floor, PO63, 16 De Crespigny Park, SE5 8AF London, United Kingdom (paolo.fusar-poli@kcl.ac.uk).

It appears to me a most excellent thing for the physician to cultivate Prognosis; for by foreseeing and foretelling, in the presence of the sick, the present, the past and the future. [...] And he will manage the cure best who has foreseen what is to happen from the present state of matters.
Hippocrates, The Book of Prognostics1(p1)

In the Clinical Challenge2 in this issue of JAMA Psychiatry, we describe the case of an adolescent boy who presented with mild paranoid symptoms along with poor functioning.3 Prognostic evaluation confirmed an enhanced risk of developing serious mental disorders such as psychosis.4,5 The concept of prognosis (from the Greek for foreknowledge: pro, before, and gnosis, knowledge) was introduced by Hippocrates of Kos (460-377 BC), the father of medicine, on the blurred borderline between Greek philosophy and biology. Prognosis, rather than diagnosis, was the main objective of Hippocratic medicine: "there is no such thing as disease; there are individuals who fall ill."6 This contrasts with modern medicine,7 which, since the foundation of a new pathology on an anatomical basis (16th century8), has made diagnosis the cornerstone of medical science.6

Prognosis is foreseeing and foretelling1: essentially, it is to make a prediction of the risk of future events in individual patients (or groups9-11) and the stratification of patients by these risks.12,13 However, while Hippocratic prognosis was a subjective art that was entirely based on the physician's intuition,6,14 modern advancements of knowledge in the field of clinical risk prediction modeling have allowed consolidation of an evidence-based science of prognosis.15

As exemplified in the Clinical Challenge,2 clinical risk prediction models (also referred to as prognostic or predictive models [further discussed later], prediction rules, nomograms, or risk scores13,16) are vital to guiding health care professionals in their decision making about (1) additional testing, (2) initiating or withholding treatments (eg, prevention of psychosis in those at risk17), and (3) informing patients about their risk of developing a particular outcome (eg, in the context of shared decision making).18-20 In the recent (2016-2018) psychiatric literature, clinical risk prediction models have been developed and validated to forecast the onset of emerging mental disorders in those at risk,21-27 the likelihood of clinical remission,28 the occurrence of serious outcomes such as chronicity,29,30 offending behaviors,31 or suicide,32,33 and the response to treatments.34,35 This educational overview illustrates with practical examples and methodologic recommendations the key issues to producing good clinical risk prediction models in psychiatry: model design, selection of predictors, data preparation, model development, internal and external validation, the role of incidence, model implementation, as well as advanced estimation methods. We aim to improve researchers', clinicians', policy makers', and research funders' literacy on the science of prognosis in psychiatry.



Theme 1: Model Design, Selection of Predictors, Data Preparation, and Model Development

Model Design
Commonly, a specific clinical uncertainty motivates the development of a clinical risk prediction model.36 A prediction model relates a number of characteristics (predictors) of (1) the patient, (2) the disease, or (3) the treatment to an outcome. The goal is to predict the likelihood/probability of a future outcome based on the values of the predictors. The starting point for building clinical risk prediction models is the definition of an a priori–defined research protocol.36 The protocol should unambiguously define12 a clinically meaningful outcome to be forecasted,37 such as the onset of the severe psychiatric disorders illustrated in the Clinical Challenge.2 Within clinical risk prediction models, prognostic models are used to forecast outcomes independent of treatments (eg, in untreated individuals), while predictive models are used to forecast treatment-dependent outcomes.38 In practice, the prognostic vs predictive effects are difficult to disentangle outside randomized designs.38 Defining the outcome determines the estimation method that is used, which can be broadly classified into 2 groups: (1) statistical regression methods (the most widely used) and (2) machine learning methods (see section below), although there is no clear distinction between the 2.39 Within regression methods, the most frequent ones15 include logistic regression for binary outcomes (eg, new onset of depression26 or common mental disorders,27 occurrence of violent offending,31 persistence of depressive symptoms,28 late-life depression30) and Cox proportional hazards regression for survival data (eg, time to the new onset of psychosis21,23-25). Longitudinal cohort studies (retrospective21,24,28 or prospective23,25,27,30,32), which can also be based on registry data,21,24,27,31 represent the typical design for prognostic studies in psychiatry.13,15
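To make the regression approach above concrete, the following is a minimal sketch (in Python with scikit-learn) of fitting a prespecified logistic regression risk model on simulated data. The predictors (age, symptom severity, functioning) and the coefficients are hypothetical and are not taken from any published psychosis calculator; for a time-to-event outcome, a Cox proportional hazards model would play the analogous role.

```python
# Minimal sketch: a prespecified logistic regression risk model on simulated
# data (hypothetical predictors; not a published psychosis calculator).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.normal(22, 4, n),    # age, years (hypothetical predictor)
    rng.normal(10, 3, n),    # baseline symptom severity (hypothetical)
    rng.normal(60, 10, n),   # functioning score (hypothetical)
])
# Simulate a binary outcome, eg, transition to psychosis during follow-up
true_logit = -4.0 + 0.05 * X[:, 0] + 0.25 * X[:, 1] - 0.02 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Note: scikit-learn applies a mild L2 penalty by default (a form of shrinkage)
model = LogisticRegression(max_iter=1000).fit(X, y)
predicted_risk = model.predict_proba(X)[:, 1]   # absolute risk per individual
print("events:", int(y.sum()),
      "mean predicted risk:", round(float(predicted_risk.mean()), 3))
```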
Table 1. Developing Good Clinical Risk Prediction Models in Psychiatry

Do: If available, use a priori knowledge such as evidence synthesis methods to select predictors and prespecify models.15
Do not: Use a small development data set to select predictors through stepwise methods with standard, low P values.15,43

Do: Develop the model using a large, high-quality data set.36
Do not: Develop the model using small convenience samples.

Do: Develop the model on the basis of an a priori study protocol with a sound statistical analysis plan.36
Do not: Develop the model through fishing expeditions and data torturing to overfit the best model.

Do: For prespecified models and binary outcomes, allow EPV > 20; if 10 < EPV < 20, consider shrinkage corrections. If EPV < 10, if a priori knowledge is not available, or if data are high dimensional, consider machine learning methods for prediction modeling.15
Do not: Add as many predictors as you can to best fit your data with statistical testing for inclusion, leading to testimation bias, overfitting, and optimism.15,43

Do: Consider the impact of the incidence of the condition: low-incidence binary predictors or outcomes aggravate the EPV issue.44,45
Do not: Assume your prognostic accuracy is independent of the incidence of the condition.

Do: Estimate model optimism through internal validation methods.15
Do not: Report only apparent model performance measures in the development database.

Do: Perform external validation in independent data sets with an adequate number of events (>100)46 or sample size calculations for precision of estimates.47
Do not: Omit planning an external validation study before clinical implementation.

Do: At the stage of model building, consider real-world feasibility and implementation challenges for clinical practice.
Do not: Overlook pragmatic challenges associated with the use of the clinical risk prediction model.

Do: If the external accuracy of an existing model is unsatisfactory, consider model updating (eg, recalibrating or refitting the model in the validation sample or adding novel predictors36).a
Do not: In case of unsatisfactory accuracy, drop the model and develop new ones from scratch.

Do: Follow international guidelines for researching clinical outcomes (PROGRESS 137), predictors (PROGRESS 241), model development, validation, and impact evaluation (PROGRESS 336),b reporting (TRIPOD48), and clinical use for treatment decision making (PROGRESS 449).
Do not: Ignore state-of-the-art methodological guidelines.

Abbreviations: EPV, events per variable; PROGRESS, PROGnosis RESearch Strategy; TRIPOD, Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis.
a When a model is updated in the same data, internal validity should be treated with caution.50
b For the external validation of Cox models, see Royston and Altman51; for traditional and novel model performance measures, see Steyerberg et al52; and for interpreting external validation studies, see Debray et al.53
Selection of Predictors
Once the outcome and the statistical model are defined, the selection of robust and unambiguous12 predictors becomes the most challenging step. No statistical model can ultimately perform well if it is based on poor predictors. Given the massive multivariate and multimodal nature of mental disorders,40 multiple predictors (rather than a single one) need to be combined to estimate an absolute risk or probability that an outcome will occur.13,36,41 Automatic stepwise (backward or forward) selection methods28 use the available data to perform several tests, each time removing or adding a predictor to find the best predictors. A key problem with these methods is that the judgment of the analyst is taken out of the process.42 Using the selected predictors, the final model is then estimated within the same data. Stepwise methods are not recommended for prediction because they are affected by testimation bias (estimation of a clinical risk prediction model after tests for statistical significance conducted in the same data; Table 1).43,54 This bias is particularly severe in small samples, leads to poor clinical risk prediction models, and is often overlooked.15 Notably, the effective sample size for binary predictions, such as the onset of psychotic disorders (see Clinical Challenge2), is actually determined by the incidence of the outcome (ie, the number of psychotic disorder events; events per variable [EPV], Table 2).15
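As a worked illustration of the EPV idea, the sketch below computes the ratio for a hypothetical development cohort and compares it with the rules of thumb discussed in this article; the counts are invented for illustration only.

```python
# Worked example: events per variable (EPV) for a hypothetical development cohort.
n_events = 80          # eg, observed transitions to psychosis during follow-up
df_predictors = 6      # degrees of freedom of the candidate predictors

epv = n_events / df_predictors
print(f"EPV = {epv:.1f}")   # 13.3 in this example

# Rules of thumb discussed in this article (Table 1)
if epv >= 20:
    print("A prespecified model without correction is reasonable.")
elif epv >= 10:
    print("Consider shrinkage (penalized) corrections for overfitting.")
else:
    print("Consider penalized/machine learning methods or fewer predictors.")
```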



A better approach is to prespecify models, selecting predictors on the basis of a priori knowledge and using all of them in the model (Table 1). The prognostic tool used in the Clinical Challenge2 was developed on the basis of a priori knowledge, which identified strong prodromal features predating the onset of psychotic disorders.17 As indicated in Figure 1, several evidence synthesis methods, such as systematic reviews or meta-analyses, can be used to select a priori predictors. In particular, umbrella reviews (ie, reviews of other systematic reviews and meta-analyses on a determinate topic55) deliver one of the highest levels of evidence to stratify the robustness of the association between several factors (eg, risk or protective factors) and outcomes (eg, onset of psychosis56), while controlling for several biases.41 Notably, predicting outcomes is not synonymous with explaining their cause.13,57 Every causal factor is likely a predictor, albeit sometimes a weak one, but not every predictor is an etiopathological cause.13 Examples of prognostic but likely noncausal a priori–selected psychiatric factors are sex, age,24 and various biomarkers.33 In practice, clinical risk prediction models developed on a priori psychiatric knowledge tend to adopt a pragmatic approach58 and include predictors that are quite readily available, not too costly, and can be measured with reasonable precision.15 Two such clinical risk prediction models have been recently published for individuals at risk of psychosis, as introduced in the Clinical Challenge.2,23,24 Some rules of thumb have been suggested. For example, the recommended EPV (Table 2) ratio for developing good a priori–defined clinical risk prediction models with binary outcomes is at least 20;15 when the EPV is lower than 20 but higher than 10, statistical corrections for overfitting, termed shrinkage, are needed (see Table 1 and below). However, a priori knowledge on the predictors is not always available, or it is often based on univariate analyses that have not been validated. Data on complex diseases such as psychiatric disorders are also frequently high dimensional, with the number of variables being large compared with the available sample size and an EPV less than 10.15,59 Machine learning methods might be used,15 although a small sample size also limits their validity (see Theme 5).44

Table 2. Clinical Risk Prediction Modeling Terms and Definitions

Events per variable (EPV): The ratio between the incidence of outcomes (events) and the df of the predictors (variables).
Accuracy: The degree of closeness of predictions of an outcome to that outcome's true value.
Prediction error: The difference between the observed value and the predicted value.
Error due to bias: The systematic difference between the expected model prediction and the correct value.
Error due to variance: The variability of a model prediction for a given data point.
Overfitting: Making an overly complex clinical risk prediction model to fit idiosyncrasies in the data under study.
Underfitting: Making an overly simplistic clinical risk prediction model that does not fit the data under study.
Overall performance: How well the model fits the data (goodness of fit): the distance between the predicted outcome and the actual outcome.
Discrimination: The ability of the model to correctly separate those with from those without the outcome.
Calibration: The agreement between observed outcomes and predicted risks.
Optimism: The difference in a model's performance in the derivation data and in unseen individuals.
Net benefit: How a clinician or a patient weighs the relative harms of false-positive and false-negative results.
Apparent validity: The extent to which the predictions fit the derivation data.
Internal validity: The extent to which the predictions fit the derivation data after controlling for overfitting and optimism.
External validity: The extent to which the predictions can be generalized to data from plausibly related settings.
Model updating: Adjusting clinical risk prediction models to combine the information captured in the original model with information from new individuals or settings.
Incidence: The probability of developing the outcome before the results of the prognostic assessment are known.
Data Preparation
Further steps involve data preparation, such as ensuring availability of all predictors, coding the predictors and outcomes, and dealing with missing data (Figure 1). These steps may introduce additional biases and loss of efficiency in prediction.60 For example, selection of an optimal cut point for a continuous predictor, examining different transformations of predictors or outcomes (the temptation to convert continuous variables into categories should be resisted60,61), examining different coding variants of a categorical predictor, merging or creating groups, or overlooking the problem of missing data are all frequent causes of biases.15 Researchers are often not aware that such data preprocessing affects model design and results in biased models.50

Model Development
To develop a clinical risk prediction model, empirical data from a sample of study individuals drawn from a population are used (termed the derivation, development, or training data). The accuracy of the prediction model is then usually measured through (1) overall performance (eg, measures of explained variability [R2, Brier score]52; Table 2), (2) discrimination (eg, sensitivity, specificity, area under the curve, or Harrell C statistic52; Table 2), (3) calibration (eg, regression slope of the prognostic index, calibration in the large, calibration plots52; Table 2), and (4) clinical utility (net benefit analyses52,62; Table 2). When the model is fitted to the development data, these measures indicate an apparent performance of the model only (Table 2 and Figure 1). The apparent accuracy of the model increases when more predictors are added (Figure 1). As indicated in Figure 1, a model's error in fitting the data (prediction error) can be deconstructed into error due to bias (the model's inability to accurately capture patterns in the data) and error due to variance (the model's inability to generalize outside the data; Table 2). Simple models with few predictors typically have lower variance predictions outside the data but higher bias because they may not capture important characteristics of the data (ie, they underfit the data). When more predictors are added, models become more complex, enabling them to represent the training set more accurately with lower bias. However, any data have some degree of random noise; attempting to make the model conform too closely to these slightly inaccurate data can infect the model with higher variance despite the added complexity (ie, overfit the data). This bias-variance tradeoff (or dilemma) is a key issue in prediction research.39 An illustrative example would be a linear regression model of a continuous predictor (over)fitted to a continuous outcome with just 2 data points. The fitted model would be the line connecting the 2 data points, and its apparent accuracy would be perfect. However, when the model is externally validated in new data (termed validation or test data), the overoptimistic estimate of its initial accuracy would be discovered (Table 2). Optimism in a model's (apparent) performance increases when the sample size, specifically the EPV, decreases.61 Both overfitting and underfitting increase the prediction error and decrease the accuracy of clinical risk prediction models (Figure 1). An optimal model compromises between the 2 aspects (Figure 1).
Theme 2: Internal and External Model Validation

Internal Validity
To develop robust models, it is essential to estimate the model's performance using internal validation methods to adjust for optimism (Table 1).63 Internal validation is performed on the development data set by fitting the model in a training data set and then assessing performance in a test data set of unseen cases from the same underlying population.



Figure 1. Essential Steps to Building Clinical Risk Prediction Models in Psychiatry
[Six panels: A, Select predictors (evidence synthesis hierarchy: umbrella reviews, meta-analyses, systematic reviews, individual studies; secondary and primary research); B, Prepare data; C, Model development (model prediction error, bias, and variance plotted against model complexity/number of predictors, with underfit, fit, and overfit regions); D, Internal validation (holdout, K-fold, leave-one-out, and bootstrap resampling; apparent vs internal validation and optimism); E, External validation (model accuracy in the external validation data set against model complexity); F, Implementation and impact (usability, user experience, accessibility, cost-effectiveness, ergonomics, marketing, clinical implementation).]
Once the model design is finalized, predictors are preferably selected through a priori knowledge provided by evidence synthesis methods (A and B). Data are then cleaned and prepared (avoiding biases) (B), and the model is estimated in the development data set, controlling for underfitting and overfitting problems, which lead to poor clinical risk prediction models (C). To estimate the model's performance in new cases from the same population (optimism), internal validation methods are used (D). The model's accuracy is then tested in the external validation data set (E). In the final step, the model is implemented in psychiatric routine, and its clinical impacts are evaluated (F).

A classic approach is to randomly partition the data set into 2 sets, training and test (termed holdout; Figure 1). This is statistically inefficient because data are wasted (not all available data are used to produce the model) and replication is unstable (in that different random splits give different results).61 Better alternatives are offered by cross-validation and bootstrapping methods, termed resampling methods (Figure 1). In K-fold cross-validation, K subsamples are created, 1 of them is retained as the validation data, and the remaining K-1 subsamples are used as training data. The cross-validation process is then repeated K times (the folds), and model accuracy is then pooled from all test sets. Leave-one-out cross-validation is K-fold cross-validation taken to its logical extreme, with K equal to the number of data points. Bootstrapping is an attractive method for internal validation. Bootstrapping involves repeatedly sampling from the development data set with replacement, forming a large number (eg, 100-500 times61) of bootstrap data sets, each of the same size as the original data. The idea is that the development data take the place of the population of interest, and the bootstrap samples represent samples from that population.15 When a model fitted using the bootstrap samples is applied to the development data, its accuracy will be lower than its apparent accuracy. The differences in these accuracies for each bootstrap sample are then averaged, indexing the model's optimism. This optimism is subsequently subtracted from the original estimate to obtain an optimism-corrected estimate, which indexes how the model will perform in future new study participants from the same underlying population.64,65 Internal validation, eg, with bootstrapping and 10-fold cross-validation,61 requires advanced statistical expertise and is increasingly used in psychiatry.27,30,31
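A condensed sketch of the bootstrap optimism-correction procedure described above follows, using simulated data and 200 bootstrap samples for brevity; a 10-fold cross-validated estimate (eg, via scikit-learn's cross_val_score) would be a common alternative.

```python
# Sketch: bootstrap optimism correction of the apparent AUC, following the
# procedure described above (simulated data; 200 bootstrap samples for brevity).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           weights=[0.8, 0.2], random_state=2)

def fit(X_, y_):
    return LogisticRegression(max_iter=1000).fit(X_, y_)

def auc(model, X_, y_):
    return roc_auc_score(y_, model.predict_proba(X_)[:, 1])

apparent = auc(fit(X, y), X, y)
optimism = []
for b in range(200):
    Xb, yb = resample(X, y, random_state=b)     # sample with replacement
    model_b = fit(Xb, yb)
    # performance in the bootstrap sample minus performance in the original data
    optimism.append(auc(model_b, Xb, yb) - auc(model_b, X, y))

corrected = apparent - float(np.mean(optimism))
print(f"apparent AUC={apparent:.2f}  mean optimism={np.mean(optimism):.3f}  "
      f"optimism-corrected AUC={corrected:.2f}")
```

Reporting the optimism-corrected estimate alongside the apparent one makes the degree of overfitting explicit.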
External Validity
Internal validation does not quantify the degree of heterogeneity that will be encountered in real-life applications of the model. It is therefore essential to confirm that any developed model also predicts well in similar but different individuals outside the development set (external validation; Table 1 and Table 2).66 To interpret the model's performance in the context of external validation, it is important to quantify the degree of relatedness between the development and validation samples (differences in case mix, outcomes, and distribution of predictors53). Within certain limits,12,53 the more the validation sample differs from the development sample, the stronger the test of generalizability of the model.19 Importantly, external validation is not about refitting the model to the external data and checking whether its performance is still good.51



Figure 2. Impact of Incidence on Positive Predictive Values (PPV) and Negative Predictive Values (NPV)
[Two panels plotting the 3-year predictive values of the prognostic test against the incidence of psychosis, with curves for the positive predictive value, the negative predictive value, and a random prognostic test (eg, tossing a coin). A, Help-seeking individuals referred to early detection services vs the general population. B, Individuals with 22q11.2 deletion syndrome vs psychiatric patients in forensic units.]
A, When the prognostic tool presented in the Clinical Challenge2 is used in standard samples of help-seeking individuals accessing psychosis early-detection services (who have a 0.15 risk of developing psychosis, or incidence), the 3-year PPV is 0.26 and the 3-year NPV 0.02. B, When the same prognostic tool is used in individuals with 22q11.2 deletion syndrome, similar PPV/NPV are observed; when the tool is used in the general population (lower incidence, blue line) or in psychiatric patients admitted to forensic units (higher incidence, orange line), unsatisfactory PPV/NPV are observed.77

External validation is (1) taking the model with its predictors and assigned weights (eg, regression coefficients) as estimated from the development data and internally validated; (2) obtaining the measured predictors and outcomes in the new individuals; and (3) applying it to the new individuals, quantifying its prognostic accuracy (overall performance, discrimination, calibration, and clinical utility).19 New individuals may be from the same institution at a later time (temporal external validation, eg, done through nonrandom splitting of an existing data set by the moment of inclusion),28 from different sites (geographical external validation, eg, done through nonrandom splitting of existing study data by center or country),21,24-27,31,32 or individuals very different from those in whom the model was developed (external domain or setting validation, eg, validating in primary care a prediction model that was developed in secondary care). External validation may also be carried out retrospectively, eg, using registry data.67 External validation studies are extremely valuable in psychiatry and are "all that matters."36 As is the case in biomedical science more broadly,68 psychiatry has a serious replication crisis69 to the point that scientifically, replication becomes equally or even more important than discovery.70 A recent systematic review of clinical risk prediction models for the individuals at risk of psychosis described in the Clinical Challenge2 uncovered 91 studies, none of which performed a true external validation.71 Validation studies have equally been neglected in other psychiatric disorders.72 Several guidelines are available to guide the external validation of clinical risk prediction models (Table 1).19,36,51 In the case of binary outcomes, some authors recommend that at least 100 events should be included46 or sample size calculations for precision of estimates should be performed.47 The external accuracy is commonly lower than the internal accuracy (Figure 1).19 When the external prognostic accuracy of the model is unsatisfactory, researchers usually tend to reject the model and develop a new one, leading to a loss of previous scientific information. This contrasts with the progressive knowledge framework of evidence-based medicine.19 This could also leave clinicians with the impracticable situation of having to decide which model to use (eg, competing tools available for individuals at risk are described in the Clinical Challenge2 and elsewhere73). A better alternative to redeveloping new models in each new patient sample is to update (Table 1 and Table 2) existing models and adjust them to the local circumstances or setting of the validation sample.12,19,36 Adjustment or updating may include recalibrating the model,19 refitting the model in the validation sample,19 or assessing the added value of a new predictor.61 Naturally, the validity of the updated model needs to be assessed again in a new data set.
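The sketch below illustrates the 3 steps of external validation described above on simulated data: the frozen intercept and coefficients are hypothetical, the "new individuals" are drawn from a deliberately shifted population, and the model is applied without refitting before discrimination and a simple calibration-in-the-large summary are computed.

```python
# Sketch: external validation of a frozen model in new, simulated individuals,
# without refitting. Intercept and coefficients are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

# (1) The previously developed and internally validated model (frozen weights)
intercept = -4.6
coefs = np.array([0.04, 0.30, -0.02])   # hypothetical: age, severity, functioning

# (2) Measured predictors and observed outcomes in the new individuals
rng = np.random.default_rng(7)
n = 400
X_new = np.column_stack([rng.normal(25, 5, n),
                         rng.normal(12, 3, n),
                         rng.normal(55, 12, n)])
true_lp = -5.4 + X_new @ coefs           # the new setting is lower risk overall
y_new = rng.binomial(1, 1 / (1 + np.exp(-true_lp)))

# (3) Apply the frozen model and quantify prognostic accuracy
lp = intercept + X_new @ coefs
p = 1 / (1 + np.exp(-lp))
auc = roc_auc_score(y_new, p)                         # discrimination
calibration_in_the_large = y_new.mean() - p.mean()    # simple observed - predicted
print(f"external AUC={auc:.2f}  "
      f"observed-predicted rate={calibration_in_the_large:+.3f}")
```

A negative observed-minus-predicted difference of this kind is the situation in which recalibration (updating the intercept, or intercept and slope) is often preferred over discarding the model.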
Theme 3: The Role of the Incidence
The incidence of the psychiatric condition relates to the number of events and is often overlooked despite its influence on the real-world external accuracy of clinical risk prediction models.4 For example, the discriminative performance of the tool described in the Clinical Challenge2 is frequently indexed by the positive predictive value (ie, how likely it is that an individual with a positive test result will develop the condition) and the negative predictive value (ie, how likely it is that an individual with a negative test result will not develop the condition). These values are sensitive to the incidence of the condition in the population of interest (eg, the level of psychosis risk in the sample undergoing the test21,74-76) or across different clinical settings and can be estimated using plots such as those in Figure 2.77 Overall, clinical risk prediction models should be developed and externally validated in tandem with rigorous epidemiologic studies that ascertain the incidence of the psychiatric condition under investigation (Table 1).78
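In the spirit of Figure 2, the following sketch shows how PPV and NPV shift with incidence when sensitivity and specificity are held fixed; the values of 0.70 and 0.80 are illustrative and are not the estimates from the cited meta-analyses.

```python
# Sketch: PPV and NPV as a function of incidence for a prognostic test with
# fixed sensitivity and specificity (illustrative values only).
def ppv_npv(sensitivity, specificity, incidence):
    tp = sensitivity * incidence
    fp = (1 - specificity) * (1 - incidence)
    fn = (1 - sensitivity) * incidence
    tn = specificity * (1 - incidence)
    return tp / (tp + fp), tn / (tn + fn)

sens, spec = 0.70, 0.80
for incidence in (0.01, 0.15, 0.50):
    ppv, npv = ppv_npv(sens, spec, incidence)
    print(f"incidence={incidence:.2f}  PPV={ppv:.2f}  NPV={npv:.2f}")
```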
Theme 4: Model Implementation
Clinical risk prediction models can affect the health of psychiatric patients and the cost-effectiveness of care only when the predicted risks provided by the model change individuals' or health care professionals' behavior and management decisions.12,79 This is studied in model impact studies, which require a comparative design.12,79,80 A control group may be randomly assigned to usual care or management without the use of predictions from the model, while in the intervention group, those predictions are made available to individuals and/or health care professionals to guide their behavior and decision making.19



Availability of electronic health records that can automatically give predictions for individual patients in routine care may improve implementation and accordingly impact analyses.49,81-83 Many more models are developed than are eventually implemented or used in clinical practice,36,83 likely because they are too complex: simplicity of models and reliability of measurements are important criteria for real-world use.13,36,84,85 Defining who will use, how to use, and when (in which clinical circumstances) to use clinical risk prediction models is essential (Table 1). Prognostic models to be used at scale should be relatively simple: clinical risk prediction models with a modest accuracy but with high face validity and broad usability can have a substantial clinical impact.12,35 An example is a clinically based individualized model that is being implemented in the national health system in the United Kingdom to automatically detect individuals at risk for psychosis at scale.67 Vice versa, more complex models can be reserved for subgroups of patients within a sequential testing framework49,86 (eg, to stratify participants into randomized clinical trials23).

Theme 5: Advanced Prognostic Methods
During recent years, advanced prognostic methods such as machine learning approaches have been used in psychiatry with the aim of improving clinical risk prediction (eg, persistence of depression or adverse drug reactions29,34), in particular with the high-dimensional data that characterize psychiatric disorders.40,87 While the statistical approach formalizes the relationships between the data using prespecified equations (statistical models), machine learning develops models using self-learning automatized algorithms.87 Optimization of the algorithm is based on maximizing correct predictions of unseen data, using extensions of the cross-validation procedures described above. However, since no background theory is required, black box algorithms may arise with difficult clinical interpretability.88 If a large number of predictors is modeled, this may also prevent implementability in psychiatric care.67 These caveats can be overcome by regularized regression methods, which are at the intersection of machine learning and statistics (Figure 3). Regularization is a process of introducing a penalization, termed a shrinkage factor, which results in the shrinkage of the regression coefficients of the clinical risk prediction model (Figure 3). Shrinkage prevents overfitting and thus produces better models.39,89 Regularization methods perform an automatic selection of predictors and have been used to predict the level of risk enrichment in people undergoing an assessment for psychosis risk (as in the Clinical Challenge2)21 or treatment response in depression35 (eAppendix in the Supplement).

Figure 3. Regularized Methods
[Figure: 4 data points overfitted by the quartic function Y(x) = -x^4 + 7x^3 - 5x^2 - 31x + 30 (overfitted function) and its regularized counterpart Y2(x) = Y(x)/5, obtained with a shrinkage factor of 5; regularized methods sit at the interface between statistics and machine learning.]
Regularized methods are at the interface between statistics and machine learning and can perform automatic selection of predictors and regularization, minimizing overfitting problems. As an example, the 4 data points (in orange) were overfitted by the green Y(x) function, which is then regularized into the gray function. Regularization is achieved by dividing all of the parameters of the green function by a correction factor termed the shrinkage factor.
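Following the regularization idea illustrated in Figure 3, the sketch below shows one common regularized approach: L1-penalized (lasso-type) logistic regression with the penalty strength chosen by cross-validation, which shrinks coefficients and sets some exactly to zero (automatic predictor selection). The simulated data and settings are illustrative and do not reproduce the cited psychosis or depression models; ridge (L2) and elastic net penalties are closely related alternatives that shrink without, or with only partial, variable selection.

```python
# Sketch: L1-penalized (lasso-type) logistic regression with the penalty chosen
# by cross-validation; coefficients are shrunk and some are set exactly to zero.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=300, n_features=30, n_informative=5,
                           weights=[0.8, 0.2], random_state=3)
model = LogisticRegressionCV(Cs=10, cv=5, penalty="l1", solver="liblinear",
                             scoring="roc_auc", max_iter=1000).fit(X, y)
coefs = model.coef_.ravel()
print("predictors retained:", int(np.sum(coefs != 0)), "of", X.shape[1])
```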
Prognostics as a Fundamental Basis of Scientific Psychiatry
Psychiatry differs from any other medical specialty in that anatomy and pathophysiology are not the cornerstone of its clinical knowledge.6 After about 2 centuries of biological research, to date there are few established etiopathological mechanisms and no reliable biomarkers for psychiatric disorders,88,90 which are still entirely classified on heterogeneous symptoms or behaviors.40,91 Psychiatry is therefore the most Hippocratic medical discipline; in psychiatry, there are no anatomical diseases (diagnosis) but only individuals who fall ill (prognosis).6 Clinical decision making in psychiatry should therefore pragmatically embrace prognosis58 beyond diagnosis, and clinicians will need to become more comfortable with the untidy methodological pluralism92 developing in medicine, in which relevant knowledge comes from different sources beyond diagnostic classification and in which the value of that knowledge varies from patient to patient.93 The Clinical Challenge2 and recent transdiagnostic clinical risk prediction models illustrate how patients can be classified into those who are at risk (or not) of developing psychosis, independent from their baseline diagnoses.24,94 On an educational level, this adds fuel to the idea that the science of estimating uncertainty or risk, also termed prognostics, should take a prominent place in psychiatric teaching and training.7,14,37 This field will further grow with the advent of large neurobiologically based data sets (big data95), which can be analyzed with increasingly advanced statistical or machine learning techniques.

Conclusions
Prognosis is a venerable component of clinical medicine and an essential component of a scientific psychiatry.



By focusing on the individual rather than on the disease, prognosis becomes the cornerstone for shared decision-making approaches and personalized medicine.49 To be useful for these purposes, a clinical risk prediction model must provide valid and reliable estimates of the risks, and the uptake of those estimates should pragmatically affect patient outcomes and care.

ARTICLE INFORMATION

Accepted for Publication: July 19, 2018.

Published Online: October 17, 2018. doi:10.1001/jamapsychiatry.2018.2530

Author Affiliations: Early Psychosis: Interventions and Clinical-detection (EPIC) Lab, Department of Psychosis Studies, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, United Kingdom (Fusar-Poli); OASIS Service, South London and Maudsley National Health Service Foundation Trust, London, United Kingdom (Fusar-Poli); Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy (Fusar-Poli); Department of Medical Sciences, Cardiology, and Uppsala Clinical Research Center, Uppsala University, Uppsala University Hospital, Uppsala, Sweden (Hijazi); Department of Biostatistics and Health Informatics, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, United Kingdom (Stahl); Department of Biomedical Data Sciences, Medical Statistics and Medical Decision Making, Leiden University Medical Center, Leiden, the Netherlands (Steyerberg).

Author Contributions: Dr Fusar-Poli had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Drs Stahl and Steyerberg served as co–last authors and contributed equally to the work.
Concept and design: All authors.
Acquisition, analysis, or interpretation of data: Fusar-Poli.
Drafting of the manuscript: Fusar-Poli.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Fusar-Poli, Stahl.
Obtained funding: Fusar-Poli.
Administrative, technical, or material support: Fusar-Poli.
Supervision: Fusar-Poli, Stahl.

Conflict of Interest Disclosures: Dr Fusar-Poli has served on the advisory board for Lundbeck. No other disclosures are reported.

Funding/Support: This work is supported in part by a King's College London Confidence in Concept award (MC_PC_16048) from the Medical Research Council (Dr Fusar-Poli). Dr Stahl was partly funded by the National Institute for Health Research Biomedical Research Centre at South London and Maudsley National Health Service Foundation Trust and King's College London. Dr Steyerberg was partially supported through a Patient-Centered Outcomes Research Institute award (ME-1606-35555).

Role of the Funder/Sponsor: The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Disclaimer: The views expressed are those of the authors and not necessarily those of the National Health Service, the National Institute for Health Research, Patient-Centered Outcomes Research Institute, or the Department of Health.

REFERENCES

1. Hippocrates. The Book of Prognostics. Gloucester, United Kingdom: Dodo Press; 2009.
2. Fusar-Poli P, Davies C, Bonoldi I. A case of a college student presenting with mild mental health problems [published online October 17, 2018]. JAMA Psychiatry. doi:10.1001/jamapsychiatry.2018.2530
3. Fusar-Poli P, Rocchetti M, Sardella A, et al. Disorder, not just state of risk: meta-analysis of functioning and quality of life in people at high risk of psychosis. Br J Psychiatry. 2015;207(3):198-206. doi:10.1192/bjp.bp.114.157115
4. Fusar-Poli P, Schultze-Lutter F. Predicting the onset of psychosis in patients at clinical high risk: practical guide to probabilistic prognostic reasoning. Evid Based Ment Health. 2016;19(1):10-15. doi:10.1136/eb-2015-102295
5. Fusar-Poli P. The Clinical High-Risk State for Psychosis (CHR-P), Version II. Schizophr Bull. 2017;43(1):44-47. doi:10.1093/schbul/sbw158
6. Pagel W. Prognosis and diagnosis: a comparison of ancient and modern medicine. J Warburg Inst. 1939;2(4):382-398. doi:10.2307/750046
7. Croft P, Dinant GJ, Coventry P, Barraclough K. Looking to the future: should 'prognosis' be heard as often as 'diagnosis' in medical education? Educ Prim Care. 2015;26(6):367-371. doi:10.1080/14739879.2015.1101863
8. van den Tweel JG, Taylor CR. A brief history of pathology: preface to a forthcoming series that highlights milestones in the evolution of pathology as a discipline. Virchows Arch. 2010;457(1):3-10. doi:10.1007/s00428-010-0934-4
9. Fusar-Poli P, Cappucciati M, Borgwardt S, et al. Heterogeneity of psychosis risk within individuals at clinical high risk: a meta-analytical stratification. JAMA Psychiatry. 2016;73(2):113-120. doi:10.1001/jamapsychiatry.2015.2324
10. Fusar-Poli P, Cappucciati M, De Micheli A, et al. Diagnostic and prognostic significance of brief limited intermittent psychotic symptoms (BLIPS) in individuals at ultra high risk. Schizophr Bull. 2017;43(1):48-56. doi:10.1093/schbul/sbw151
11. Fusar-Poli P, Cappucciati M, Bonoldi I, et al. Prognosis of brief psychotic episodes: a meta-analysis. JAMA Psychiatry. 2016;73(3):211-220. doi:10.1001/jamapsychiatry.2015.2313
12. Moons KG, Altman DG, Vergouwe Y, Royston P. Prognosis and prognostic research: application and impact of prognostic models in clinical practice. BMJ. 2009;338:b606. doi:10.1136/bmj.b606
13. Moons KG, Royston P, Vergouwe Y, Grobbee DE, Altman DG. Prognosis and prognostic research: what, why, and how? BMJ. 2009;338:b375. doi:10.1136/bmj.b375
14. Fraser-Darling A. The art and science of prognosis in general practice. J Coll Gen Pract Res Newsl. 1958;1(2):129-140.
15. Steyerberg E. Clinical Prediction Models. New York, NY: Springer; 2009. doi:10.1007/978-0-387-77244-8
16. Adams ST, Leveson SH. Clinical prediction rules. BMJ. 2012;344:d8312. doi:10.1136/bmj.d8312
17. Fusar-Poli P, Borgwardt S, Bechdolf A, et al. The psychosis high-risk state: a comprehensive state-of-the-art review. JAMA Psychiatry. 2013;70(1):107-120. doi:10.1001/jamapsychiatry.2013.269
18. Elwyn G, Frosch D, Thomson R, et al. Shared decision making: a model for clinical practice. J Gen Intern Med. 2012;27(10):1361-1367. doi:10.1007/s11606-012-2077-6
19. Moons KG, Kengne AP, Grobbee DE, et al. Risk prediction models, II: external validation, model updating, and impact assessment. Heart. 2012;98(9):691-698. doi:10.1136/heartjnl-2011-301247
20. Croft P, Altman DG, Deeks JJ, et al. The science of clinical practice: disease diagnosis or patient prognosis? evidence about "what is likely to happen" should shape clinical practice. BMC Med. 2015;13:20. doi:10.1186/s12916-014-0265-4
21. Fusar-Poli P, Rutigliano G, Stahl D, et al. Deconstructing pretest risk enrichment to optimize prediction of psychosis in individuals at clinical high risk. JAMA Psychiatry. 2016;73(12):1260-1267. doi:10.1001/jamapsychiatry.2016.2707
22. Oliver D, Kotlicka-Antczak M, Minichino A, Spada G, McGuire P, Fusar-Poli P. Meta-analytical prognostic accuracy of the Comprehensive Assessment of At Risk Mental States (CAARMS): the need for refined prediction. Eur Psychiatry. 2018;49:62-68. doi:10.1016/j.eurpsy.2017.10.001
23. Cannon TD, Yu C, Addington J, et al. An individualized risk calculator for research in prodromal psychosis. Am J Psychiatry. 2016;173(10):980-988.
24. Fusar-Poli P, Rutigliano G, Stahl D, et al. Development and validation of a clinically based risk calculator for the transdiagnostic prediction of psychosis. JAMA Psychiatry. 2017;74(5):493-500. doi:10.1001/jamapsychiatry.2017.0284
25. Carrión RE, Cornblatt BA, Burton CZ, et al. Personalized prediction of psychosis: external validation of the NAPLS-2 psychosis risk calculator with the EDIPPP project. Am J Psychiatry. 2016;173(10):989-996. doi:10.1176/appi.ajp.2016.15121565
26. Nigatu YT, Liu Y, Wang J. External validation of the international risk prediction algorithm for major depressive episode in the US general population: the PredictD-US study. BMC Psychiatry. 2016;16:256. doi:10.1186/s12888-016-0971-x
27. Fernandez A, Salvador-Carulla L, Choi I, Calvo R, Harvey SB, Glozier N. Development and validation of a prediction algorithm for the onset of common mental disorders in a working population. Aust N Z J Psychiatry. 2018;52(1):47-58. doi:10.1177/0004867417704506
28. Angstman KB, Garrison GM, Gonzalez CA, Cozine DW, Cozine EW, Katzelnick DJ. Prediction of primary care depression outcomes at six months: validation of DOC-6©. J Am Board Fam Med. 2017;30(3):281-287. doi:10.3122/jabfm.2017.03.160313



29. Kessler RC, van Loo HM, Wardenaar KJ, et al. Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports. Mol Psychiatry. 2016;21(10):1366-1371. doi:10.1038/mp.2015.198
30. Maarsingh OR, Heymans MW, Verhaak PF, Penninx BWJH, Comijs HC. Development and external validation of a prediction rule for an unfavorable course of late-life depression: a multicenter cohort study. J Affect Disord. 2018;235:105-113. doi:10.1016/j.jad.2018.04.026
31. Fazel S, Wolf A, Larsson H, Lichtenstein P, Mallett S, Fanshawe TR. Identification of low risk of violent crime in severe mental illness with a clinical prediction tool (Oxford Mental Illness and Violence tool [OxMIV]): a derivation and validation study. Lancet Psychiatry. 2017;4(6):461-468. doi:10.1016/S2215-0366(17)30109-8
32. Liu Y, Sareen J, Bolton JM, Wang JL. Development and validation of a risk prediction algorithm for the recurrence of suicidal ideation among general population with low mood. J Affect Disord. 2016;193:11-17. doi:10.1016/j.jad.2015.12.072
33. Levey DF, Niculescu EM, Le-Niculescu H, et al. Towards understanding and predicting suicidality in women: biomarkers and clinical risk assessment. Mol Psychiatry. 2016;21(6):768-785. doi:10.1038/mp.2016.31
34. Bean DM, Wu H, Iqbal E, et al. Knowledge graph prediction of unknown adverse drug reactions and validation in electronic health records. Sci Rep. 2017;7(1):16416. doi:10.1038/s41598-017-16674-x
35. Chekroud AM, Zotti RJ, Shehzad Z, et al. Cross-trial prediction of treatment outcome in depression: a machine learning approach. Lancet Psychiatry. 2016;3(3):243-250. doi:10.1016/S2215-0366(15)00471-X
36. Steyerberg EW, Moons KG, van der Windt DA, et al; PROGRESS Group. Prognosis Research Strategy (PROGRESS) 3: prognostic model research. PLoS Med. 2013;10(2):e1001381. doi:10.1371/journal.pmed.1001381
37. Hemingway H, Croft P, Perel P, et al; PROGRESS Group. Prognosis research strategy (PROGRESS) 1: a framework for researching clinical outcomes. BMJ. 2013;346:e5595. doi:10.1136/bmj.e5595
38. Adolfsson J, Steineck G. Prognostic and treatment-predictive factors-is there a difference? Prostate Cancer Prostatic Dis. 2000;3(4):265-268. doi:10.1038/sj.pcan.4500490
39. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. New York, NY: Springer; 2009. doi:10.1007/978-0-387-84858-7
40. Hahn T, Nierenberg AA, Whitfield-Gabrieli S. Predictive analytics in mental health: applications, guidelines, challenges and perspectives. Mol Psychiatry. 2017;22(1):37-43. doi:10.1038/mp.2016.201
41. Riley RD, Hayden JA, Steyerberg EW, et al; PROGRESS Group. Prognosis Research Strategy (PROGRESS) 2: prognostic factor research. PLoS Med. 2013;10(2):e1001380. doi:10.1371/journal.pmed.1001380
42. Hosmer W, Lemeshow S. Applied Survival Analysis: Regression Modeling of Time to Event Data. New York, NY: Wiley & Sons; 1999.
43. Steyerberg EW, Eijkemans MJ, Habbema JD. Stepwise selection in small data sets: a simulation study of bias in logistic regression analysis. J Clin Epidemiol. 1999;52(10):935-942. doi:10.1016/S0895-4356(99)00103-1
44. Pavlou M, Ambler G, Seaman SR, et al. How to develop a more accurate risk prediction model when there are few events. BMJ. 2015;351:h3868. doi:10.1136/bmj.h3868
45. Ogundimu EO, Altman DG, Collins GS. Adequate sample size for developing prediction models is not simply related to events per variable. J Clin Epidemiol. 2016;76:175-182. doi:10.1016/j.jclinepi.2016.02.031
46. Collins GS, Ogundimu EO, Altman DG. Sample size considerations for the external validation of a multivariable prognostic model: a resampling study. Stat Med. 2016;35(2):214-226. doi:10.1002/sim.6787
47. Hajian-Tilaki K. Sample size estimation in diagnostic test studies of biomedical informatics. J Biomed Inform. 2014;48:193-204. doi:10.1016/j.jbi.2014.02.013
48. Collins GS, Reitsma JB, Altman DG, Moons KG. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement. Ann Intern Med. 2015;162(1):55-63. doi:10.7326/M14-0697
49. Hingorani AD, Windt DA, Riley RD, et al; PROGRESS Group. Prognosis research strategy (PROGRESS) 4: stratified medicine research. BMJ. 2013;346:e5793. doi:10.1136/bmj.e5793
50. Stahl D, Pickles A. Fact or fiction: reducing the proportion and impact of false positives. Psychol Med. 2018;48(7):1084-1091. doi:10.1017/S003329171700294X
51. Royston P, Altman DG. External validation of a Cox prognostic model: principles and methods. BMC Med Res Methodol. 2013;13:33. doi:10.1186/1471-2288-13-33
52. Steyerberg EW, Vickers AJ, Cook NR, et al. Assessing the performance of prediction models: a framework for traditional and novel measures. Epidemiology. 2010;21(1):128-138. doi:10.1097/EDE.0b013e3181c30fb2
53. Debray TP, Vergouwe Y, Koffijberg H, Nieboer D, Steyerberg EW, Moons KG. A new framework to enhance the interpretation of external validation studies of clinical prediction models. J Clin Epidemiol. 2015;68(3):279-289. doi:10.1016/j.jclinepi.2014.06.018
54. Harrell F. Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis. 2nd ed. Cham, Switzerland: Springer; 2015. doi:10.1007/978-3-319-19425-7
55. Fusar-Poli P, Radua J. Ten simple rules for conducting umbrella reviews. Evid Based Ment Health. 2018;21(3):95-100. doi:10.1136/ebmental-2018-300014
56. Radua J, Ramella-Cravaro V, Ioannidis JPA, et al. What causes psychosis? an umbrella review of risk and protective factors. World Psychiatry. 2018;17(1):49-66. doi:10.1002/wps.20490
57. Brotman DJ, Walker E, Lauer MS, O'Brien RG. In search of fewer independent risk factors. Arch Intern Med. 2005;165(2):138-145. doi:10.1001/archinte.165.2.138
58. Paulus MP. Evidence-based pragmatic psychiatry: a call to action. JAMA Psychiatry. 2017;74(12):1185-1186. doi:10.1001/jamapsychiatry.2017.2439
59. Peduzzi P, Concato J, Kemper E, Holford TR, Feinstein AR. A simulation study of the number of events per variable in logistic regression analysis. J Clin Epidemiol. 1996;49(12):1373-1379. doi:10.1016/S0895-4356(96)00236-3
60. Taylor J, Yu M. Bias and efficiency loss due to categorizing an explanatory variable. J Multivariate Anal. 2002;83(1):248-263. doi:10.1006/jmva.2001.2045
61. Moons KG, Kengne AP, Woodward M, et al. Risk prediction models: I: development, internal validation, and assessing the incremental value of a new (bio)marker. Heart. 2012;98(9):683-690. doi:10.1136/heartjnl-2011-301246
62. Vickers AJ, Van Calster B, Steyerberg EW. Net benefit approaches to the evaluation of prediction models, molecular markers, and diagnostic tests. BMJ. 2016;352:i6. doi:10.1136/bmj.i6
63. Steyerberg EW, Harrell FE Jr. Prediction models need appropriate internal, internal-external, and external validation. J Clin Epidemiol. 2016;69:245-247. doi:10.1016/j.jclinepi.2015.04.005
64. Efron B. Bootstrap methods: another look at the jackknife. Ann Stat. 1979;7(1):1-26. doi:10.1214/aos/1176344552
65. Harrell FE Jr, Lee KL, Mark DB. Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat Med. 1996;15(4):361-387. doi:10.1002/(SICI)1097-0258(19960229)15:4<361::AID-SIM168>3.0.CO;2-4
66. Justice AC, Covinsky KE, Berlin JA. Assessing the generalizability of prognostic information. Ann Intern Med. 1999;130(6):515-524. doi:10.7326/0003-4819-130-6-199903160-00016
67. Fusar-Poli P, Werbeloff N, Rutigliano G, et al. Transdiagnostic risk calculator for the automatic detection of individuals at risk and the prediction of psychosis: second replication in an independent National Health Service Trust [published online June 12, 2018]. Schizophr Bull. doi:10.1093/schbul/sby070
68. Ioannidis JPA, Khoury MJ. Improving validation practices in "omics" research. Science. 2011;334(6060):1230-1232. doi:10.1126/science.1211811
69. Szucs D, Ioannidis JP. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature. PLoS Biol. 2017;15(3):e2000797. doi:10.1371/journal.pbio.2000797
70. Ioannidis JPA. Evolution and translation of research findings: from bench to where? PLoS Clin Trials. 2006;1(7):e36. doi:10.1371/journal.pctr.0010036
71. Studerus E, Ramyead A, Riecher-Rössler A. Prediction of transition to psychosis in patients with a clinical high risk for psychosis: a systematic review of methodology and reporting. Psychol Med. 2017;47(7):1163-1178. doi:10.1017/S0033291716003494
72. Bernardini F, Attademo L, Cleary SD, et al. Risk prediction models in psychiatry: toward a new frontier for the prevention of mental illnesses. J Clin Psychiatry. 2017;78(5):572-583. doi:10.4088/JCP.15r10003



73. Fusar-Poli P, Cappucciati M, Rutigliano G, et al. Towards a standard psychometric diagnostic interview for subjects at ultra high risk of psychosis: CAARMS versus SIPS. Psychiatry J. 2016;2016:7146341. doi:10.1155/2016/7146341
74. Fusar-Poli P, Schultze-Lutter F, Cappucciati M, et al. The dark side of the moon: meta-analytical impact of recruitment strategies on risk enrichment in the clinical high risk state for psychosis. Schizophr Bull. 2016;42(3):732-743. doi:10.1093/schbul/sbv162
75. Fusar-Poli P, Schultze-Lutter F, Addington J. Intensive community outreach for those at ultra high risk of psychosis: dilution, not solution. Lancet Psychiatry. 2016;3(1):18. doi:10.1016/S2215-0366(15)00491-5
76. Fusar-Poli P. Why ultra high risk criteria for psychosis prediction do not work well outside clinical samples and what to do about it. World Psychiatry. 2017;16(2):212-213. doi:10.1002/wps.20405
77. Fusar-Poli P, Cappucciati M, Rutigliano G, et al. At risk or not at risk? a meta-analysis of the prognostic accuracy of psychometric interviews for psychosis prediction. World Psychiatry. 2015;14(3):322-332. doi:10.1002/wps.20250
78. Abu-Akel A, Bousman C, Skafidas E, Pantelis C. Mind the prevalence rate: overestimating the clinical utility of psychiatric diagnostic classifiers. Psychol Med. 2018;48(8):1225-1227. doi:10.1017/S0033291718000673
79. Reilly BM, Evans AT. Translating clinical research into clinical practice: impact of using prediction rules to make decisions. Ann Intern Med. 2006;144(3):201-209. doi:10.7326/0003-4819-144-3-200602070-00009
80. Wallace E, Smith SM, Perera-Salazar R, et al; International Diagnostic and Prognosis Prediction (IDAPP) group. Framework for the impact analysis and implementation of clinical prediction rules (CPRs). BMC Med Inform Decis Mak. 2011;11:62. doi:10.1186/1472-6947-11-62
81. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330(7494):765. doi:10.1136/bmj.38398.500764.8F
82. James BC. Making it easy to do it right. N Engl J Med. 2001;345(13):991-993. doi:10.1056/NEJM200109273451311
83. Chekroud AM, Koutsouleris N. The perilous path from publication to practice. Mol Psychiatry. 2018;23(1):24-25. doi:10.1038/mp.2017.227
84. Wyatt J, Altman DG. Commentary: prognostic models: clinically useful or quickly forgotten? BMJ. 1995;311:1539-1541. doi:10.1136/bmj.311.7019.1539
85. Altman DG, Vergouwe Y, Royston P, Moons KG. Prognosis and prognostic research: validating a prognostic model. BMJ. 2009;338:b605. doi:10.1136/bmj.b605
86. Schmidt A, Cappucciati M, Radua J, et al. Improving prognostic accuracy in subjects at clinical high risk for psychosis: systematic review of predictive models and meta-analytical sequential testing simulation. Schizophr Bull. 2017;43(2):375-388.
87. Dwyer DB, Falkai P, Koutsouleris N. Machine learning approaches for clinical psychology and psychiatry. Annu Rev Clin Psychol. 2018;14:91-118. doi:10.1146/annurev-clinpsy-032816-045037
88. Fusar-Poli P, Meyer-Lindenberg A. Forty years of structural imaging in psychosis: promises and truth. Acta Psychiatr Scand. 2016;134(3):207-224. doi:10.1111/acps.12619
89. James G, Hastie T, Tibshirani R, Witten D. An Introduction to Statistical Learning: With Applications in R. New York, NY: Springer; 2013. doi:10.1007/978-1-4614-7138-7
90. Kapur S, Phillips AG, Insel TR. Why has it taken so long for biological psychiatry to develop clinical tests and what to do about it? Mol Psychiatry. 2012;17(12):1174-1179. doi:10.1038/mp.2012.105
91. Maj M. Why the clinical utility of diagnostic categories in psychiatry is intrinsically limited and how we can use new approaches to complement them. World Psychiatry. 2018;17(2):121-122. doi:10.1002/wps.20512
92. Solomon M. Making Medical Knowledge. Oxford, United Kingdom: Oxford University Press; 2015. doi:10.1093/acprof:oso/9780198732617.001.0001
93. Tonelli MR, Shirts BH. Knowledge for precision medicine: mechanistic reasoning and methodological pluralism. JAMA. 2017;318(17):1649-1650. doi:10.1001/jama.2017.11914
94. Fusar-Poli P, Nelson B, Valmaggia L, Yung AR, McGuire PK. Comorbid depressive and anxiety disorders in 509 individuals with an at-risk mental state: impact on psychopathology and transition to psychosis. Schizophr Bull. 2014;40(1):120-131. doi:10.1093/schbul/sbs136
95. Chekroud AM. Bigger data, harder questions: opportunities throughout mental health care. JAMA Psychiatry. 2017;74(12):1183-1184. doi:10.1001/jamapsychiatry.2017.3333

