  1. Probability threshold in ROC curve analyses - Cross Validated

    Nov 11, 2023 · But the ROC curve is usually plotted by varying the cutoff value: for each cutoff, compute the false and true positive rates (that's how the graph above was made). Then, if you select a certain point on the ROC curve as the ideal operating point, you can simply look up which cutoff value/criterion produced that point ...

  2. How do you use the roc_curve() function in sklearn? - Zhihu

    The sklearn.metrics.roc_curve() function computes the receiver operating characteristic curve (ROC curve) for binary classification problems, along with the corresponding thresholds. The ROC curve is a classifier performance curve plotted with the false positive rate (FPR) on the horizontal axis and the true positive rate (TPR) on the vertical axis.
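
    The usage described above can be sketched with a minimal example (the labels and scores below are made up purely for illustration):

    ```python
    # Minimal sketch of sklearn.metrics.roc_curve on toy data.
    from sklearn.metrics import roc_curve, auc

    y_true = [0, 0, 1, 1]            # true binary labels
    y_score = [0.1, 0.4, 0.35, 0.8]  # classifier confidence scores

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    print(fpr)            # false positive rates, one per threshold
    print(tpr)            # true positive rates
    print(thresholds)     # the cutoff values that produced each point
    print(auc(fpr, tpr))  # area under the resulting curve
    ```

    Note that roc_curve returns one (FPR, TPR) pair per distinct threshold, so the thresholds array is what lets you map a chosen point on the curve back to a cutoff value.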

  3. How to determine the best model based on ROC curves

    Apr 21, 2020 · Evaluating the results through the corresponding metrics at each threshold yields a different point on the curve. The entire ROC curve of a classifier is obtained by repeating the above, that is, computing the TPR and FPR at different thresholds, until you get a line describing the general behaviour of the classifier.
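
    The repeat-over-thresholds procedure can be written out directly in plain Python (toy labels and scores, chosen only for illustration):

    ```python
    # Manual ROC construction: sweep the decision threshold and record
    # the (FPR, TPR) pair produced at each cutoff.
    def roc_points(y_true, y_score):
        pos = sum(y_true)
        neg = len(y_true) - pos
        points = [(0.0, 0.0)]  # threshold above every score: nothing predicted positive
        # Use each distinct score as a cutoff, from highest to lowest.
        for t in sorted(set(y_score), reverse=True):
            tp = sum(1 for y, s in zip(y_true, y_score) if s >= t and y == 1)
            fp = sum(1 for y, s in zip(y_true, y_score) if s >= t and y == 0)
            points.append((fp / neg, tp / pos))  # (FPR, TPR)
        return points

    print(roc_points([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))
    ```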

  4. regression - How to interpret a ROC curve? - Cross Validated

    Nov 30, 2014 · The area under the ROC-curve is a measure of the total discriminative performance of a two-class classifier, for any given prior probability distribution. Note that a specific classifier can perform really well in one part of the ROC-curve but show a poor discriminative ability in a different part of the ROC-curve.

  5. Why does my ROC curve look like this (is it correct?)

    Computing an ROC curve is done based on the ranking produced by your classifier (e.g. your logistic regression model). Use the model to predict every single test point once. You'll get a vector of confidence scores, let's call it $\mathbf{\hat{Y}}$. Using this vector you can produce the full ROC curve (or at least an estimate thereof).

  6. ROC and multiROC analysis: how to calculate optimal cutpoint?

    I'm trying to understand how to compute the optimal cut-point for a ROC curve (the value at which sensitivity and specificity are jointly maximized). I'm using the dataset aSAH from the package pROC. The outcome variable could be explained by two independent variables: s100b and ndka. Using the syntax of the Epi package, I've created two models:
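
    The criterion usually meant by "sensitivity and specificity are maximized" is Youden's J statistic (sensitivity + specificity - 1). A minimal Python sketch of that idea, on made-up data (this is an illustrative translation, not the pROC or Epi API, and the aSAH dataset is not reproduced here):

    ```python
    # Optimal cut-point via Youden's J = sensitivity + specificity - 1.
    def youden_cutpoint(y_true, y_score):
        pos = sum(y_true)
        neg = len(y_true) - pos
        best_t, best_j = None, -1.0
        for t in sorted(set(y_score)):
            tp = sum(1 for y, s in zip(y_true, y_score) if s >= t and y == 1)
            tn = sum(1 for y, s in zip(y_true, y_score) if s < t and y == 0)
            j = tp / pos + tn / neg - 1  # sensitivity + specificity - 1
            if j > best_j:
                best_t, best_j = t, j
        return best_t, best_j

    print(youden_cutpoint([0, 0, 1, 1, 1], [0.2, 0.6, 0.4, 0.7, 0.9]))
    ```

    Geometrically, the cut-point maximizing Youden's J is the ROC point farthest above the diagonal chance line.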

  7. ROC vs precision-and-recall curves - Cross Validated

    Now we see the problem with the ROC curve: for any search system, a recall (i.e. true positive rate) of 1 is reached for a very small false positive rate (before even 1% of negatives are misclassified as positive), and so the ROC curve (which plots recall against the false positive rate) almost immediately shoots up to 1.

  8. machine learning - How to determine the optimal threshold for a ...

    Nov 8, 2014 · What does a point on the ROC curve tell us? For example, if I take a point like (0.4, 0.8) (FPR, TPR) on my ROC curve, what does it tell us? Optimal classifier or optimal threshold for scoring

  9. Where in the ROC curve does it tell you what the threshold is?

    May 4, 2023 · The ROC curve tells them what the business implications of picking a given threshold are. If there is a threshold value that suits their needs, they will use the model. Determining the threshold involves quantifying several items that depend on the application.

  10. r - Understanding ROC curve - Cross Validated

    The further our ROC curve is above the line, the better. Area Under ROC. The area under the ROC curve (shaded) naturally shows how far the curve is from the baseline. For the baseline it's 0.5, and for a perfect classifier it's 1. You can read more about AUC ROC in this question: What does AUC stand for and what is it? Selecting the Best Threshold
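
    The AUC values quoted above (0.5 for the baseline, 1 for a perfect classifier) also have a rank interpretation: AUC is the probability that a randomly chosen positive is scored higher than a randomly chosen negative. A small sketch on made-up scores:

    ```python
    # AUC as a rank statistic: probability that a random positive outscores
    # a random negative (ties count as 1/2). A random classifier gives 0.5,
    # a perfect one gives 1.0.
    def auc_rank(y_true, y_score):
        pos = [s for y, s in zip(y_true, y_score) if y == 1]
        neg = [s for y, s in zip(y_true, y_score) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    print(auc_rank([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))
    ```

    On the same data, this rank-based value agrees with the trapezoidal area under the plotted ROC curve.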
