# model-evaluation-metrics

Here are 8 public repositories matching this topic...

A credit risk text classification pipeline designed to simulate real-world modeling workflows. This project uses financial text data to predict borrower risk, incorporating data cleaning, NLP preprocessing, and model evaluation—emphasizing skills in feature engineering, model pipeline structuring, and explainable machine learning.

  • Updated May 4, 2025
  • Python
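The repository's actual code is not shown here; as a minimal sketch of the kind of pipeline this description outlines (TF-IDF feature engineering plus a linear classifier over financial text), assuming scikit-learn and using invented example texts and risk labels:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Invented stand-in data: a real project would load labeled financial text.
texts = [
    "borrower missed three consecutive payments",
    "late payment and rising debt burden",
    "account delinquent with high utilization",
    "stable income and on-time payment history",
    "low utilization and strong savings record",
    "consistent repayment with good credit standing",
]
labels = ["high_risk", "high_risk", "high_risk",
          "low_risk", "low_risk", "low_risk"]

# A Pipeline keeps preprocessing and the model as one fit/predict unit,
# which is the "model pipeline structuring" the description refers to.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # unigram + bigram features
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(texts, labels)

pred = pipeline.predict(["delinquent account with missed payments"])[0]
print(pred)
```

Bundling the vectorizer inside the pipeline avoids leaking test-set vocabulary into training, which matters when the evaluation step is added on top.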

🗣️ Speech Type Detection is a Flask app that classifies text into the categories "Hate Speech," "Offensive Language," or "No Hate or Offensive Language" with 87.3% accuracy. It offers a user-friendly interface for text input and prediction, using machine learning algorithms, and is intended to help manage inappropriate language online. 🌐🔍

  • Updated Jun 5, 2024
  • Python
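A headline figure like "87.3% accuracy" comes from comparing predictions against labeled test data; for a three-class problem like this one, per-class precision, recall, and F1 are usually reported alongside it. A self-contained sketch of those computations (the labels and predictions below are invented for illustration, not taken from the repo):

```python
LABELS = ["Hate Speech", "Offensive Language", "No Hate or Offensive Language"]

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def per_class_metrics(y_true, y_pred, label):
    """Precision, recall, and F1 for one class, one-vs-rest."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Invented evaluation data for illustration.
y_true = ["Hate Speech", "Offensive Language", "Offensive Language",
          "No Hate or Offensive Language", "Hate Speech"]
y_pred = ["Hate Speech", "Offensive Language", "Hate Speech",
          "No Hate or Offensive Language", "Hate Speech"]

print(f"accuracy = {accuracy(y_true, y_pred):.2f}")
for label in LABELS:
    p, r, f1 = per_class_metrics(y_true, y_pred, label)
    print(f"{label}: precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

With imbalanced classes (hate speech is typically rare), per-class recall is more informative than overall accuracy, since a classifier can score high accuracy while missing the minority class entirely.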

A machine learning-based fake news detection system that classifies news articles as "FAKE" or "REAL" using Naive Bayes and Support Vector Machine (SVM) models. The project features a text preprocessing pipeline, model evaluation, and prediction capabilities, demonstrating practical accuracy and efficiency for real-world news verification.

  • Updated May 13, 2025
  • Python
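The description names Naive Bayes as one of the two models; to make the technique concrete, here is a from-scratch multinomial Naive Bayes with Laplace smoothing over bag-of-words counts. This is a generic sketch of the algorithm with invented headlines, not the repository's implementation:

```python
import math
from collections import Counter

class MultinomialNB:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)           # documents per class
        self.word_counts = {c: Counter() for c in self.class_counts}
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        words = text.lower().split()
        total_docs = sum(self.class_counts.values())
        best, best_score = None, -math.inf
        for c, n_docs in self.class_counts.items():
            # log prior + sum of smoothed log likelihoods
            score = math.log(n_docs / total_docs)
            total_words = sum(self.word_counts[c].values())
            for w in words:
                score += math.log((self.word_counts[c][w] + 1) /
                                  (total_words + len(self.vocab)))
            if score > best_score:
                best, best_score = c, score
        return best

# Invented example headlines.
texts = [
    "shocking miracle cure doctors hate this trick",
    "celebrity secretly replaced by clone insiders claim",
    "parliament passes budget after lengthy debate",
    "central bank holds interest rates steady this quarter",
]
labels = ["FAKE", "FAKE", "REAL", "REAL"]

model = MultinomialNB().fit(texts, labels)
print(model.predict("doctors hate this miracle trick"))
```

Working in log space avoids floating-point underflow from multiplying many small probabilities, and the add-one smoothing keeps unseen words from zeroing out a class's score.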
