
word-embeddings

Here are 18 public repositories matching this topic...

This repository contains notebooks for collecting and preprocessing a corpus of diary entries, and for experiments on building models that predict an author's gender and age group and the time period in which a text was written.

  • Updated Jun 12, 2024
  • Jupyter Notebook

In this notebook, we explore how to use pre-trained transformer models from the Hugging Face library for Natural Language Processing (NLP) tasks. Pre-trained models such as BERT (Bidirectional Encoder Representations from Transformers) are powerful tools that save us from training models from scratch.

  • Updated Dec 31, 2024
  • Jupyter Notebook
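The pattern this notebook describes can be sketched in a few lines. This is a minimal illustration, not code from the repository itself: it assumes the `transformers` library is installed and uses the `pipeline` API with the public `bert-base-uncased` checkpoint (downloaded on first use).

```python
# Sketch: loading a pre-trained BERT model via the Hugging Face pipeline API
# instead of training from scratch. Assumes `pip install transformers` and
# network access for the first model download.
from transformers import pipeline

# "fill-mask" is BERT's native pre-training task: predict the [MASK] token.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for pred in fill_mask("The capital of France is [MASK]."):
    # Each prediction is a dict with "token_str" and a probability "score".
    print(pred["token_str"], round(pred["score"], 3))
```

The same `pipeline` factory exposes other tasks (e.g. `"sentiment-analysis"`, `"ner"`), which is why pre-trained checkpoints cover so many NLP use cases without retraining.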

A comprehensive set of Jupyter notebooks that take you from NLP fundamentals to advanced techniques. Covers text preprocessing, POS tagging, NER, sentiment analysis (with VADER), text classification, word embeddings, and transformer models like BERT. Built with real-world datasets using NLTK, spaCy, scikit-learn, and Hugging Face Transformers.

  • Updated Oct 11, 2025
  • Python
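Of the topics this collection covers, word embeddings are the one the page is about, and the core idea fits in a short sketch: words become vectors, and semantic relatedness becomes cosine similarity between them. The vectors below are hypothetical toy values for illustration, not trained embeddings.

```python
# Toy illustration of the operation behind word embeddings: semantically
# related words should have a higher cosine similarity than unrelated ones.
# The 3-d vectors here are made up for the example; real embeddings
# (word2vec, GloVe, BERT) are learned from corpora and much larger.
import numpy as np

embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.75, 0.70, 0.15]),
    "apple": np.array([0.10, 0.05, 0.90]),
}

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    # Dot product normalized by the vector lengths; 1.0 = same direction.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
print(sim_royal, sim_fruit)  # the related pair scores higher
```

Libraries such as spaCy and scikit-learn (both used in the notebooks above) provide the same computation over real trained vectors.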

IPython notebooks for solving problems such as classification, segmentation, and generation, using the latest deep learning algorithms on various publicly available text and image datasets.

  • Updated Aug 9, 2019
  • Jupyter Notebook
