An advanced hybrid recommendation system that combines collaborative filtering and content-based filtering approaches, enhanced with temporal awareness and contextual personalization
Spider - web crawler and local wordlist processor that generates frequency-sorted wordlists / n-grams
Go package of string similarity metrics and other string utility functions
Demonstrates, at very small scale, how a language model is trained (bigram language model).
Demonstrates, at very small scale, how a language model is trained (unigram language model).
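The unigram and bigram demos above all train the same way: count n-gram occurrences in a corpus, then normalize the counts into probabilities. A minimal sketch of that idea (function and variable names here are illustrative, not taken from any of the listed repos):

```python
from collections import Counter, defaultdict

def train_bigram_lm(text):
    """Count how often each word follows each other word, then
    normalize the counts into P(next_word | word)."""
    tokens = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return {
        prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
        for prev, nxts in counts.items()
    }

model = train_bigram_lm("the cat sat on the mat the cat ran")
# "the" is followed by "cat" twice and "mat" once,
# so P("cat" | "the") = 2/3 and P("mat" | "the") = 1/3
```

A unigram model is the degenerate case: a single Counter over tokens, normalized by the corpus length.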
A statistical 5-gram language model implemented in Python, trained on Sherlock Holmes stories by Arthur Conan Doyle (Project Gutenberg) to generate text in the author’s writing style.
[S]NLP ([Super] Natural Language Processing) notes. https://en.wikipedia.org/wiki/Natural_language_processing
A fluency-based evaluation tool for Chinese Grammatical Error Correction (CGEC)
A minimum viable Markov gibberish generator in 32 lines of Python, inspired by the legendary Mark V. Shaney program of the 1980s
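A Shaney-style generator maps each observed word prefix to the words that followed it, then random-walks that table. A compact sketch of the technique under those assumptions (not the listed repo's actual code):

```python
import random
from collections import defaultdict

def build_chain(words, order=2):
    """Map each (order)-word prefix to the list of words seen after it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, length=20, seed=0):
    """Random-walk the chain to emit Markov gibberish."""
    rng = random.Random(seed)
    prefix = rng.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:
            break  # dead end: prefix only ever appeared at the corpus tail
        out.append(rng.choice(followers))
    return " ".join(out)
```

Repeated prefixes get multiple followers, so the walk occasionally switches between source sentences, which is where the characteristic almost-coherent gibberish comes from.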
High-performance hybrid search engine for PHP (used as a submodule of https://github.com/umaarov/goat-dev)
Fuzzy / approximate string similarity metrics for PHP, JavaScript, Python
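One common approximate-similarity metric of the kind such libraries provide is Jaccard overlap of character n-gram sets. A minimal sketch (one possible metric, not the API of any library listed above):

```python
def char_ngrams(s, n=2):
    """Set of character n-grams of s, e.g. 'night' -> {'ni','ig','gh','ht'}."""
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def jaccard_similarity(a, b, n=2):
    """Jaccard overlap of character n-gram sets: |A & B| / |A | B|."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    if not ga and not gb:
        return 1.0  # two strings too short to produce n-grams
    return len(ga & gb) / len(ga | gb)

# "night" and "nacht" share only the bigram "ht" out of 7 distinct
# bigrams, so their similarity is 1/7
```

Because it compares sets rather than aligned positions, this metric tolerates transpositions and small edits better than exact matching, at the cost of ignoring n-gram order.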
SMS spam classification project (SVM, TF-IDF, n-grams)
Hybrid Hangman-solving agent combining Hidden Markov Models and Reinforcement Learning for intelligent, probabilistic word guessing.
3 AI projects: Search Algorithms (DFS, Bidirectional), LRTA* Pathfinding, and NLP Trigram Model in Python & Java
AI-powered Chrome Extension that autocompletes your emails in real time using an N-Gram Language Model — built with FastAPI and Python. Compose smarter and faster, locally and privately.
NLP Text Classification & Language Modeling: From-scratch implementations of Naive Bayes and Logistic Regression for emotion classification, plus N-gram models for text generation. Built with NumPy and compared against scikit-learn. Features custom evaluation metrics, confusion matrices, and comprehensive visualizations.
Built statistical N-gram language models from scratch to explore tokenization, training, probability modeling, and text generation; achieved a 97.6% perplexity improvement from unigram to 4-gram models.
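Perplexity, the metric behind the comparison above, is the exponentiated average negative log-probability a model assigns to held-out text; lower is better. A minimal sketch for the unigram case with add-alpha smoothing (illustrative only, not the listed repo's implementation):

```python
import math
from collections import Counter

def unigram_perplexity(train_tokens, test_tokens, alpha=1.0):
    """Perplexity of an add-alpha smoothed unigram model on held-out
    text: exp(-(1/N) * sum(log P(w)))."""
    counts = Counter(train_tokens)
    vocab = len(counts) + 1          # +1 bucket for unseen words
    total = len(train_tokens)
    log_prob = sum(
        math.log((counts[w] + alpha) / (total + alpha * vocab))
        for w in test_tokens
    )
    return math.exp(-log_prob / len(test_tokens))
```

Higher-order models are scored the same way, except each P(w) is conditioned on the preceding n-1 tokens, which is why moving from unigrams to 4-grams can shrink perplexity so dramatically.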