Using MapReduce to calculate Wikipedia page rank; preventing dead-ends and spider-traps
Updated Aug 29, 2017 - Java
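The dead-end and spider-trap handling mentioned above can be sketched without Hadoop. This is a minimal plain-Java power-iteration step (all names are illustrative, not the repo's code): the damping factor teleports a random surfer with probability 1 − β, which breaks spider-traps, and rank mass lost at dead-ends (pages with no out-links) is redistributed uniformly so the ranks always sum to 1.

```java
import java.util.Arrays;

// One PageRank power-iteration step with damping (illustrative sketch, no Hadoop).
public class PageRankSketch {
    // adj[i] lists the pages that page i links to.
    public static double[] iterate(int[][] adj, double[] rank, double beta) {
        int n = rank.length;
        double[] next = new double[n];
        for (int i = 0; i < n; i++) {
            if (adj[i].length == 0) continue;          // dead-end: no out-links to follow
            double share = beta * rank[i] / adj[i].length;
            for (int j : adj[i]) next[j] += share;     // "map": emit a rank share per out-link
        }
        // "reduce": spread teleport mass plus mass lost at dead-ends uniformly,
        // so the rank vector stays a probability distribution.
        double lost = 1.0 - Arrays.stream(next).sum();
        for (int i = 0; i < n; i++) next[i] += lost / n;
        return next;
    }

    public static void main(String[] args) {
        // Tiny 3-page graph: 0 -> 1, 1 -> 2, and page 2 is a dead-end.
        int[][] adj = { {1}, {2}, {} };
        double[] rank = { 1.0 / 3, 1.0 / 3, 1.0 / 3 };
        for (int it = 0; it < 50; it++) rank = iterate(adj, rank, 0.85);
        System.out.printf("%.4f %.4f %.4f%n", rank[0], rank[1], rank[2]);
    }
}
```

In a real Hadoop job the inner loop becomes the mapper's emitted `(target, share)` pairs and the summation becomes the reducer; the lost-mass correction is typically applied in a second pass or via a counter.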
CAB: power-Capping aware resource manager for Approximate Big data processing
Mirror of Apache SystemML (Incubating)
A project comparing HADOOP, SPARK, and HIVE on similar queries for distributed analysis of a CSV dataset of Amazon gourmet-food product reviews
● Performed sequential and parallel analyses of the Wikipedia page-view logs to study page-view trends and derive the total average page views per day, the top trending topics, etc.
Basic Hadoop Map-Reduce Programs
This repo is part of a Big Data Processing and Analytics course project consisting of statistical operations implemented with the Hadoop MapReduce framework.
Simple inverted indexing algorithm implemented with Hadoop
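The inverted-indexing idea behind that repo can be shown in a few lines of plain Java (names here are illustrative, not the repo's API): the "map" phase emits `(word, docId)` pairs and the "reduce" phase groups all document IDs for each word into a posting list.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.SortedSet;
import java.util.TreeMap;
import java.util.TreeSet;

// Illustrative inverted-index sketch mirroring the mapper/reducer split.
public class InvertedIndexSketch {
    public static Map<String, SortedSet<String>> build(Map<String, String> docs) {
        Map<String, SortedSet<String>> index = new TreeMap<>();
        for (Map.Entry<String, String> doc : docs.entrySet()) {       // "map" phase
            for (String word : doc.getValue().toLowerCase().split("\\W+")) {
                if (word.isEmpty()) continue;
                index.computeIfAbsent(word, w -> new TreeSet<>())     // "reduce" phase:
                     .add(doc.getKey());                              // group docIds by word
            }
        }
        return index;
    }

    public static void main(String[] args) {
        Map<String, String> docs = new LinkedHashMap<>();
        docs.put("d1", "hadoop stores data");
        docs.put("d2", "spark processes data");
        System.out.println(build(docs)); // e.g. data -> [d1, d2]
    }
}
```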
Implementation of the MapReduce PageRank algorithm using the Hadoop framework in Java (developed for Cloud Computing course)
In this task, we calculated the average temperature for each year in the given dataset, writing a MapReduce function over data stored in Hadoop HDFS.
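The per-year averaging step can be sketched without HDFS (the `"year,temperature"` line format is an assumption, not the task's actual schema): the map step keys each reading by year, and the reduce step averages each year's readings.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative per-year average sketch, mirroring a MapReduce sum/count combiner.
public class YearlyAvgSketch {
    public static Map<Integer, Double> average(List<String> lines) {
        Map<Integer, double[]> acc = new HashMap<>(); // year -> {sum, count}
        for (String line : lines) {                   // "map": key reading by year
            String[] f = line.split(",");
            int year = Integer.parseInt(f[0].trim());
            double temp = Double.parseDouble(f[1].trim());
            double[] a = acc.computeIfAbsent(year, y -> new double[2]);
            a[0] += temp;                             // running sum, as a combiner would keep
            a[1] += 1;                                // running count
        }
        Map<Integer, Double> out = new TreeMap<>();   // "reduce": sum / count per year
        acc.forEach((y, a) -> out.put(y, a[0] / a[1]));
        return out;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("1990,10.0", "1990,14.0", "1991,9.0");
        System.out.println(average(lines)); // {1990=12.0, 1991=9.0}
    }
}
```

Carrying `(sum, count)` pairs rather than raw averages is what makes the reduce step associative, so a Hadoop combiner can pre-aggregate safely.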
In this task, we wrote a MapReduce program that analyzes the sentiment of a keyword across a list of comments, again using Hadoop HDFS.
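A plain-Java sketch of that keyword-sentiment task (the word lists and scoring rule are assumptions, not the course's): the map step filters comments mentioning the keyword and scores each one by counting positive versus negative words, and the reduce step sums the scores.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;

// Illustrative keyword-sentiment sketch with assumed lexicons.
public class SentimentSketch {
    static final Set<String> POS = Set.of("good", "great", "love");
    static final Set<String> NEG = Set.of("bad", "slow", "hate");

    public static int score(List<String> comments, String keyword) {
        int total = 0;
        for (String c : comments) {
            String lc = c.toLowerCase();
            if (!lc.contains(keyword)) continue;      // "map": keep comments with the keyword
            for (String w : lc.split("\\W+")) {       // score one comment word by word
                if (POS.contains(w)) total++;
                if (NEG.contains(w)) total--;
            }
        }
        return total;                                 // "reduce": sum of per-comment scores
    }

    public static void main(String[] args) {
        List<String> comments = Arrays.asList(
            "Hadoop is great for batch jobs",
            "hadoop feels slow on small data",
            "Spark is fine");
        System.out.println(score(comments, "hadoop")); // (+1) + (-1) = 0
    }
}
```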