100 Pandas Puzzles
Inspired by 100 Numpy exercises, here are 100* short puzzles for testing your knowledge
of pandas' power.
Since pandas is a large library with many different specialist features and functions, these exercises
focus mainly on the fundamentals of manipulating data (indexing, grouping, aggregating, cleaning),
making use of the core DataFrame and Series objects.
Many of the exercises here are straightforward in that the solutions require no more than a few lines
of code (in pandas or NumPy... don't go using pure Python or Cython!). Choosing the right methods
and following best practices is the underlying goal.
The exercises are loosely divided into sections. Each section has a difficulty rating; these ratings are
subjective, of course, but should be seen as a rough guide as to how inventive the required
solution is.
If you're just starting out with pandas and you are looking for some other resources, the official
documentation is very extensive. In particular, some good places to get a broader overview of pandas
are...
• 10 minutes to pandas
• pandas basics
• tutorials
• cookbook and idioms
* the list of exercises is not yet complete! Pull requests or suggestions for additional exercises,
corrections and improvements are welcomed.
Importing pandas
Difficulty: easy
1. Import pandas under the alias pd.
import pandas as pd
2. Print the version of pandas that has been imported.
pd.__version__
3. Print out all the version information of the libraries that are required by the pandas library.
pd.show_versions()
DataFrame basics
Difficulty: easy
import numpy as np
Consider the following Python dictionary data and Python list labels:
data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
        'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
        'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
        'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
(This is just some meaningless data I made up with the theme of animals and trips to a vet.)
4. Create a DataFrame df from this dictionary data which has the index labels.
df = pd.DataFrame(data, index=labels)
5. Display a summary of the basic information about this DataFrame and its data (hint: there is a
single method that can be called on the DataFrame).
df.info()
# ...or...
df.describe()
6. Return the first 3 rows of the DataFrame df.
df.iloc[:3]
# or equivalently
df.head(3)
7. Select just the 'animal' and 'age' columns from the DataFrame df.
df.loc[:, ['animal', 'age']]
# or
df[['animal', 'age']]
9. Select only the rows where the number of visits is greater than 3.
df[df['visits'] > 3]
10. Select the rows where the age is missing, i.e. it is NaN.
df[df['age'].isnull()]
11. Select the rows where the animal is a cat and the age is less than 3.
df[(df['animal'] == 'cat') & (df['age'] < 3)]
14. Calculate the sum of all visits in df (i.e. the total number of visits).
df['visits'].sum()
15. Calculate the mean age for each different animal in df.
df.groupby('animal')['age'].mean()
16. Append a new row 'k' to df with your choice of values for each column. Then delete that row to
return the original DataFrame.
df.loc['k'] = ['dog', 5.5, 2, 'no']  # values in column order: animal, age, visits, priority
df = df.drop('k')
19. The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of
boolean values: 'yes' should be True and 'no' should be False.
df['priority'] = df['priority'].map({'yes': True, 'no': False})
21. For each animal type and each number of visits, find the mean age. In other words, each row is
an animal, each column is a number of visits and the values are the mean ages (hint: use a pivot
table).
df.pivot_table(index='animal', columns='visits', values='age', aggfunc='mean')
DataFrames: beyond the basics
Difficulty: medium
The previous section was a tour through some basic but essential DataFrame operations. Below are
some ways that you might need to cut your data, but for which there is no single "out of the box"
method.
22. You have a DataFrame df with a column 'A' of integers. For example:
df = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7]})
How do you filter out rows which contain the same integer as the row immediately above? You
should be left with a column containing the values:
1, 2, 3, 4, 5, 6, 7
df.loc[df['A'].shift() != df['A']]
# Alternatively, using drop_duplicates (note: this removes *all* repeated values,
# not just consecutive ones; it happens to give the same result for this data):
df.drop_duplicates(subset='A')
23. Given a DataFrame of numeric values, say
df = pd.DataFrame(np.random.random(size=(5, 3)))  # a 5x3 frame of float values
how do you subtract the row mean from each element in the row?
df.sub(df.mean(axis=1), axis=0)
24. Suppose you have a DataFrame with 10 columns of real numbers, for example:
df = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))
Which column of numbers has the smallest sum? Return that column's label.
df.sum().idxmin()
25. How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are
duplicates)?
df = pd.DataFrame(np.random.randint(0, 2, size=(10, 3)))
len(df) - df.duplicated(keep=False).sum()
# or equivalently
len(df.drop_duplicates(keep=False))
26. In the cell below, you have a DataFrame df that consists of 10 columns of floating-point
numbers. Exactly 5 entries in each row are NaN values.
For each row of the DataFrame, find the column which contains the third NaN value.
nan = np.nan  # alias so the data below reads cleanly
data = [[0.04, nan, nan, 0.25, nan, 0.43, 0.71, 0.51, nan, nan],
[ nan, nan, nan, 0.04, 0.76, nan, nan, 0.67, 0.76, 0.16],
[ nan, nan, 0.5 , nan, 0.31, 0.4 , nan, nan, 0.24, 0.01],
[0.49, nan, nan, 0.62, 0.73, 0.26, 0.85, nan, nan, nan],
[ nan, nan, 0.41, nan, 0.05, nan, 0.61, nan, 0.48, 0.68]]
columns = list('abcdefghij')
df = pd.DataFrame(data, columns=columns)
(df.isnull().cumsum(axis=1) == 3).idxmax(axis=1)
27. A DataFrame has a column of groups 'grps' and a column of integer values 'vals':
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'),
'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})
For each group, find the sum of the three greatest values. You should end up with the answer as
follows:
grps
a 409
b 156
c 345
df.groupby('grps')['vals'].nlargest(3).sum(level=0)
28. The DataFrame df constructed below has two integer columns 'A' and 'B'. The values in 'A' are
between 1 and 100 (inclusive).
For each group of 10 consecutive integers in 'A' (i.e. (0, 10], (10, 20], ...), calculate the sum of the
corresponding values in column 'B'.
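A sketch of one approach (the construction of df here is an assumption, since any integers from 1 to
100 will do): bin the values of 'A' into the ten intervals with pd.cut, then group the 'B' values on
those bins and sum.
df = pd.DataFrame({'A': np.random.randint(1, 101, size=100),
                   'B': np.random.randint(1, 101, size=100)})
df.groupby(pd.cut(df['A'], np.arange(0, 101, 10)))['B'].sum()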
DataFrames: harder problems
Difficulty: hard
These problems are harder, but all are solvable using just the usual pandas/NumPy methods (and so
avoid using explicit for loops).
29. Consider a DataFrame df where there is an integer column 'X':
df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})
For each value, count the difference back to the previous zero (or the start of the Series,
whichever is closer). These values should therefore be
[1, 2, 0, 1, 2, 3, 4, 0, 1, 2]
Make this a new column 'Y'.
# http://stackoverflow.com/questions/30730981/how-to-count-distance-to-the-previous-zero-in-pandas-series/
# credit: Behzad Nouri
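That answer's approach, reconstructed here as a sketch, finds the index of each zero with NumPy and
measures the distance back to the nearest one (a sentinel of -1 stands in for the start of the Series):
izero = np.r_[-1, (df['X'] == 0).values.nonzero()[0]]  # indices of zeros, with sentinel
idx = np.arange(len(df))
df['Y'] = idx - izero[np.searchsorted(izero - 1, idx) - 1]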
Here's an alternative approach based on a cookbook recipe:
df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})
x = (df['X'] != 0).cumsum()
y = x != x.shift()
df['Y'] = y.groupby((y != y.shift()).cumsum()).cumsum()
30. Consider the DataFrame constructed below which contains rows and columns of numerical data.
Create a list of the column-row index locations of the 3 largest values in this DataFrame.
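For a concrete df to run against (this construction is an assumption; any grid of numbers works):
df = pd.DataFrame(np.random.randint(1, 101, size=(8, 8)))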
df.unstack().sort_values()[-3:].index.tolist()
# http://stackoverflow.com/questions/14941261/index-and-column-for-the-max-value-in-pandas-dataframe/
# credit: DSM
31. You are given the DataFrame below with a column of group IDs, 'grps', and a column of
corresponding integer values, 'vals'. Create a new column 'patched_vals' which contains the same
values as 'vals', except with any negative value replaced by the mean of its group:
    vals grps  patched_vals
0    -12    A          13.6
1     -7    B          28.0
2    -14    A          13.6
3      4    A           4.0
4     -7    A          13.6
5     28    B          28.0
6     -2    A          13.6
7     -1    A          13.6
8      8    A           8.0
9     -2    B          28.0
10    28    A          28.0
11    12    A          12.0
12    16    A          16.0
13   -24    A          13.6
14   -12    A          13.6
df = pd.DataFrame({"vals": np.random.RandomState(31).randint(-30, 30,
size=15),
"grps": np.random.RandomState(31).choice(["A", "B"],
15)})
def replace(group):
    mask = group < 0
    group[mask] = group[~mask].mean()
    return group

df['patched_vals'] = df.groupby(['grps'])['vals'].transform(replace)
# http://stackoverflow.com/questions/14760757/replacing-values-with-groupby-means/
# credit: unutbu
32. Implement a rolling mean over groups with window size 3, which ignores NaN values. For
example, consider the following DataFrame:
df = pd.DataFrame({'group': list('aabbabbbabab'),
                   'value': [1, 2, 3, np.nan, 2, 3, np.nan, 1, 7, 3, np.nan, 8]})
The goal is to compute the Series:
0     1.000000
1     1.500000
2     3.000000
3     3.000000
4     1.666667
5     3.000000
6     3.000000
7     2.000000
8     3.666667
9     2.000000
10    4.500000
11    4.000000
E.g. the first window of size three for group 'b' has values 3.0, NaN and 3.0 and occurs at row index 5.
Instead of being NaN, the value in the new column at this row index should be 3.0 (just the two non-NaN
values are used to compute the mean (3+3)/2).
# http://stackoverflow.com/questions/36988123/pandas-groupby-and-rolling-apply-ignoring-nans/
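One way to compute this (a sketch): take a rolling sum with the NaNs treated as zero, then divide by
a rolling count of the non-NaN values in each window.
g1 = df.groupby('group')['value']            # original values, NaNs intact
g2 = df.fillna(0).groupby('group')['value']  # NaNs replaced with 0 for the sum
s = g2.rolling(3, min_periods=1).sum() / g1.rolling(3, min_periods=1).count()
s.reset_index(level=0, drop=True).sort_index()  # drop the group level, restore row order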
Series and DatetimeIndex
Difficulty: easy/medium
pandas is fantastic for working with dates and times. These puzzles explore some of this
functionality.
33. Create a DatetimeIndex that contains each business day of 2015 and use it to index a Series of
random numbers. Let's call this Series s.
dti = pd.date_range(start='2015-01-01', end='2015-12-31', freq='B')
s = pd.Series(np.random.rand(len(dti)), index=dti)
s
36. For each group of four consecutive calendar months in s, find the date on which the highest
value occurred.
s.groupby(pd.Grouper(freq='4M')).idxmax()
37. Create a DatetimeIndex consisting of the third Thursday in each month for the years 2015 and
2016.
pd.date_range('2015-01-01', '2016-12-31', freq='WOM-3THU')
Cleaning Data
Difficulty: easy/medium
It happens all the time: someone gives you data containing malformed strings, Python lists and
missing data. How do you tidy it up so you can get on with the analysis?
(It's some flight data I made up; it's not meant to be accurate in any way.)
38. Some values in the FlightNumber column are missing (they are NaN). These numbers are
meant to increase by 10 with each row so 10055 and 10075 need to be put in place. Modify df to fill
in these missing numbers and make the column an integer column (instead of a float column).
df = pd.DataFrame({'From_To': ['LoNDon_paris', 'MAdrid_miLAN', 'londON_StockhOlm',
                               'Budapest_PaRis', 'Brussels_londOn'],
                   'FlightNumber': [10045, np.nan, 10065, np.nan, 10085],
                   'RecentDelays': [[23, 47], [], [24, 43, 87], [13], [67, 32]],
                   'Airline': ['KLM(!)', '<Air France> (12)', '(British Airways. )',
                               '12. Air France', '"Swiss Air"']})
df['FlightNumber'] = df['FlightNumber'].interpolate().astype(int)
df
39. The From_To column would be better as two separate columns! Split each string on the
underscore delimiter _ to give a new temporary DataFrame called 'temp' with the correct values.
Assign the correct column names 'From' and 'To' to this temporary DataFrame.
temp = df.From_To.str.split('_', expand=True)
temp.columns = ['From', 'To']
temp
40. Notice how the capitalisation of the city names is all mixed up in this temporary DataFrame
'temp'. Standardise the strings so that only the first letter is uppercase (e.g. "londON" should become
"London".)
temp['From'] = temp['From'].str.capitalize()
temp['To'] = temp['To'].str.capitalize()
temp
41. Delete the From_To column from df and attach the temporary DataFrame 'temp' from the
previous questions.
df = df.drop('From_To', axis=1)
df = df.join(temp)
df
42. In the Airline column, you can see some extra punctuation and symbols have appeared around
the airline names. Pull out just the airline name. E.g. '(British Airways. )' should
become 'British Airways'.
df['Airline'] = df['Airline'].str.extract(r'([a-zA-Z\s]+)', expand=False).str.strip()
# note: using .strip() gets rid of any leading/trailing spaces
df
43. In the RecentDelays column, the values have been entered into the DataFrame as a list. We
would like each first value in its own column, each second value in its own column, and so on. If there
isn't an Nth value, the value should be NaN.
Expand the Series of lists into a new DataFrame named 'delays', rename the columns 'delay_1',
'delay_2', etc. and replace the unwanted RecentDelays column in df with 'delays'.
# there are several ways to do this, but the following approach is possibly the simplest
delays = df['RecentDelays'].apply(pd.Series)
delays.columns = ['delay_{}'.format(n) for n in range(1, len(delays.columns) + 1)]
df = df.drop('RecentDelays', axis=1).join(delays)
df
Using MultiIndexes
Difficulty: medium
Previous exercises have seen us analysing data from DataFrames equipped with a single index level.
However, pandas also gives you the possibility of indexing your data using multiple levels. This is very
much like adding new dimensions to a Series or a DataFrame. For example, a Series is 1D, but by
using a MultiIndex with 2 levels we gain much of the same functionality as a 2D DataFrame.
The set of puzzles below explores how you might use multiple index levels to enhance data analysis.
To warm up, we'll make a Series with two index levels.
44. Given the lists letters = ['A', 'B', 'C'] and numbers = list(range(10)), construct a
MultiIndex object from the product of the two lists. Use it to index a Series of random numbers. Call
this Series s.
letters = ['A', 'B', 'C']
numbers = list(range(10))
mi = pd.MultiIndex.from_product([letters, numbers])
s = pd.Series(np.random.rand(30), index=mi)
s
45. Check the index of s is lexicographically sorted (this is a necessary property for indexing to work
correctly with a MultiIndex).
s.index.is_lexsorted()
# or more verbosely...
s.index.lexsort_depth == s.index.nlevels
46. Select the labels 1, 3 and 6 from the second level of the MultiIndexed Series.
s.loc[:, [1, 3, 6]]
47. Slice the Series s; slice up to label 'B' for the first level and from label 5 onwards for the second
level.
s.loc[pd.IndexSlice[:'B', 5:]]
48. Sum the values in s for each label in the first level (you should have a Series giving you a total for
labels A, B and C).
s.sum(level=0)
49. Suppose that sum() (and other methods) did not accept a level keyword argument. How else
could you perform the equivalent of s.sum(level=1)?
# One way is to use .unstack()...
# This method should convince you that s is essentially just a regular DataFrame in disguise!
s.unstack().sum(axis=0)
50. Exchange the levels of the MultiIndex so we have an index of the form (letters, numbers). Is this
new Series properly lexsorted? If not, sort it.
new_s = s.swaplevel(0, 1)
if not new_s.index.is_lexsorted():
    new_s = new_s.sort_index()
new_s
Minesweeper
Difficulty: medium to hard
If you've ever used an older version of Windows, there's a good chance you've played with
Minesweeper:
• https://en.wikipedia.org/wiki/Minesweeper_(video_game)
If you're not familiar with the game, imagine a grid of squares: some of these squares conceal a
mine. If you click on a mine, you lose instantly. If you click on a safe square, you reveal a number
telling you how many mines are found in the squares that are immediately adjacent. The aim of the
game is to uncover all squares in the grid that do not contain a mine.
In this section, we'll make a DataFrame that contains the necessary data for a game of Minesweeper:
coordinates of the squares, whether the square contains a mine and the number of mines found on
adjacent squares.
51. Let's suppose we're playing Minesweeper on a 5 by 4 grid, i.e.
X = 5
Y = 4
To begin, generate a DataFrame df with two columns, 'x' and 'y', containing every coordinate for
this grid. That is, the DataFrame should start:
x y
0 0 0
1 0 1
2 0 2
...
X = 5
Y = 4
# note: cartesian_product is a private pandas helper; itertools.product would work too
p = pd.core.reshape.util.cartesian_product([np.arange(X), np.arange(Y)])
df = pd.DataFrame(np.asarray(p).T, columns=['x', 'y'])
df
52. For this DataFrame df, create a new column of zeros (safe) and ones (mine). The probability of a
mine occurring at each location should be 0.4.
# One way is to draw samples from a binomial distribution.
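A sketch of that approach: one Bernoulli draw per square, with p=0.4.
df['mine'] = np.random.binomial(1, 0.4, X * Y)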
53. Now create a new column for this DataFrame called 'adjacent'. This column should contain the
number of mines found on adjacent squares in the grid.
(E.g. for the first row, which is the entry for the coordinate (0, 0), count how many mines are found
on the coordinates (0, 1), (1, 0) and (1, 1).)
# Here is one way to solve using merges.
# It's not necessarily the optimal way, just
# the solution I thought of first...
df['adjacent'] = \
df.merge(df + [ 1, 1, 0], on=['x', 'y'], how='left')\
.merge(df + [ 1, -1, 0], on=['x', 'y'], how='left')\
.merge(df + [-1, 1, 0], on=['x', 'y'], how='left')\
.merge(df + [-1, -1, 0], on=['x', 'y'], how='left')\
.merge(df + [ 1, 0, 0], on=['x', 'y'], how='left')\
.merge(df + [-1, 0, 0], on=['x', 'y'], how='left')\
.merge(df + [ 0, 1, 0], on=['x', 'y'], how='left')\
.merge(df + [ 0, -1, 0], on=['x', 'y'], how='left')\
.iloc[:, 3:]\
.sum(axis=1)
54. For rows of the DataFrame that contain a mine, set the value in the 'adjacent' column to NaN.
df.loc[df['mine'] == 1, 'adjacent'] = np.nan
55. Finally, convert the DataFrame to a grid of the adjacent mine counts: columns are the x coordinate,
rows are the y coordinate.
df.drop('mine', axis=1).set_index(['y', 'x']).unstack()
Plotting
Difficulty: medium
To really get a good understanding of the data contained in your DataFrame, it is often essential to
create plots: if you're lucky, trends and anomalies will jump right out at you. This functionality is
baked into pandas and the puzzles below explore some of what's possible with the library.
56. Pandas is highly integrated with the plotting library matplotlib, and makes plotting DataFrames
very user-friendly! Plotting in a notebook environment usually makes use of the following
boilerplate:
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
%matplotlib inline tells the notebook to show plots inline, instead of creating them in a separate
window.
plt.style.use('ggplot') is a style theme that most people find agreeable, based upon the styling
of R's ggplot package.
For starters, make a scatter plot of this random data, but use black X's instead of the default markers.
df = pd.DataFrame({"xs":[1,5,2,8,1], "ys":[4,2,1,9,6]})
df = pd.DataFrame({"xs":[1,5,2,8,1], "ys":[4,2,1,9,6]})
57. Columns in your DataFrame can also be used to modify colors and sizes. Bill has been keeping
track of his performance at work over time, as well as how good he was feeling that day, and
whether he had a cup of coffee in the morning. Make a plot which incorporates all four features of
this DataFrame.
(Hint: If you're having trouble seeing the plot, try multiplying the Series which you choose to
represent size by 10 or more)
The chart doesn't have to be pretty: this isn't a course in data viz!
df = pd.DataFrame({"productivity":[5,2,3,1,4,5,6,7,8,3,4,8,9],
"hours_in" :[1,9,6,5,3,9,2,9,1,7,4,2,2],
"happiness" :[2,1,3,2,3,1,2,3,1,2,2,1,3],
"caffienated" :[0,0,1,1,0,0,0,0,1,1,0,1,0]})
df = pd.DataFrame({"productivity":[5,2,3,1,4,5,6,7,8,3,4,8,9],
"hours_in" :[1,9,6,5,3,9,2,9,1,7,4,2,2],
"happiness" :[2,1,3,2,3,1,2,3,1,2,2,1,3],
"caffienated" :[0,0,1,1,0,0,0,0,1,1,0,1,0]})
58. What if we want to plot multiple things? Pandas allows you to pass in a matplotlib Axes object for
plots, and plots will also return an Axes object.
Make a bar plot of monthly revenue with a line plot of monthly advertising spending (numbers in
millions)
df = pd.DataFrame({"revenue":[57,68,63,71,72,90,80,62,59,51,47,52],
"advertising":[2.1,1.9,2.7,3.0,3.6,3.2,2.7,2.4,1.8,1.6,1.3,1.9],
"month":range(12)
})
df = pd.DataFrame({"revenue":[57,68,63,71,72,90,80,62,59,51,47,52],
"advertising":[2.1,1.9,2.7,3.0,3.6,3.2,2.7,2.4,1.8,1.6,1.3,1.9],
"month":range(12)
})
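A sketch: draw the bar plot first, capture the Axes it returns, and pass that to the line plot, putting
advertising on a secondary y-axis since its scale is so different.
ax = df.plot.bar("month", "revenue", color="green")
df.plot.line("month", "advertising", secondary_y=True, ax=ax)
ax.set_xlim((-1, 12))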
Now we're finally ready to create a candlestick chart, which is a very common tool used to analyze
stock price data. A candlestick chart shows the opening, closing, highest, and lowest price for a stock
during a time window. The color of the "candle" (the thick part of the bar) is green if the stock closed
above its opening price, or red if below.
This was initially designed to be a pandas plotting challenge, but it just so happens that this type of
plot is just not feasible using pandas' methods. If you are unfamiliar with matplotlib, we have
provided a function that will plot the chart for you so long as you can use pandas to get the data
into the correct format.
Your first step should be to get the data in the correct format using pandas' time-series grouping
function. We would like each candle to represent an hour's worth of data. You can write your own
aggregation function which returns the open/high/low/close, but pandas has a built-in which also
does this.
The below cell contains helper functions. Call day_stock_data() to generate a DataFrame
containing the prices a hypothetical stock sold for, and the time the sale occurred.
Call plot_candlestick(df) on your properly aggregated and formatted stock data to print the
candlestick chart.
# This function is designed to create semi-interesting random stock price data
import numpy as np
def float_to_time(x):
    return (str(int(x)) + ":" + str(int(x % 1 * 60)).zfill(2) + ":"
            + str(int(x * 60 % 1 * 60)).zfill(2))
def day_stock_data():
    # NYSE is open from 9:30 to 4:00
    time = 9.5
    price = 100
    results = [(float_to_time(time), price)]
    while time < 16:
        elapsed = np.random.exponential(.001)
        time += elapsed
        if time > 16:
            break
        price_diff = np.random.uniform(.999, 1.001)
        price *= price_diff
        results.append((float_to_time(time), price))
    df = pd.DataFrame(results, columns=['time', 'price'])
    df.time = pd.to_datetime(df.time)  # needed so we can resample by the hour later
    return df
def plot_candlestick(agg):
    fig, ax = plt.subplots()
    for time in agg.index:
        ax.plot([time.hour] * 2, agg.loc[time, ["high", "low"]].values,
                color="black")
        ax.plot([time.hour] * 2, agg.loc[time, ["open", "close"]].values,
                color=agg.loc[time, "color"], linewidth=10)
    ax.set_xlim((8, 16))
    ax.set_ylabel("Price")
    ax.set_xlabel("Hour")
    ax.set_title("OHLC of Stock Value During Trading Day")
    plt.show()
59. Generate a day's worth of random stock data, and aggregate/reformat it so that it has hourly
summaries of the opening, highest, lowest, and closing prices.
df = day_stock_data()
df.head()
df.set_index("time", inplace = True)
agg = df.resample("H").ohlc()
agg.columns = agg.columns.droplevel()
agg["color"] = (agg.close > agg.open).map({True:"green",False:"red"})
agg.head()
60. Now that you have your properly-formatted data, try to plot it yourself as a candlestick chart. Use
the plot_candlestick(df) function above, or matplotlib's plot documentation if you get stuck.
plot_candlestick(agg)