CSV Data¶

In this lesson, we'll review dictionary features and learn about the CSV file format. By the end of this lesson, students will be able to:

  • Identify the list of dictionaries corresponding to some CSV data.
  • Loop over a list of dictionaries (CSV rows) and access dictionary values (CSV columns).
In [1]:
import doctest

Review: Dictionary functions¶

Dictionaries, like lists, are also mutable data structures, so they provide functions to help store and retrieve elements (a brief sketch of these follows the list below).

  • d.pop(key) removes key from d and returns its value.
  • d.keys() returns a collection of all the keys in d.
  • d.values() returns a collection of all the values in d.
  • d.items() returns a collection of all (key, value) tuples in d.
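For instance, here is a minimal sketch of these methods on a small example dictionary (the dictionary is made up for illustration; expected results are shown as comments):

inventory = {"apples": 3, "pears": 5}
inventory.keys()        # dict_keys(['apples', 'pears'])
inventory.values()      # dict_values([3, 5])
inventory.items()       # dict_items([('apples', 3), ('pears', 5)])
inventory.pop("pears")  # returns 5 and removes "pears" from inventory
inventory               # {'apples': 3}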

There are different ways to loop over a dictionary.

In [2]:
dictionary = {"a": 1, "b": 2, "c": 3}
for key in dictionary:
    print(key, dictionary[key])
a 1
b 2
c 3
In [6]:
for k,v in dictionary.items():
    print(k, v)
a 1
b 2
c 3
In [7]:
dictionary.pop("a")
Out[7]:
1
In [8]:
dictionary
Out[8]:
{'b': 2, 'c': 3}

None in Python¶

In the lesson on File Processing, we saw a function to count the occurrences of each token in a file as a dict where the keys are words and the values are counts.

Let's debug the following function most_frequent that takes a dictionary as input and returns the word with the highest count. If the input were a list, we could take the zeroth element as our initial maximum and loop over the remaining values by slicing the list, but it's harder to do this with a dictionary since we can't index or slice it by position.

Python has a special None keyword, like null in Java, that represents a placeholder value.
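As a quick aside, the idiomatic way to test for None is with the is operator; a small sketch:

word = None
word is None        # True
word = "eggs"
word is None        # False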

In [10]:
def most_frequent(counts):
    """
    Returns the token in the given dictionary with the highest count, or None if empty.

    >>> most_frequent({"green": 2, "eggs": 6, "and": 3, "yam": 2})
    'eggs'
    >>> most_frequent({}) # None is not displayed as output

    """
    max_word = None
    for word in counts:
        if max_word is None or counts[word] > counts[max_word]:
            max_word = word
    return max_word


doctest.run_docstring_examples(most_frequent, globals())

Loop unpacking¶

When we need both keys and values, we can unpack each key-value pair while looping over dictionary.items().

In [16]:
dictionary = {"a": 1, "b": 2, "c": 3}
for key, value in dictionary.items():
    print(key, value)
a 1
b 2
c 3

Loop unpacking is not only useful for dictionaries, but also for looping over other sequences of pairs, such as those produced by enumerate and zip. enumerate is a built-in function that takes a sequence and returns another sequence of pairs, where each pair contains an element's index and the element's value.

In [17]:
with open("poem.txt") as f:
    for i, line in enumerate(f.readlines()):
        print(i, line[:-1])
0 she sells
1 sea
2 shells by
3 the sea shore
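enumerate also accepts an optional start argument if we want the numbering to begin somewhere other than 0; a small sketch (the word list is made up for illustration):

for i, word in enumerate(["she", "sells", "sea", "shells"], start=1):
    print(i, word)
# 1 she
# 2 sells
# 3 sea
# 4 shells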

zip is another built-in function that takes one or more sequences and returns a sequence of tuples consisting of the first element from each given sequence, the second element from each given sequence, etc. If the sequences are not all the same length, zip stops after yielding all elements from the shortest sequence.

In [20]:
arabic_nums = [  1,    2,     3,    4,   5]
alpha_nums  = ["a",  "b",   "c",  "d", "e"]
roman_nums  = ["i", "ii", "iii", "iv", "v"]

for arabic, alpha, roman in zip(arabic_nums, alpha_nums, roman_nums):
    print(arabic, alpha, roman)
1 a i
2 b ii
3 c iii
4 d iv
5 e v
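If the sequences have different lengths, zip simply stops at the shortest one; a minimal sketch:

short = ["a", "b", "c"]
long = [1, 2, 3, 4, 5]
for letter, number in zip(short, long):
    print(letter, number)
# a 1
# b 2
# c 3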

Comma-separated values¶

In data science, we often work with tabular data such as the following table representing the names and hours of some of our TAs.

Name     Hours
Diana    10
Thrisha  15
Yuxiang  20
Sheamin  12

A table has two main components:

  • Rows corresponding to each entry, such as each individual TA.
  • Columns corresponding to (required or optional) fields for each entry, such as TA name and TA hours.

A comma-separated values (CSV) file is a particular way of representing a table using only plain text. Here is the corresponding CSV file for the above table: each row appears on its own line, and the columns within a row are separated by a single comma (,).

Name,Hours
Diana,10
Thrisha,15
Yuxiang,20
Sheamin,12
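To connect this text format to Python values, here is a minimal sketch that parses such a file by hand, assuming the text above is saved in a (hypothetical) file named tas.csv; notice how zip pairs each header with the corresponding value in a row:

rows = []
with open("tas.csv") as f:
    lines = f.read().splitlines()
    header = lines[0].split(",")       # ['Name', 'Hours']
    for line in lines[1:]:
        values = line.split(",")       # e.g. ['Diana', '10']
        rows.append(dict(zip(header, values)))
rows  # [{'Name': 'Diana', 'Hours': '10'}, {'Name': 'Thrisha', 'Hours': '15'}, ...]

Note that everything parsed this way is a string; we'd have to call int on the hours before doing arithmetic.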

We'll learn a couple of ways of processing CSV data in this course, the first of which is representing the data as a list of dictionaries.

In [21]:
staff = [
    {"Name": "Yuxiang", "Hours": 20},
    {"Name": "Thrisha", "Hours": 15},
    {"Name": "Diana", "Hours": 10},
    {"Name": "Sheamin", "Hours": 12},
]
staff
Out[21]:
[{'Name': 'Yuxiang', 'Hours': 20},
 {'Name': 'Thrisha', 'Hours': 15},
 {'Name': 'Diana', 'Hours': 10},
 {'Name': 'Sheamin', 'Hours': 12}]

To see the total number of TA hours available, we can loop over the list of dictionaries and sum the "Hours" value.

In [22]:
total_hours = 0
for ta in staff:
    total_hours += ta["Hours"]
total_hours
Out[22]:
57
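Equivalently, since each "Hours" value is already an int, the built-in sum can compute the same total in a single expression:

sum(ta["Hours"] for ta in staff)  # 57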

What are some different ways to get the value of Thrisha's hours?

In [23]:
for ta in staff:
    if ta["Name"] == "Thrisha":
        print(ta["Hours"])
15

Poll Question: select the option that correctly retrieves Thrisha's hours.

In [ ]:
staff[1]["Hours"]
staff["Hours"][1]
staff["Thrisha"]["Hours"]
staff["Hours"]["Thrisha"]
In [24]:
staff[1]["Hours"]
Out[24]:
15

Reading CSV files using Python's built-in csv package¶

Suppose we have a dataset of earthquakes around the world stored in the CSV file earthquakes.csv.

In [3]:
import csv
In [27]:
earthquakes = []
with open("earthquakes.csv") as f:
    reader = csv.DictReader(f)
    for row in reader:
        earthquakes.append(row)
earthquakes[:5]
Out[27]:
[{'id': 'nc72666881',
  'year': '2016',
  'month': '7',
  'day': '27',
  'latitude': '37.6723333',
  'longitude': '-121.619',
  'name': 'California',
  'magnitude': '1.43'},
 {'id': 'us20006i0y',
  'year': '2016',
  'month': '7',
  'day': '27',
  'latitude': '21.5146',
  'longitude': '94.5721',
  'name': 'Burma',
  'magnitude': '4.9'},
 {'id': 'nc72666891',
  'year': '2016',
  'month': '7',
  'day': '27',
  'latitude': '37.5765',
  'longitude': '-118.85916670000002',
  'name': 'California',
  'magnitude': '0.06'},
 {'id': 'nc72666896',
  'year': '2016',
  'month': '7',
  'day': '27',
  'latitude': '37.5958333',
  'longitude': '-118.99483329999998',
  'name': 'California',
  'magnitude': '0.4'},
 {'id': 'nn00553447',
  'year': '2016',
  'month': '7',
  'day': '27',
  'latitude': '39.3775',
  'longitude': '-119.845',
  'name': 'Nevada',
  'magnitude': '0.3'}]
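Notice in the output above that csv.DictReader reads every value as a string, even the numeric columns, so we need to convert explicitly before comparing or doing arithmetic. For example:

float(earthquakes[0]["magnitude"])  # 1.43
int(earthquakes[0]["year"])         # 2016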

The csv module also provides csv.DictWriter for writing data out; you can use the following methods to write rows into a CSV file (a short sketch follows below):

  • writeheader(): Write a row with the field names (as specified in the constructor) to the writer’s file object.
  • writerow(row) or writerows(rows): Write the row/rows parameter to the writer’s file object.

Here, row is a dictionary and rows is a list of dictionaries.
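As a rough sketch of how these pieces fit together, using the csv module imported earlier (the output filename and rows here are made up for illustration):

columns = ["Name", "Hours"]
with open("staff_hours.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=columns)
    writer.writeheader()                             # writes the line: Name,Hours
    writer.writerow({"Name": "Diana", "Hours": 10})  # writes the line: Diana,10
    writer.writerows([
        {"Name": "Thrisha", "Hours": 15},
        {"Name": "Yuxiang", "Hours": 20},
    ])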

Practice: Largest earthquake place¶

Write a function largest_earthquake_place that takes the path to a CSV file of earthquake data, represents the data as a list of dictionaries, and returns the name of the location that experienced the largest-magnitude earthquake. If there are no rows in the dataset (no data at all), return None.

id year month day latitude longitude name magnitude
nc72666881 2016 7 27 37.672 -121.619 California 1.43
us20006i0y 2016 7 27 21.515 94.572 Burma 4.9
nc72666891 2016 7 27 37.577 -118.859 California 0.06
nc72666896 2016 7 27 37.596 -118.995 California 0.4
nn00553447 2016 7 27 39.378 -119.845 Nevada 0.3

For example, considering only the data shown above, the result would be "Burma" because it had the earthquake with the largest magnitude (4.9).

In [30]:
def largest_earthquake_place(path):
    """
    Returns the name of the place with the largest-magnitude earthquake in the specified CSV file.

    >>> largest_earthquake_place("earthquakes.csv")
    'Northern Mariana Islands'
    """
    earthquakes = []
    with open(path) as f:
        reader = csv.DictReader(f)
        for row in reader:
            earthquakes.append(row)
    largest_earthquake = None
    # Other loop styles would also work here, for example:
    #   for i in range(len(earthquakes)): earthquake = earthquakes[i]
    #   for i, earthquake in enumerate(earthquakes):
    for earthquake in earthquakes:
        # DictReader stores magnitudes as strings, so compare them as floats
        if largest_earthquake is None or float(earthquake["magnitude"]) > float(largest_earthquake["magnitude"]):
            largest_earthquake = earthquake
    if largest_earthquake is None:  # no rows in the dataset
        return None
    return largest_earthquake["name"]

doctest.run_docstring_examples(largest_earthquake_place, globals())
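As an aside, the built-in max with a key function can express the same search more compactly; here is a sketch of an equivalent version (same behavior, including returning None for an empty dataset), again relying on the csv import above:

def largest_earthquake_place_max(path):
    with open(path) as f:
        earthquakes = list(csv.DictReader(f))
    if not earthquakes:
        return None
    # key converts each row's magnitude to float before comparing rows
    largest = max(earthquakes, key=lambda row: float(row["magnitude"]))
    return largest["name"]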

Let's see another solution using the pandas library.

In [31]:
import pandas as pd
In [32]:
def largest_earthquake_place_pandas(path):
    """
    Returns the name of the place with the largest-magnitude earthquake in the specified CSV file.

    >>> largest_earthquake_place_pandas("earthquakes.csv")
    'Northern Mariana Islands'
    """
    earthquakes = pd.read_csv(path)
    # idxmax() gives the row label of the largest magnitude; .loc looks up that row
    return earthquakes.loc[earthquakes["magnitude"].idxmax()]["name"]

doctest.run_docstring_examples(largest_earthquake_place_pandas, globals())
In [36]:
earthquakes = pd.read_csv("earthquakes.csv")
earthquakes.head()
Out[36]:
id year month day latitude longitude name magnitude
0 nc72666881 2016 7 27 37.672333 -121.619000 California 1.43
1 us20006i0y 2016 7 27 21.514600 94.572100 Burma 4.90
2 nc72666891 2016 7 27 37.576500 -118.859167 California 0.06
3 nc72666896 2016 7 27 37.595833 -118.994833 California 0.40
4 nn00553447 2016 7 27 39.377500 -119.845000 Nevada 0.30
In [37]:
type(earthquakes)
Out[37]:
pandas.core.frame.DataFrame
In [4]:
earthquakes = []
with open("earthquakes.csv") as f:
    reader = csv.DictReader(f)
    for row in reader:
        earthquakes.append(row)
In [5]:
for row in earthquakes:
    print(type(row))  # each row produced by csv.DictReader is a dict
    break
<class 'dict'>