Filtering and Annotation Tutorial

Filter

You can filter the rows of a table with Table.filter. This returns a new table containing only the rows for which the expression evaluates to True.

[1]:
import hail as hl

hl.utils.get_movie_lens('data/')
users = hl.read_table('data/users.ht')
Initializing Spark and Hail with default parameters...
Running on Apache Spark version 2.4.0
SparkUI available at http://batch-357-job-46-3e1500:4040
Welcome to
     __  __     <>__
    / /_/ /__  __/ /
   / __  / _ `/ / /
  /_/ /_/\_,_/_/_/   version 0.2.26-425530d28d47
LOGGING: writing to /ka5dult9xbju/python/hail/docs/tutorials/hail-20191025-1706-0.2.26-425530d28d47.log
2019-10-25 17:06:04 Hail: INFO: Movie Lens files found!
[2]:
users.filter(users.occupation == 'programmer').count()
[2]:
66

We can also express this query in multiple ways using aggregations:

[3]:
users.aggregate(hl.agg.filter(users.occupation == 'programmer', hl.agg.count()))
[3]:
66
[4]:
users.aggregate(hl.agg.counter(users.occupation == 'programmer'))[True]
[4]:
66
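
Both aggregation forms count the same thing. The equivalence can be sketched in plain Python with made-up data (an analogy, not Hail code; the occupations list is hypothetical):

```python
from collections import Counter

# hypothetical occupation values standing in for the users table
occupations = ['programmer', 'writer', 'programmer', 'other', 'programmer']

# analogue of hl.agg.filter(pred, hl.agg.count()):
# count only the rows where the predicate holds
n_filter = sum(1 for occ in occupations if occ == 'programmer')

# analogue of hl.agg.counter(pred)[True]:
# tally the boolean predicate itself, then look up the True bucket
n_counter = Counter(occ == 'programmer' for occ in occupations)[True]

assert n_filter == n_counter == 3
```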

Annotate

You can add new fields to a table with annotate. As an example, let’s create a new field called cleaned_occupation that replaces entries in the occupation field labeled ‘other’ or ‘none’ with a missing value.

[5]:
missing_occupations = hl.set(['other', 'none'])

t = users.annotate(
    cleaned_occupation = hl.cond(missing_occupations.contains(users.occupation),
                                 hl.null('str'),
                                 users.occupation))
t.show()
+-------+-------+-----+-----------------+---------+--------------------+
|    id |   age | sex | occupation      | zipcode | cleaned_occupation |
+-------+-------+-----+-----------------+---------+--------------------+
| int32 | int32 | str | str             | str     | str                |
+-------+-------+-----+-----------------+---------+--------------------+
|     1 |    24 | "M" | "technician"    | "85711" | "technician"       |
|     2 |    53 | "F" | "other"         | "94043" | NA                 |
|     3 |    23 | "M" | "writer"        | "32067" | "writer"           |
|     4 |    24 | "M" | "technician"    | "43537" | "technician"       |
|     5 |    33 | "F" | "other"         | "15213" | NA                 |
|     6 |    42 | "M" | "executive"     | "98101" | "executive"        |
|     7 |    57 | "M" | "administrator" | "91344" | "administrator"    |
|     8 |    36 | "M" | "administrator" | "05201" | "administrator"    |
|     9 |    29 | "M" | "student"       | "01002" | "student"          |
|    10 |    53 | "M" | "lawyer"        | "90703" | "lawyer"           |
+-------+-------+-----+-----------------+---------+--------------------+

showing top 10 rows

Compare this to what we had before:

[6]:
users.show()
+-------+-------+-----+-----------------+---------+
|    id |   age | sex | occupation      | zipcode |
+-------+-------+-----+-----------------+---------+
| int32 | int32 | str | str             | str     |
+-------+-------+-----+-----------------+---------+
|     1 |    24 | "M" | "technician"    | "85711" |
|     2 |    53 | "F" | "other"         | "94043" |
|     3 |    23 | "M" | "writer"        | "32067" |
|     4 |    24 | "M" | "technician"    | "43537" |
|     5 |    33 | "F" | "other"         | "15213" |
|     6 |    42 | "M" | "executive"     | "98101" |
|     7 |    57 | "M" | "administrator" | "91344" |
|     8 |    36 | "M" | "administrator" | "05201" |
|     9 |    29 | "M" | "student"       | "01002" |
|    10 |    53 | "M" | "lawyer"        | "90703" |
+-------+-------+-----+-----------------+---------+

showing top 10 rows

Note: annotate is functional: it doesn’t mutate users, but returns a new table. This is also true of filter. In fact, all operations in Hail are functional.

[7]:
users.describe()
----------------------------------------
Global fields:
    None
----------------------------------------
Row fields:
    'id': int32
    'age': int32
    'sex': str
    'occupation': str
    'zipcode': str
----------------------------------------
Key: ['id']
----------------------------------------
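
The same discipline can be sketched in plain Python: a filter builds a new collection and leaves the original intact (an analogy, not Hail code; the rows below are hypothetical):

```python
# hypothetical rows standing in for the users table
users_py = [
    {'id': 1, 'occupation': 'technician'},
    {'id': 2, 'occupation': 'programmer'},
]

# analogue of users.filter(...): a new list is returned,
# and users_py itself is untouched
programmers = [u for u in users_py if u['occupation'] == 'programmer']

assert len(programmers) == 1
assert len(users_py) == 2   # the original is unchanged
```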

Select and Transmute

There are two other methods related to annotate: select and transmute. select allows you to create new tables from old ones by selecting existing fields, or by creating new ones.

First, let’s extract the sex and occupation fields:

[8]:
users.select(users.sex, users.occupation).show()
+-------+-----+-----------------+
|    id | sex | occupation      |
+-------+-----+-----------------+
| int32 | str | str             |
+-------+-----+-----------------+
|     1 | "M" | "technician"    |
|     2 | "F" | "other"         |
|     3 | "M" | "writer"        |
|     4 | "M" | "technician"    |
|     5 | "F" | "other"         |
|     6 | "M" | "executive"     |
|     7 | "M" | "administrator" |
|     8 | "M" | "administrator" |
|     9 | "M" | "student"       |
|    10 | "M" | "lawyer"        |
+-------+-----+-----------------+

showing top 10 rows

We can also create a new field that stores the age relative to the average. Note that new fields must be assigned a name (in this case mean_shifted_age):

[9]:
mean_age = round(users.aggregate(hl.agg.stats(users.age)).mean)
users.select(users.sex, users.occupation, mean_shifted_age = users.age - mean_age).show()
+-------+-----+-----------------+------------------+
|    id | sex | occupation      | mean_shifted_age |
+-------+-----+-----------------+------------------+
| int32 | str | str             |            int32 |
+-------+-----+-----------------+------------------+
|     1 | "M" | "technician"    |              -10 |
|     2 | "F" | "other"         |               19 |
|     3 | "M" | "writer"        |              -11 |
|     4 | "M" | "technician"    |              -10 |
|     5 | "F" | "other"         |               -1 |
|     6 | "M" | "executive"     |                8 |
|     7 | "M" | "administrator" |               23 |
|     8 | "M" | "administrator" |                2 |
|     9 | "M" | "student"       |               -5 |
|    10 | "M" | "lawyer"        |               19 |
+-------+-----+-----------------+------------------+

showing top 10 rows

transmute replaces any fields mentioned on the right-hand side with the new fields, but leaves unmentioned fields unchanged. transmute is useful for transforming data into a new form. Compare the following two snippets of code; the second is identical to the first, with transmute replacing select.

[10]:
missing_occupations = hl.set(['other', 'none'])

t = users.select(
    cleaned_occupation = hl.cond(missing_occupations.contains(users.occupation),
                                 hl.null('str'),
                                 users.occupation))
t.show()
+-------+--------------------+
|    id | cleaned_occupation |
+-------+--------------------+
| int32 | str                |
+-------+--------------------+
|     1 | "technician"       |
|     2 | NA                 |
|     3 | "writer"           |
|     4 | "technician"       |
|     5 | NA                 |
|     6 | "executive"        |
|     7 | "administrator"    |
|     8 | "administrator"    |
|     9 | "student"          |
|    10 | "lawyer"           |
+-------+--------------------+

showing top 10 rows

[11]:
missing_occupations = hl.set(['other', 'none'])

t = users.transmute(
    cleaned_occupation = hl.cond(missing_occupations.contains(users.occupation),
                                 hl.null('str'),
                                 users.occupation))
t.show()
+-------+-------+-----+---------+--------------------+
|    id |   age | sex | zipcode | cleaned_occupation |
+-------+-------+-----+---------+--------------------+
| int32 | int32 | str | str     | str                |
+-------+-------+-----+---------+--------------------+
|     1 |    24 | "M" | "85711" | "technician"       |
|     2 |    53 | "F" | "94043" | NA                 |
|     3 |    23 | "M" | "32067" | "writer"           |
|     4 |    24 | "M" | "43537" | "technician"       |
|     5 |    33 | "F" | "15213" | NA                 |
|     6 |    42 | "M" | "98101" | "executive"        |
|     7 |    57 | "M" | "91344" | "administrator"    |
|     8 |    36 | "M" | "05201" | "administrator"    |
|     9 |    29 | "M" | "01002" | "student"          |
|    10 |    53 | "M" | "90703" | "lawyer"           |
+-------+-------+-----+---------+--------------------+

showing top 10 rows
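
The contrast between the two cells above can be sketched with plain Python dicts (an analogy; the row below is hypothetical): select keeps only the key and the named new fields, while transmute drops just the fields referenced on the right-hand side.

```python
row = {'id': 1, 'age': 24, 'sex': 'M', 'occupation': 'other', 'zipcode': '85711'}
key = 'id'
new_fields = {'cleaned_occupation': None}   # 'other' maps to missing
referenced = {'occupation'}                 # fields used on the right-hand side

# select: the key plus the new fields, nothing else
selected = {key: row[key], **new_fields}

# transmute: everything except the referenced fields, plus the new fields
transmuted = {k: v for k, v in row.items() if k not in referenced}
transmuted.update(new_fields)

assert set(selected) == {'id', 'cleaned_occupation'}
assert set(transmuted) == {'id', 'age', 'sex', 'zipcode', 'cleaned_occupation'}
```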

Global Fields

Finally, you can add global fields with annotate_globals. Globals are useful for storing metadata about a dataset or storing small data structures like sets and maps.

[12]:
t = users.annotate_globals(cohort = 5, cloudable = hl.set(['sample1', 'sample10', 'sample15']))
t.describe()
----------------------------------------
Global fields:
    'cohort': int32
    'cloudable': set<str>
----------------------------------------
Row fields:
    'id': int32
    'age': int32
    'sex': str
    'occupation': str
    'zipcode': str
----------------------------------------
Key: ['id']
----------------------------------------
[13]:
t.cloudable
[13]:
<SetExpression of type set<str>>
[14]:
hl.eval(t.cloudable)
[14]:
{'sample1', 'sample10', 'sample15'}
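
Conceptually, a global is a single value shared by every row, like a constant captured by a per-row computation. A plain-Python analogy (the rows below are hypothetical, not from the users table):

```python
# analogue of annotate_globals: values attached once, to the whole table
cohort = 5
cloudable = {'sample1', 'sample10', 'sample15'}

rows = [{'id': 'sample1'}, {'id': 'sample2'}]

# a global can be referenced inside any row-level expression
annotated = [{**r, 'cohort': cohort, 'in_cloud': r['id'] in cloudable}
             for r in rows]

assert annotated[0]['in_cloud'] is True
assert annotated[1]['in_cloud'] is False
```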

Exercises

[ ]: