GWAS Tutorial

This notebook is designed to provide a broad overview of Hail’s functionality, with emphasis on the functionality to manipulate and query a genetic dataset. We walk through a genome-wide SNP association test, and demonstrate the need to control for confounding caused by population stratification.

[1]:
import hail as hl
hl.init()
Loading BokehJS ...
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Running on Apache Spark version 3.5.0
SparkUI available at http://hostname-09f2439d4b:4040
Welcome to
     __  __     <>__
    / /_/ /__  __/ /
   / __  / _ `/ / /
  /_/ /_/\_,_/_/_/   version 0.2.133-4c60fddb171a
LOGGING: writing to /io/hail/python/hail/docs/tutorials/hail-20241004-2003-0.2.133-4c60fddb171a.log

If the above cell ran without error, we’re ready to go!

Before using Hail, we import some standard Python libraries for use throughout the notebook.

[2]:
from hail.plot import show
from pprint import pprint
hl.plot.output_notebook()
Loading BokehJS ...

Download public 1000 Genomes data

We use a small chunk of the public 1000 Genomes dataset, created by downsampling the genotyped SNPs in the full VCF to about 20 MB. We will also integrate sample and variant metadata from separate text files.

These files are hosted by the Hail team in a public Google Storage bucket; the following cell downloads that data locally.

[3]:
hl.utils.get_1kg('data/')
SLF4J: Failed to load class "org.slf4j.impl.StaticMDCBinder".
SLF4J: Defaulting to no-operation MDCAdapter implementation.
SLF4J: See http://www.slf4j.org/codes.html#no_static_mdc_binder for further details.

Importing data from VCF

The data in a VCF file is naturally represented as a Hail MatrixTable. By first importing the VCF file and then writing the resulting MatrixTable in Hail’s native file format, all downstream operations on the VCF’s data will be MUCH faster.

[4]:
hl.import_vcf('data/1kg.vcf.bgz').write('data/1kg.mt', overwrite=True)

Next we read the written file, assigning the variable mt (for matrix table).

[5]:
mt = hl.read_matrix_table('data/1kg.mt')

Getting to know our data

It’s important to have easy ways to slice, dice, query, and summarize a dataset. Some of this functionality is demonstrated below.

The rows method can be used to get a table with all the row fields in our MatrixTable.

We can use rows along with select to pull out 5 variants. The select method takes either a string referring to a field name in the table, or a Hail Expression. Here, we leave the arguments blank to keep only the row key fields, locus and alleles.

Use the show method to display the variants.

[6]:
mt.rows().select().show(5)
locus          alleles
locus<GRCh37>  array<str>
1:904165       ["G","A"]
1:909917       ["G","A"]
1:986963       ["C","T"]
1:1563691      ["T","G"]
1:1707740      ["T","G"]

showing top 5 rows
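
For example, to keep a non-key row field as well, we could pass its name to select. A quick sketch using the rsid field imported from the VCF:

mt.rows().select('rsid').show(5)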

Alternatively, we can show the row key fields directly:

[7]:
mt.row_key.show(5)
locus          alleles
locus<GRCh37>  array<str>
1:904165       ["G","A"]
1:909917       ["G","A"]
1:986963       ["C","T"]
1:1563691      ["T","G"]
1:1707740      ["T","G"]

showing top 5 rows

Here is how to peek at the first few sample IDs:

[8]:
mt.s.show(5)
s
str
"HG00096"
"HG00099"
"HG00105"
"HG00118"
"HG00129"

showing top 5 rows

To look at the first few genotype calls, we can use entries along with select and take. The take method collects the first n rows into a list. Alternatively, we can use the show method, which prints the first n rows to the console in a table format.

Try changing take to show in the cell below.

[9]:
mt.entry.take(5)
[9]:
[Struct(GT=Call(alleles=[0, 0], phased=False), AD=[4, 0], DP=4, GQ=12, PL=[0, 12, 147]),
 Struct(GT=Call(alleles=[0, 0], phased=False), AD=[8, 0], DP=8, GQ=24, PL=[0, 24, 315]),
 Struct(GT=Call(alleles=[0, 0], phased=False), AD=[8, 0], DP=8, GQ=23, PL=[0, 23, 230]),
 Struct(GT=Call(alleles=[0, 0], phased=False), AD=[7, 0], DP=7, GQ=21, PL=[0, 21, 270]),
 Struct(GT=Call(alleles=[0, 0], phased=False), AD=[5, 0], DP=5, GQ=15, PL=[0, 15, 205])]

Adding column fields

A Hail MatrixTable can have any number of row fields and column fields for storing data associated with each row and column. Annotations are usually a critical part of any genetic study. Column fields are where you’ll store information about sample phenotypes, ancestry, sex, and covariates. Row fields can be used to store information like gene membership and functional impact for use in QC or analysis.

In this tutorial, we demonstrate how to take a text file and use it to annotate the columns in a MatrixTable.

The file provided contains the sample ID, the population and “super-population” designations, the sample sex, and two simulated phenotypes (one binary, one discrete).

This file can be imported into Hail with import_table. This function produces a Table object. Think of this as a Pandas or R dataframe that isn’t limited by the memory on your machine – behind the scenes, it’s distributed with Spark.

[10]:
table = (hl.import_table('data/1kg_annotations.txt', impute=True)
         .key_by('Sample'))

A good way to peek at the structure of a Table is to look at its schema.

[11]:
table.describe()
----------------------------------------
Global fields:
    None
----------------------------------------
Row fields:
    'Sample': str
    'Population': str
    'SuperPopulation': str
    'isFemale': bool
    'PurpleHair': bool
    'CaffeineConsumption': int32
----------------------------------------
Key: ['Sample']
----------------------------------------

To peek at the first few values, use the show method:

[12]:
table.show(width=100)
Sample     Population  SuperPopulation  isFemale  PurpleHair  CaffeineConsumption
str        str         str              bool      bool        int32
"HG00096"  "GBR"       "EUR"            False     False       4
"HG00097"  "GBR"       "EUR"            True      True        4
"HG00098"  "GBR"       "EUR"            False     False       5
"HG00099"  "GBR"       "EUR"            True      False       4
"HG00100"  "GBR"       "EUR"            True      False       5
"HG00101"  "GBR"       "EUR"            False     True        1
"HG00102"  "GBR"       "EUR"            True      True        6
"HG00103"  "GBR"       "EUR"            False     True        5
"HG00104"  "GBR"       "EUR"            True      False       5
"HG00105"  "GBR"       "EUR"            False     False       4

showing top 10 rows

Now we’ll use this table to add sample annotations to our dataset, storing the annotations in column fields in our MatrixTable. First, we’ll print the existing column schema:

[13]:
print(mt.col.dtype)
struct{s: str}

We use the annotate_cols method to join the table with the MatrixTable containing our dataset.

[14]:
mt = mt.annotate_cols(pheno = table[mt.s])
[15]:
mt.col.describe()
--------------------------------------------------------
Type:
        struct {
        s: str,
        pheno: struct {
            Population: str,
            SuperPopulation: str,
            isFemale: bool,
            PurpleHair: bool,
            CaffeineConsumption: int32
        }
    }
--------------------------------------------------------
Source:
    <hail.matrixtable.MatrixTable object at 0x7f045fa6f610>
Index:
    ['column']
--------------------------------------------------------

Query functions and the Hail Expression Language

Hail has a number of useful query functions that can be used for gathering statistics on our dataset. These query functions take Hail Expressions as arguments.

We will start by looking at some statistics of the information in our table. The aggregate method can be used to aggregate over rows of the table.

counter is an aggregation function that counts the number of occurrences of each unique element. We can use this to pull out the population distribution by passing in a Hail Expression for the field that we want to count by.

[16]:
pprint(table.aggregate(hl.agg.counter(table.SuperPopulation)))
{'AFR': 1018, 'AMR': 535, 'EAS': 617, 'EUR': 669, 'SAS': 661}

stats is an aggregation function that produces some useful statistics about numeric collections. We can use this to see the distribution of the CaffeineConsumption phenotype.

[17]:
pprint(table.aggregate(hl.agg.stats(table.CaffeineConsumption)))
Struct(mean=3.9837142857142855,
       stdev=1.7021055628070711,
       min=-1.0,
       max=10.0,
       n=3500,
       sum=13943.0)

However, these metrics aren’t perfectly representative of the samples in our dataset. Here’s why:

[18]:
table.count()
[18]:
3500
[19]:
mt.count_cols()
[19]:
284

Since there are fewer samples in our dataset than in the full 1000 Genomes cohort, we need to look at annotations on the dataset itself. We can use aggregate_cols to get the metrics for only the samples in our dataset.

[20]:
mt.aggregate_cols(hl.agg.counter(mt.pheno.SuperPopulation))
[20]:
{'AFR': 76, 'AMR': 34, 'EAS': 72, 'EUR': 47, 'SAS': 55}
[21]:
pprint(mt.aggregate_cols(hl.agg.stats(mt.pheno.CaffeineConsumption)))
Struct(mean=4.415492957746479,
       stdev=1.577763427465917,
       min=0.0,
       max=9.0,
       n=284,
       sum=1254.0)

The functionality demonstrated in the last few cells isn’t anything especially new: it’s certainly not difficult to ask these questions with Pandas or R dataframes, or even Unix tools like awk. But Hail can use the same interfaces and query language to analyze collections that are much larger, like the set of variants.

Here we calculate the counts of each of the 12 possible unique SNPs (4 choices for the reference base * 3 choices for the alternate base).

To do this, we need to get the alternate allele of each variant and then count the occurrences of each unique ref/alt pair. This can be done with Hail’s counter function.

[22]:
snp_counts = mt.aggregate_rows(hl.agg.counter(hl.Struct(ref=mt.alleles[0], alt=mt.alleles[1])))
pprint(snp_counts)
{Struct(ref='G', alt='A'): 2367,
 Struct(ref='C', alt='T'): 2418,
 Struct(ref='T', alt='A'): 77,
 Struct(ref='C', alt='G'): 150,
 Struct(ref='G', alt='C'): 111,
 Struct(ref='G', alt='T'): 477,
 Struct(ref='T', alt='G'): 466,
 Struct(ref='T', alt='C'): 1864,
 Struct(ref='A', alt='G'): 1929,
 Struct(ref='C', alt='A'): 494,
 Struct(ref='A', alt='T'): 75,
 Struct(ref='A', alt='C'): 451}

We can list the counts in descending order using Python’s Counter class.

[23]:
from collections import Counter
counts = Counter(snp_counts)
counts.most_common()
[23]:
[(Struct(ref='C', alt='T'), 2418),
 (Struct(ref='G', alt='A'), 2367),
 (Struct(ref='A', alt='G'), 1929),
 (Struct(ref='T', alt='C'), 1864),
 (Struct(ref='C', alt='A'), 494),
 (Struct(ref='G', alt='T'), 477),
 (Struct(ref='T', alt='G'), 466),
 (Struct(ref='A', alt='C'), 451),
 (Struct(ref='C', alt='G'), 150),
 (Struct(ref='G', alt='C'), 111),
 (Struct(ref='T', alt='A'), 77),
 (Struct(ref='A', alt='T'), 75)]

It’s nice to see that we can actually uncover something biological from this small dataset: these frequencies come in pairs. C/T and G/A are actually the same mutation, just viewed from opposite strands. Likewise, T/A and A/T are the same mutation on opposite strands. There’s a 30x difference between the frequency of C/T and A/T SNPs. Why?

The same Python, R, and Unix tools could do this work as well, but we’re starting to hit a wall - the latest gnomAD release publishes about 250 million variants, and that won’t fit in memory on a single computer.

What about genotypes? Hail can query the collection of all genotypes in the dataset, and this is getting large even for our tiny dataset. Our 284 samples and 10,000 variants produce about 3 million unique genotypes. The gnomAD dataset has about 5 trillion unique genotypes.
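
As a small illustration of querying genotypes, here is a sketch that uses the counter aggregator from above to count calls by the number of alternate alleles (0 = hom ref, 1 = het, 2 = hom alt):

pprint(mt.aggregate_entries(hl.agg.counter(mt.GT.n_alt_alleles())))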

Hail plotting functions accept Hail fields as arguments, so we can pass in the DP field directly here. If the range and bins arguments are not set, this function will compute the range based on the minimum and maximum values of the field and use 50 bins by default.

[24]:
p = hl.plot.histogram(mt.DP, range=(0,30), bins=30, title='DP Histogram', legend='DP')
show(p)

Quality Control

QC is where analysts spend most of their time with sequencing datasets. QC is an iterative process, and is different for every project: there is no “push-button” solution for QC. Each time the Broad collects a new group of samples, it finds new batch effects. However, by practicing open science and discussing the QC process and decisions with others, we can establish a set of best practices as a community.

QC is entirely based on the ability to understand the properties of a dataset. Hail attempts to make this easier by providing the sample_qc function, which produces a set of useful metrics and stores them in a column field.

[25]:
mt.col.describe()
--------------------------------------------------------
Type:
        struct {
        s: str,
        pheno: struct {
            Population: str,
            SuperPopulation: str,
            isFemale: bool,
            PurpleHair: bool,
            CaffeineConsumption: int32
        }
    }
--------------------------------------------------------
Source:
    <hail.matrixtable.MatrixTable object at 0x7f045fa6f610>
Index:
    ['column']
--------------------------------------------------------
[26]:
mt = hl.sample_qc(mt)
[27]:
mt.col.describe()
--------------------------------------------------------
Type:
        struct {
        s: str,
        pheno: struct {
            Population: str,
            SuperPopulation: str,
            isFemale: bool,
            PurpleHair: bool,
            CaffeineConsumption: int32
        },
        sample_qc: struct {
            dp_stats: struct {
                mean: float64,
                stdev: float64,
                min: float64,
                max: float64
            },
            gq_stats: struct {
                mean: float64,
                stdev: float64,
                min: float64,
                max: float64
            },
            call_rate: float64,
            n_called: int64,
            n_not_called: int64,
            n_filtered: int64,
            n_hom_ref: int64,
            n_het: int64,
            n_hom_var: int64,
            n_non_ref: int64,
            n_singleton: int64,
            n_snp: int64,
            n_insertion: int64,
            n_deletion: int64,
            n_transition: int64,
            n_transversion: int64,
            n_star: int64,
            r_ti_tv: float64,
            r_het_hom_var: float64,
            r_insertion_deletion: float64
        }
    }
--------------------------------------------------------
Source:
    <hail.matrixtable.MatrixTable object at 0x7f046113b820>
Index:
    ['column']
--------------------------------------------------------

Plotting the QC metrics is a good place to start.

[28]:
p = hl.plot.histogram(mt.sample_qc.call_rate, range=(.88,1), legend='Call Rate')
show(p)
[29]:
p = hl.plot.histogram(mt.sample_qc.gq_stats.mean, range=(10,70), legend='Mean Sample GQ')
show(p)

Often, these metrics are correlated.

[30]:
p = hl.plot.scatter(mt.sample_qc.dp_stats.mean, mt.sample_qc.call_rate, xlabel='Mean DP', ylabel='Call Rate')
show(p)

Removing outliers from the dataset will generally improve association results. We can make arbitrary cutoffs and use them to filter:

[31]:
mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97))
print('After filter, %d/284 samples remain.' % mt.count_cols())
After filter, 250/284 samples remain.

Next is genotype QC. It’s a good idea to filter out genotypes where the reads aren’t where they should be: if we find a genotype called homozygous reference with >10% alternate reads, a genotype called homozygous alternate with >10% reference reads, or a genotype called heterozygote without a ref / alt balance near 1:1, it is likely to be an error.

In a low-depth dataset like 1KG, it is hard to detect bad genotypes using this metric, since a read ratio of 1 alt to 10 reference can easily be explained by binomial sampling. However, in a high-depth dataset, a read ratio of 10:100 is a sure cause for concern!
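
To make the binomial intuition concrete, here is a quick back-of-the-envelope check in plain Python (a sketch; for a true heterozygote the number of alternate reads is roughly Binomial(n, 0.5)):

from math import comb
# P(exactly 1 alt read out of 11 total | true het, p = 0.5): small but plausible
print(comb(11, 1) * 0.5 ** 11)     # ~0.005
# P(exactly 10 alt reads out of 110 total | true het, p = 0.5): essentially impossible
print(comb(110, 10) * 0.5 ** 110)  # ~4e-20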

[32]:
ab = mt.AD[1] / hl.sum(mt.AD)

filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) |
                        (mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) |
                        (mt.GT.is_hom_var() & (ab >= 0.9)))

fraction_filtered = mt.aggregate_entries(hl.agg.fraction(~filter_condition_ab))
print(f'Filtering {fraction_filtered * 100:.2f}% entries out of downstream analysis.')
mt = mt.filter_entries(filter_condition_ab)
Filtering 3.60% entries out of downstream analysis.

Variant QC is a bit more of the same: we can use the variant_qc function to produce a variety of useful statistics, plot them, and filter.

[33]:
mt = hl.variant_qc(mt)
[34]:
mt.row.describe()
--------------------------------------------------------
Type:
        struct {
        locus: locus<GRCh37>,
        alleles: array<str>,
        rsid: str,
        qual: float64,
        filters: set<str>,
        info: struct {
            AC: array<int32>,
            AF: array<float64>,
            AN: int32,
            BaseQRankSum: float64,
            ClippingRankSum: float64,
            DP: int32,
            DS: bool,
            FS: float64,
            HaplotypeScore: float64,
            InbreedingCoeff: float64,
            MLEAC: array<int32>,
            MLEAF: array<float64>,
            MQ: float64,
            MQ0: int32,
            MQRankSum: float64,
            QD: float64,
            ReadPosRankSum: float64,
            set: str
        },
        variant_qc: struct {
            dp_stats: struct {
                mean: float64,
                stdev: float64,
                min: float64,
                max: float64
            },
            gq_stats: struct {
                mean: float64,
                stdev: float64,
                min: float64,
                max: float64
            },
            AC: array<int32>,
            AF: array<float64>,
            AN: int32,
            homozygote_count: array<int32>,
            call_rate: float64,
            n_called: int64,
            n_not_called: int64,
            n_filtered: int64,
            n_het: int64,
            n_non_ref: int64,
            het_freq_hwe: float64,
            p_value_hwe: float64,
            p_value_excess_het: float64
        }
    }
--------------------------------------------------------
Source:
    <hail.matrixtable.MatrixTable object at 0x7f0460fbcb50>
Index:
    ['row']
--------------------------------------------------------

These statistics actually look pretty good: we don’t need to filter this dataset. Most datasets require thoughtful quality control, though. The filter_rows method can help!

Let’s do a GWAS!

First, we need to restrict to variants that are common (we will use a cutoff of 1%) and that are not so far from Hardy-Weinberg equilibrium as to suggest sequencing error:

[35]:
mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)
[36]:
mt = mt.filter_rows(mt.variant_qc.p_value_hwe > 1e-6)
[37]:
print('Samples: %d  Variants: %d' % (mt.count_cols(), mt.count_rows()))
Samples: 250  Variants: 7774

These filters removed nearly 30% of sites (we started with a bit over 10,000). This is NOT representative of most sequencing datasets! We have already downsampled the full 1000 Genomes dataset to include more common variants than we’d expect by chance.

In Hail, the association tests accept column fields for the sample phenotype and covariates. Since we’ve already got our phenotype of interest (caffeine consumption) in the dataset, we are good to go:

[38]:
gwas = hl.linear_regression_rows(y=mt.pheno.CaffeineConsumption,
                                 x=mt.GT.n_alt_alleles(),
                                 covariates=[1.0])
gwas.row.describe()
--------------------------------------------------------
Type:
        struct {
        locus: locus<GRCh37>,
        alleles: array<str>,
        n: int32,
        sum_x: float64,
        y_transpose_x: float64,
        beta: float64,
        standard_error: float64,
        t_stat: float64,
        p_value: float64
    }
--------------------------------------------------------
Source:
    <hail.table.Table object at 0x7f0460f91d00>
Index:
    ['row']
--------------------------------------------------------

Looking at the bottom of the above printout, you can see the linear regression adds new row fields for the beta, standard error, t-statistic, and p-value.
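
For example, we could peek at the strongest associations by sorting the results table on p-value (a small sketch):

gwas.order_by(gwas.p_value).show(5)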

Hail makes it easy to visualize results! Let’s make a Manhattan plot:

[39]:
p = hl.plot.manhattan(gwas.p_value)
show(p)

This doesn’t look like much of a skyline. Let’s check whether our GWAS was well controlled using a Q-Q (quantile-quantile) plot.

[40]:
p = hl.plot.qq(gwas.p_value)
show(p)

Confounded!

The observed p-values drift away from the expectation immediately. Either every SNP in our dataset is causally linked to caffeine consumption (unlikely), or there’s a confounder.

We didn’t tell you, but sample ancestry was actually used to simulate this phenotype. This leads to a stratified distribution of the phenotype. The solution is to include ancestry as a covariate in our regression.

The linear_regression_rows function can also take column fields to use as covariates. We already annotated our samples with reported ancestry, but it is good to be skeptical of these labels due to human error. Genomes don’t have that problem! Instead of using reported ancestry, we will use genetic ancestry by including computed principal components in our model.

The pca function produces eigenvalues as a list and sample PCs as a Table, and can also produce variant loadings when asked. The hwe_normalized_pca function does the same, using HWE-normalized genotypes for the PCA.

[41]:
eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)
[42]:
pprint(eigenvalues)
[18.084111467840707,
 9.984076405601847,
 3.540687229805949,
 2.655598108390125,
 1.596852701724399,
 1.5405241027955296,
 1.507713504116216,
 1.4744976712480349,
 1.467690539034742,
 1.4461994473306554]
[43]:
pcs.show(5, width=100)
s          scores
str        array<float64>
"HG00096"  [1.22e-01,2.81e-01,-1.10e-01,-1.27e-01,6.68e-02,3.29e-03,-2.26e-02,4.26e-02,-9.30e-02,1.83e-01]
"HG00099"  [1.14e-01,2.89e-01,-1.06e-01,-6.78e-02,4.72e-02,2.87e-02,5.28e-03,-1.57e-02,1.75e-02,-1.98e-02]
"HG00105"  [1.09e-01,2.79e-01,-9.95e-02,-1.06e-01,8.79e-02,1.44e-02,2.80e-02,-3.38e-02,-1.08e-03,2.25e-02]
"HG00118"  [1.26e-01,2.95e-01,-7.58e-02,-1.08e-01,1.76e-02,7.91e-03,-5.25e-02,3.05e-02,2.00e-02,-7.78e-02]
"HG00129"  [1.06e-01,2.86e-01,-9.69e-02,-1.15e-01,1.03e-02,2.65e-02,-8.51e-02,2.49e-02,5.67e-02,-8.31e-03]

showing top 5 rows
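
As an aside, the variant loadings mentioned above can be requested explicitly. A sketch, assuming the compute_loadings keyword and re-running the PCA:

_, _, loadings = hl.hwe_normalized_pca(mt.GT, k=10, compute_loadings=True)
loadings.show(5)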

Now that we’ve got principal components per sample, we may as well plot them! Human history exerts a strong effect in genetic datasets. Even with a dataset this small, we can recover the major human populations.

[44]:
mt = mt.annotate_cols(scores = pcs[mt.s].scores)
[45]:
p = hl.plot.scatter(mt.scores[0],
                    mt.scores[1],
                    label=mt.pheno.SuperPopulation,
                    title='PCA', xlabel='PC1', ylabel='PC2')
show(p)

Now we can rerun our linear regression, controlling for sample sex and the first few principal components. As before, the input variable is the number of alternate alleles; a dosage-based variant, using genotype dosage derived from the PL field, is sketched after the cell below.

[46]:
gwas = hl.linear_regression_rows(
    y=mt.pheno.CaffeineConsumption,
    x=mt.GT.n_alt_alleles(),
    covariates=[1.0, mt.pheno.isFemale, mt.scores[0], mt.scores[1], mt.scores[2]])
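
Here is the dosage-based variant mentioned above, a sketch assuming the pl_dosage helper (which converts phred-scaled genotype likelihoods into an expected dosage):

gwas_dosage = hl.linear_regression_rows(
    y=mt.pheno.CaffeineConsumption,
    x=hl.pl_dosage(mt.PL),  # expected genotype dosage from the PL field
    covariates=[1.0, mt.pheno.isFemale, mt.scores[0], mt.scores[1], mt.scores[2]])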

We’ll first make a Q-Q plot to assess inflation…

[47]:
p = hl.plot.qq(gwas.p_value)
show(p)

That’s more like it! This shape is indicative of a well-controlled (but not especially well-powered) study. And now for the Manhattan plot:

[48]:
p = hl.plot.manhattan(gwas.p_value)
show(p)

We have found a caffeine consumption locus! Now simply apply Hail’s Nature paper function to publish the result.

Just kidding, that function won’t land until Hail 1.0!

Rare variant analysis

Here we’ll demonstrate how one can use the expression language to group and count by any arbitrary properties in row and column fields. Hail also implements the sequence kernel association test (SKAT).
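
For reference, a SKAT call takes a row-indexed grouping key and per-variant weights. Below is a minimal sketch assuming a hypothetical gene row field and Beta-density weights, neither of which is part of this tutorial's dataset:

skat_results = hl.skat(
    key_expr=mt.gene,                                            # hypothetical gene annotation
    weight_expr=hl.dbeta(mt.variant_qc.AF[1], 1.0, 25.0) ** 2,   # a common SKAT weighting scheme
    y=mt.pheno.CaffeineConsumption,
    x=mt.GT.n_alt_alleles(),
    covariates=[1.0, mt.pheno.isFemale])
skat_results.show(5)
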

[49]:
entries = mt.entries()
results = (entries.group_by(pop = entries.pheno.SuperPopulation, chromosome = entries.locus.contig)
      .aggregate(n_het = hl.agg.count_where(entries.GT.is_het())))
[50]:
results.show()
pop    chromosome  n_het
str    str         int64
"AFR"  "1"         11039
"AFR"  "10"        7123
"AFR"  "11"        6777
"AFR"  "12"        7016
"AFR"  "13"        4650
"AFR"  "14"        4262
"AFR"  "15"        3847
"AFR"  "16"        4564
"AFR"  "17"        3607
"AFR"  "18"        4133

showing top 10 rows

We use the MatrixTable.entries method to convert our matrix table to a table (with one row for each sample for each variant). In this representation, it is easy to aggregate over any fields we like, which is often the first step of rare variant analysis.

What if we want to group by minor allele frequency bin and hair color, and calculate the mean GQ?

[51]:
entries = entries.annotate(maf_bin = hl.if_else(entries.info.AF[0]<0.01, "< 1%",
                             hl.if_else(entries.info.AF[0]<0.05, "1%-5%", ">5%")))

results2 = (entries.group_by(af_bin = entries.maf_bin, purple_hair = entries.pheno.PurpleHair)
      .aggregate(mean_gq = hl.agg.stats(entries.GQ).mean,
                 mean_dp = hl.agg.stats(entries.DP).mean))
[52]:
results2.show()
af_bin   purple_hair  mean_gq   mean_dp
str      bool         float64   float64
"1%-5%"  False        2.48e+01  7.43e+00
"1%-5%"  True         2.46e+01  7.47e+00
"< 1%"   False        2.35e+01  7.55e+00
"< 1%"   True         2.35e+01  7.53e+00
">5%"    False        3.70e+01  7.65e+00
">5%"    True         3.73e+01  7.70e+00

We’ve shown that it’s easy to aggregate by a couple of arbitrary statistics. These specific examples may not provide especially useful pieces of information, but the same pattern can be used to detect effects of rare variation (a sketch of the second pattern follows the list below):

  • Count the number of heterozygous genotypes per gene by functional category (synonymous, missense, or loss-of-function) to estimate per-gene functional constraint

  • Count the number of singleton loss-of-function mutations per gene in cases and controls to detect genes involved in disease
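
As a hedged sketch of the second pattern, assuming hypothetical fields gene, consequence, and pheno.is_case that are not part of this tutorial's dataset:

entries = mt.entries()
lof_singletons = (
    entries
    .filter((entries.consequence == 'LOF') & (entries.info.AC[0] == 1))  # singleton LoF sites
    .group_by(gene=entries.gene, is_case=entries.pheno.is_case)
    .aggregate(n=hl.agg.count_where(entries.GT.is_non_ref())))
lof_singletons.show()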

Epilogue

Congrats! You’ve reached the end of the first tutorial. To learn more about Hail’s API and functionality, take a look at the other tutorials. You can check out the Python API for documentation on additional Hail functions. If you use Hail for your own science, we’d love to hear from you on Zulip chat or the discussion forum.

For reference, here’s the full workflow to all tutorial endpoints combined into one cell.

[53]:
table = hl.import_table('data/1kg_annotations.txt', impute=True).key_by('Sample')

mt = hl.read_matrix_table('data/1kg.mt')
mt = mt.annotate_cols(pheno = table[mt.s])
mt = hl.sample_qc(mt)
mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97))
ab = mt.AD[1] / hl.sum(mt.AD)
filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) |
                        (mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) |
                        (mt.GT.is_hom_var() & (ab >= 0.9)))
mt = mt.filter_entries(filter_condition_ab)
mt = hl.variant_qc(mt)
mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)

eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)

mt = mt.annotate_cols(scores = pcs[mt.s].scores)
gwas = hl.linear_regression_rows(
    y=mt.pheno.CaffeineConsumption,
    x=mt.GT.n_alt_alleles(),
    covariates=[1.0, mt.pheno.isFemale, mt.scores[0], mt.scores[1], mt.scores[2]])