GWAS Tutorial

This notebook is designed to provide a broad overview of Hail’s functionality, with emphasis on manipulating and querying a genetic dataset. We walk through a genome-wide SNP association test, and demonstrate the need to control for confounding caused by population stratification.

In [1]:
import hail as hl
hl.init()
Running on Apache Spark version 2.2.0
SparkUI available at http://10.32.15.7:4040
Welcome to
     __  __     <>__
    / /_/ /__  __/ /
   / __  / _ `/ / /
  /_/ /_/\_,_/_/_/   version 0.2.5-0af07ea50218
LOGGING: writing to /hail/repo/hail/build/tmp/python/hail/docs/tutorials/hail-20181216-2243-0.2.5-0af07ea50218.log

If the above cell ran without error, we’re ready to go!

Before using Hail, we import some standard Python libraries for use throughout the notebook.

In [2]:
from bokeh.io import output_notebook, show
from pprint import pprint
output_notebook()
Loading BokehJS ...

Check for tutorial data or download if necessary

This cell downloads the necessary data if it isn’t already present.

In [3]:
hl.utils.get_1kg('data/')
2018-12-16 22:43:13 Hail: INFO: downloading 1KG VCF ...
  Source: https://storage.googleapis.com/hail-tutorial/1kg.vcf.bgz
2018-12-16 22:43:13 Hail: INFO: importing VCF and writing to matrix table...
2018-12-16 22:43:16 Hail: INFO: Coerced sorted dataset
2018-12-16 22:43:18 Hail: INFO: wrote matrix table with 10961 rows and 284 columns in 16 partitions to data/1kg.mt
2018-12-16 22:43:18 Hail: INFO: downloading 1KG annotations ...
  Source: https://storage.googleapis.com/hail-tutorial/1kg_annotations.txt
2018-12-16 22:43:18 Hail: INFO: downloading Ensembl gene annotations ...
  Source: https://storage.googleapis.com/hail-tutorial/ensembl_gene_annotations.txt
2018-12-16 22:43:18 Hail: INFO: Done!

Importing data from VCF

The data in a VCF file is naturally represented as a Hail MatrixTable. By first importing the VCF file and then writing the resulting MatrixTable in Hail’s native file format, all downstream operations on the VCF’s data will be MUCH faster.

This dataset was created by downsampling a public 1000 Genomes VCF to about 20 MB.

In [4]:
hl.import_vcf('data/1kg.vcf.bgz').write('data/1kg.mt', overwrite=True)
2018-12-16 22:43:19 Hail: INFO: Coerced sorted dataset
2018-12-16 22:43:22 Hail: INFO: wrote matrix table with 10961 rows and 284 columns in 2 partitions to data/1kg.mt

Next we read the written file, assigning it to the variable mt (for matrix table).

In [5]:
mt = hl.read_matrix_table('data/1kg.mt')

Getting to know our data

It’s important to have easy ways to slice, dice, query, and summarize a dataset. Some of this functionality is demonstrated below.

The rows method can be used to get a table with all the row fields in our MatrixTable.

We can use rows along with select to pull out 5 variants. The select method takes either a string referring to a field name in the table, or a Hail Expression. Here, we leave the arguments blank to keep only the row key fields, locus and alleles.

Use the show method to display the variants.

In [6]:
mt.rows().select().show(5)
+---------------+------------+
| locus         | alleles    |
+---------------+------------+
| locus<GRCh37> | array<str> |
+---------------+------------+
| 1:904165      | ["G","A"]  |
| 1:909917      | ["G","A"]  |
| 1:986963      | ["C","T"]  |
| 1:1563691     | ["T","G"]  |
| 1:1707740     | ["T","G"]  |
+---------------+------------+
showing top 5 rows

Alternatively:

In [7]:
mt.row_key.show(5)
+---------------+------------+
| locus         | alleles    |
+---------------+------------+
| locus<GRCh37> | array<str> |
+---------------+------------+
| 1:904165      | ["G","A"]  |
| 1:909917      | ["G","A"]  |
| 1:986963      | ["C","T"]  |
| 1:1563691     | ["T","G"]  |
| 1:1707740     | ["T","G"]  |
+---------------+------------+
showing top 5 rows

Here is how to peek at the first few sample IDs:

In [8]:
mt.s.show(5)
+-----------+
| s         |
+-----------+
| str       |
+-----------+
| "HG00096" |
| "HG00099" |
| "HG00105" |
| "HG00118" |
| "HG00129" |
+-----------+
showing top 5 rows

To look at the first few genotype calls, we can use entries along with select and take. The take method collects the first n rows into a list. Alternatively, we can use the show method, which prints the first n rows to the console in a table format.

Try changing take to show in the cell below.

In [9]:
mt.entry.take(5)
Out[9]:
[Struct(GT=Call(alleles=[0, 0], phased=False), AD=[4, 0], DP=4, GQ=12, PL=[0, 12, 147]),
 Struct(GT=Call(alleles=[0, 0], phased=False), AD=[8, 0], DP=8, GQ=24, PL=[0, 24, 315]),
 Struct(GT=Call(alleles=[0, 0], phased=False), AD=[8, 0], DP=8, GQ=23, PL=[0, 23, 230]),
 Struct(GT=Call(alleles=[0, 0], phased=False), AD=[7, 0], DP=7, GQ=21, PL=[0, 21, 270]),
 Struct(GT=Call(alleles=[0, 0], phased=False), AD=[5, 0], DP=5, GQ=15, PL=[0, 15, 205])]

Adding column fields

A Hail MatrixTable can have any number of row fields and column fields for storing data associated with each row and column. Annotations are usually a critical part of any genetic study. Column fields are where you’ll store information about sample phenotypes, ancestry, sex, and covariates. Row fields can be used to store information like gene membership and functional impact for use in QC or analysis.

In this tutorial, we demonstrate how to take a text file and use it to annotate the columns in a MatrixTable.

The file provided contains the sample ID, the population and “super-population” designations, the sample sex, and two simulated phenotypes (one binary, one discrete).

This file can be imported into Hail with import_table. This function produces a Table object. Think of this as a Pandas or R dataframe that isn’t limited by the memory on your machine – behind the scenes, it’s distributed with Spark.

In [10]:
table = (hl.import_table('data/1kg_annotations.txt', impute=True)
         .key_by('Sample'))
2018-12-16 22:43:23 Hail: INFO: Reading table to impute column types
2018-12-16 22:43:24 Hail: INFO: Finished type imputation
  Loading column 'Sample' as type 'str' (imputed)
  Loading column 'Population' as type 'str' (imputed)
  Loading column 'SuperPopulation' as type 'str' (imputed)
  Loading column 'isFemale' as type 'bool' (imputed)
  Loading column 'PurpleHair' as type 'bool' (imputed)
  Loading column 'CaffeineConsumption' as type 'int32' (imputed)

A good way to peek at the structure of a Table is to look at its schema.

In [11]:
table.describe()
----------------------------------------
Global fields:
    None
----------------------------------------
Row fields:
    'Sample': str
    'Population': str
    'SuperPopulation': str
    'isFemale': bool
    'PurpleHair': bool
    'CaffeineConsumption': int32
----------------------------------------
Key: ['Sample']
----------------------------------------

To peek at the first few values, use the show method:

In [12]:
table.show(width=100)
+-----------+------------+-----------------+----------+------------+---------------------+
| Sample    | Population | SuperPopulation | isFemale | PurpleHair | CaffeineConsumption |
+-----------+------------+-----------------+----------+------------+---------------------+
| str       | str        | str             | bool     | bool       |               int32 |
+-----------+------------+-----------------+----------+------------+---------------------+
| "HG00096" | "GBR"      | "EUR"           | false    | false      |                   4 |
| "HG00097" | "GBR"      | "EUR"           | true     | true       |                   4 |
| "HG00098" | "GBR"      | "EUR"           | false    | false      |                   5 |
| "HG00099" | "GBR"      | "EUR"           | true     | false      |                   4 |
| "HG00100" | "GBR"      | "EUR"           | true     | false      |                   5 |
| "HG00101" | "GBR"      | "EUR"           | false    | true       |                   1 |
| "HG00102" | "GBR"      | "EUR"           | true     | true       |                   6 |
| "HG00103" | "GBR"      | "EUR"           | false    | true       |                   5 |
| "HG00104" | "GBR"      | "EUR"           | true     | false      |                   5 |
| "HG00105" | "GBR"      | "EUR"           | false    | false      |                   4 |
+-----------+------------+-----------------+----------+------------+---------------------+
showing top 10 rows

2018-12-16 22:43:24 Hail: INFO: Coerced sorted dataset

Now we’ll use this table to add sample annotations to our dataset, storing the annotations in column fields in our MatrixTable. First, we’ll print the existing column schema:

In [13]:
print(mt.col.dtype)
struct{s: str}

We use the annotate_cols method to join the table with the MatrixTable containing our dataset.

In [14]:
mt = mt.annotate_cols(pheno = table[mt.s])
In [15]:
mt.col.describe()
--------------------------------------------------------
Type:
    struct {
        s: str,
        pheno: struct {
            Population: str,
            SuperPopulation: str,
            isFemale: bool,
            PurpleHair: bool,
            CaffeineConsumption: int32
        }
    }
--------------------------------------------------------
Source:
    <hail.matrixtable.MatrixTable object at 0x7efcc0a9ce48>
Index:
    ['column']
--------------------------------------------------------

Query functions and the Hail Expression Language

Hail has a number of useful query functions that can be used for gathering statistics on our dataset. These query functions take Hail Expressions as arguments.

We will start by looking at some statistics of the information in our table. The aggregate method can be used to aggregate over rows of the table.

counter is an aggregation function that counts the number of occurrences of each unique element. We can use this to pull out the population distribution by passing in a Hail Expression for the field that we want to count by.

In [16]:
pprint(table.aggregate(hl.agg.counter(table.SuperPopulation)))
{'AFR': 1018, 'AMR': 535, 'EAS': 617, 'EUR': 669, 'SAS': 661}
2018-12-16 22:43:24 Hail: INFO: Coerced sorted dataset

stats is an aggregation function that produces some useful statistics about numeric collections. We can use this to see the distribution of the CaffeineConsumption phenotype.

In [17]:
pprint(table.aggregate(hl.agg.stats(table.CaffeineConsumption)))
2018-12-16 22:43:24 Hail: INFO: Coerced sorted dataset
{'max': 10.0,
 'mean': 3.983714285714278,
 'min': -1.0,
 'n': 3500,
 'stdev': 1.7021055628070707,
 'sum': 13942.999999999973}

However, these metrics aren’t perfectly representative of the samples in our dataset. Here’s why:

In [18]:
table.count()
Out[18]:
3500
In [19]:
mt.count_cols()
Out[19]:
284

Since there are fewer samples in our dataset than in the full 1000 Genomes cohort, we need to look at annotations on the dataset. We can use aggregate_cols to get the metrics for only the samples in our dataset.

In [20]:
mt.aggregate_cols(hl.agg.counter(mt.pheno.SuperPopulation))
2018-12-16 22:43:25 Hail: INFO: Coerced sorted dataset
2018-12-16 22:43:25 Hail: INFO: Coerced sorted dataset
Out[20]:
{'AFR': 76, 'EAS': 72, 'AMR': 34, 'SAS': 55, 'EUR': 47}
In [21]:
pprint(mt.aggregate_cols(hl.agg.stats(mt.pheno.CaffeineConsumption)))
2018-12-16 22:43:25 Hail: INFO: Coerced sorted dataset
2018-12-16 22:43:25 Hail: INFO: Coerced sorted dataset
{'max': 9.0,
 'mean': 4.415492957746479,
 'min': 0.0,
 'n': 284,
 'stdev': 1.577763427465917,
 'sum': 1254.0}

The functionality demonstrated in the last few cells isn’t anything especially new: it’s certainly not difficult to ask these questions with Pandas or R dataframes, or even Unix tools like awk. But Hail can use the same interfaces and query language to analyze collections that are much larger, like the set of variants.

Here we calculate the counts of each of the 12 possible unique SNPs (4 choices for the reference base * 3 choices for the alternate base).

To do this, we need to get the alternate allele of each variant and then count the occurrences of each unique ref/alt pair. This can be done with Hail’s counter function.

In [22]:
snp_counts = mt.aggregate_rows(hl.agg.counter(hl.Struct(ref=mt.alleles[0], alt=mt.alleles[1])))
pprint(snp_counts)
{Struct(ref='T', alt='G'): 468,
 Struct(ref='T', alt='C'): 1879,
 Struct(ref='A', alt='G'): 1944,
 Struct(ref='A', alt='C'): 454,
 Struct(ref='A', alt='T'): 76,
 Struct(ref='C', alt='A'): 496,
 Struct(ref='C', alt='G'): 150,
 Struct(ref='G', alt='A'): 2387,
 Struct(ref='G', alt='T'): 480,
 Struct(ref='C', alt='T'): 2436,
 Struct(ref='G', alt='C'): 112,
 Struct(ref='T', alt='A'): 79}

We can list the counts in descending order using Python’s Counter class.

In [23]:
from collections import Counter
counts = Counter(snp_counts)
counts.most_common()
Out[23]:
[(Struct(ref='C', alt='T'), 2436),
 (Struct(ref='G', alt='A'), 2387),
 (Struct(ref='A', alt='G'), 1944),
 (Struct(ref='T', alt='C'), 1879),
 (Struct(ref='C', alt='A'), 496),
 (Struct(ref='G', alt='T'), 480),
 (Struct(ref='T', alt='G'), 468),
 (Struct(ref='A', alt='C'), 454),
 (Struct(ref='C', alt='G'), 150),
 (Struct(ref='G', alt='C'), 112),
 (Struct(ref='T', alt='A'), 79),
 (Struct(ref='A', alt='T'), 76)]

It’s nice to see that we can actually uncover something biological from this small dataset: we see that these frequencies come in pairs. C/T and G/A are actually the same mutation, just viewed from opposite strands. Likewise, T/A and A/T are the same mutation on opposite strands. There’s a 30x difference between the frequency of C/T and A/T SNPs. Why?
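To make the strand pairing concrete, here is a small pure-Python aside (not part of the Hail pipeline): complementing both alleles maps each SNP to its opposite-strand twin, which is why the counts above come in matched pairs.

```python
# Pure-Python illustration: a SNP and its reverse complement describe
# the same mutation seen from opposite strands.
COMPLEMENT = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}

def strand_flip(ref: str, alt: str):
    """Return the same SNP as seen from the opposite strand."""
    return COMPLEMENT[ref], COMPLEMENT[alt]

print(strand_flip('C', 'T'))  # ('G', 'A') -- so C/T and G/A counts pair up
print(strand_flip('T', 'A'))  # ('A', 'T') -- likewise T/A pairs with A/T
```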

The same Python, R, and Unix tools could do this work as well, but we’re starting to hit a wall - the latest gnomAD release publishes about 250 million variants, and that won’t fit in memory on a single computer.

What about genotypes? Hail can query the collection of all genotypes in the dataset, and this is getting large even for our tiny dataset. Our 284 samples and 10,000 variants produce about 3 million unique genotypes. The gnomAD dataset has about 5 trillion unique genotypes.

Hail plotting functions allow Hail fields as arguments, so we can pass in the DP field directly here. If the range and bins arguments are not set, this function will compute the range based on minimum and maximum values of the field and use the default 50 bins.

In [24]:
p = hl.plot.histogram(mt.DP, range=(0,30), bins=30, title='DP Histogram', legend='DP')
show(p)

Quality Control

QC is where analysts spend most of their time with sequencing datasets. QC is an iterative process, and is different for every project: there is no “push-button” solution for QC. Each time the Broad collects a new group of samples, it finds new batch effects. However, by practicing open science and discussing the QC process and decisions with others, we can establish a set of best practices as a community.

QC is entirely based on the ability to understand the properties of a dataset. Hail attempts to make this easier by providing the sample_qc function, which produces a set of useful metrics and stores them in a column field.

In [25]:
mt.col.describe()
--------------------------------------------------------
Type:
    struct {
        s: str,
        pheno: struct {
            Population: str,
            SuperPopulation: str,
            isFemale: bool,
            PurpleHair: bool,
            CaffeineConsumption: int32
        }
    }
--------------------------------------------------------
Source:
    <hail.matrixtable.MatrixTable object at 0x7efcc0a9ce48>
Index:
    ['column']
--------------------------------------------------------
In [26]:
mt = hl.sample_qc(mt)
In [27]:
mt.col.describe()
--------------------------------------------------------
Type:
    struct {
        s: str,
        pheno: struct {
            Population: str,
            SuperPopulation: str,
            isFemale: bool,
            PurpleHair: bool,
            CaffeineConsumption: int32
        },
        sample_qc: struct {
            dp_stats: struct {
                mean: float64,
                stdev: float64,
                min: float64,
                max: float64
            },
            gq_stats: struct {
                mean: float64,
                stdev: float64,
                min: float64,
                max: float64
            },
            call_rate: float64,
            n_called: int64,
            n_not_called: int64,
            n_hom_ref: int64,
            n_het: int64,
            n_hom_var: int64,
            n_non_ref: int64,
            n_singleton: int64,
            n_snp: int64,
            n_insertion: int64,
            n_deletion: int64,
            n_transition: int64,
            n_transversion: int64,
            n_star: int64,
            r_ti_tv: float64,
            r_het_hom_var: float64,
            r_insertion_deletion: float64
        }
    }
--------------------------------------------------------
Source:
    <hail.matrixtable.MatrixTable object at 0x7efcc0a27b00>
Index:
    ['column']
--------------------------------------------------------

Plotting the QC metrics is a good place to start.

In [28]:
p = hl.plot.histogram(mt.sample_qc.call_rate, range=(.88,1), legend='Call Rate')
show(p)
2018-12-16 22:43:27 Hail: INFO: Coerced sorted dataset
In [29]:
p = hl.plot.histogram(mt.sample_qc.gq_stats.mean, range=(10,70), legend='Mean Sample GQ')
show(p)
2018-12-16 22:43:28 Hail: INFO: Coerced sorted dataset

Often, these metrics are correlated.

In [30]:
p = hl.plot.scatter(mt.sample_qc.dp_stats.mean, mt.sample_qc.call_rate, xlabel='Mean DP', ylabel='Call Rate')
show(p)
2018-12-16 22:43:29 Hail: INFO: Coerced sorted dataset

Removing outliers from the dataset will generally improve association results. We can make arbitrary cutoffs and use them to filter:

In [31]:
mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97))
print('After filter, %d/284 samples remain.' % mt.count_cols())
After filter, 250/284 samples remain.
2018-12-16 22:43:30 Hail: INFO: Coerced sorted dataset

Next is genotype QC. To start, we’ll print the post-sample-QC call rate. It’s actually gone up since the initial summary - dropping low-quality samples disproportionately removed missing genotypes.

In [32]:
call_rate = mt.aggregate_entries(hl.agg.fraction(hl.is_defined(mt.GT)))
print('before genotype QC, call rate is %.3f' % call_rate)
before genotype QC, call rate is 0.992
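As an aside, the call rate computed above is nothing more than the fraction of non-missing genotype calls; a minimal pure-Python sketch (with None standing in for a missing call):

```python
def call_rate(genotypes):
    """Fraction of non-missing genotype calls (None = missing call)."""
    return sum(g is not None for g in genotypes) / len(genotypes)

# Four of these five calls are defined, so the call rate is 0.8.
print(call_rate([0, 1, 2, None, 0]))  # 0.8
```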

It’s a good idea to filter out genotypes where the reads aren’t where they should be: if we find a genotype called homozygous reference with >10% alternate reads, a genotype called homozygous alternate with >10% reference reads, or a genotype called heterozygote without a ref / alt balance near 1:1, it is likely to be an error.
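Before writing this as a Hail expression, here is the same allele-balance logic as a plain-Python predicate over a single genotype. This is an illustration only: the call strings 'hom_ref', 'het', and 'hom_var' stand in for Hail's Call methods, and ad is the allelic-depth array [ref_reads, alt_reads].

```python
def keep_genotype(call: str, ad: list) -> bool:
    """Apply the allele-balance filter to one genotype.

    call: 'hom_ref', 'het', or 'hom_var' (simplification of Hail's Call type)
    ad:   allelic depths [ref_reads, alt_reads]
    """
    ab = ad[1] / sum(ad)  # fraction of reads supporting the alternate allele
    if call == 'hom_ref':
        return ab <= 0.1          # few alternate reads expected
    if call == 'het':
        return 0.25 <= ab <= 0.75  # balance should be near 1:1
    return ab >= 0.9              # hom_var: few reference reads expected

print(keep_genotype('hom_ref', [9, 1]))  # True: 10% alt reads, borderline ok
print(keep_genotype('het', [19, 1]))     # False: 5% alt reads, far from 1:1
```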

In [33]:
ab = mt.AD[1] / hl.sum(mt.AD)

filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) |
                        (mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) |
                        (mt.GT.is_hom_var() & (ab >= 0.9)))

mt = mt.filter_entries(filter_condition_ab)
In [34]:
post_qc_call_rate = mt.aggregate_entries(hl.agg.fraction(hl.is_defined(mt.GT)))
print('post QC call rate is %.3f' % post_qc_call_rate)
post QC call rate is 0.955

Variant QC is a bit more of the same: we can use the variant_qc function to produce a variety of useful statistics, plot them, and filter.

In [35]:
mt = hl.variant_qc(mt)
2018-12-16 22:43:33 Hail: INFO: Coerced sorted dataset
In [36]:
mt.row.describe()
--------------------------------------------------------
Type:
    struct {
        locus: locus<GRCh37>,
        alleles: array<str>,
        rsid: str,
        qual: float64,
        filters: set<str>,
        info: struct {
            AC: array<int32>,
            AF: array<float64>,
            AN: int32,
            BaseQRankSum: float64,
            ClippingRankSum: float64,
            DP: int32,
            DS: bool,
            FS: float64,
            HaplotypeScore: float64,
            InbreedingCoeff: float64,
            MLEAC: array<int32>,
            MLEAF: array<float64>,
            MQ: float64,
            MQ0: int32,
            MQRankSum: float64,
            QD: float64,
            ReadPosRankSum: float64,
            set: str
        },
        variant_qc: struct {
            AC: array<int32>,
            AF: array<float64>,
            AN: int32,
            homozygote_count: array<int32>,
            dp_stats: struct {
                mean: float64,
                stdev: float64,
                min: float64,
                max: float64
            },
            gq_stats: struct {
                mean: float64,
                stdev: float64,
                min: float64,
                max: float64
            },
            n_called: int64,
            n_not_called: int64,
            call_rate: float32,
            n_het: int64,
            n_non_ref: int64,
            het_freq_hwe: float64,
            p_value_hwe: float64
        }
    }
--------------------------------------------------------
Source:
    <hail.matrixtable.MatrixTable object at 0x7efcc09246a0>
Index:
    ['row']
--------------------------------------------------------

These statistics actually look pretty good: we don’t need to filter this dataset. Most datasets require thoughtful quality control, though. The filter_rows method can help!

Let’s do a GWAS!

First, we need to restrict to variants that are common (we’ll use a cutoff of 1% minor allele frequency) and close enough to Hardy-Weinberg equilibrium that genotyping error is unlikely:

In [37]:
mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)
In [38]:
mt = mt.filter_rows(mt.variant_qc.p_value_hwe > 1e-6)
In [39]:
print('Samples: %d  Variants: %d' % (mt.count_cols(), mt.count_rows()))
2018-12-16 22:43:33 Hail: INFO: Coerced sorted dataset
Samples: 250  Variants: 7849

These filters removed about 15% of sites (we started with a bit over 10,000). This is NOT representative of most sequencing datasets! We have already downsampled the full thousand genomes dataset to include more common variants than we’d expect by chance.
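As an aside on the Hardy-Weinberg filter above: a chi-square approximation in pure Python (simpler than the exact test Hail uses for p_value_hwe) sketches the idea of comparing observed genotype counts to their expectation under equilibrium.

```python
def hwe_chisq(n_hom_ref, n_het, n_hom_var):
    """Chi-square statistic comparing observed genotype counts to HWE expectation."""
    n = n_hom_ref + n_het + n_hom_var
    p = (2 * n_hom_ref + n_het) / (2 * n)  # reference allele frequency
    q = 1 - p
    expected = [n * p * p, 2 * n * p * q, n * q * q]
    observed = [n_hom_ref, n_het, n_hom_var]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Counts exactly at HWE proportions give a statistic of 0;
# a deficit of heterozygotes (a common error signature) gives a large one.
print(hwe_chisq(25, 50, 25))
print(hwe_chisq(40, 20, 40))
```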

In Hail, the association tests accept column fields for the sample phenotype and covariates. Since we’ve already got our phenotype of interest (caffeine consumption) in the dataset, we are good to go:

In [40]:
gwas = hl.linear_regression_rows(y=mt.pheno.CaffeineConsumption,
                                 x=mt.GT.n_alt_alleles(),
                                 covariates=[1.0])
gwas.row.describe()
2018-12-16 22:43:36 Hail: INFO: Coerced sorted dataset
2018-12-16 22:43:37 Hail: INFO: linear_regression_rows: running on 250 samples for 1 response variable y,
    with input variable x, and 1 additional covariate...
--------------------------------------------------------
Type:
    struct {
        locus: locus<GRCh37>,
        alleles: array<str>,
        n: int32,
        sum_x: float64,
        y_transpose_x: float64,
        beta: float64,
        standard_error: float64,
        t_stat: float64,
        p_value: float64
    }
--------------------------------------------------------
Source:
    <hail.table.Table object at 0x7efcc0955978>
Index:
    ['row']
--------------------------------------------------------

Looking at the bottom of the above printout, you can see the linear regression adds new row fields for the beta, standard error, t-statistic, and p-value.

Hail makes it easy to visualize results! Let’s make a Manhattan plot:

In [41]:
p = hl.plot.manhattan(gwas.p_value)
show(p)

This doesn’t look like much of a skyline. Let’s check whether our GWAS was well controlled using a Q-Q (quantile-quantile) plot.

In [42]:
p = hl.plot.qq(gwas.p_value)
show(p)
2018-12-16 22:43:40 Hail: INFO: Ordering unsorted dataset with network shuffle

Confounded!

The observed p-values drift away from the expectation immediately. Either every SNP in our dataset is causally linked to caffeine consumption (unlikely), or there’s a confounder.
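One common way to put a number on this drift is the genomic inflation factor, lambda GC: the median observed 1-df chi-square statistic divided by its expected median under the null. A self-contained sketch using only the standard library (a list of p-values is the assumed input, not a Hail table):

```python
from statistics import NormalDist, median

def genomic_inflation(p_values):
    """Lambda GC: median observed chi-square over the null median (~0.4549).

    Uses the identity chisq_1(p) = (Phi^{-1}(1 - p/2))^2 to convert
    p-values back to 1-df chi-square statistics without scipy.
    """
    norm = NormalDist()
    chisq = [norm.inv_cdf(1 - p / 2) ** 2 for p in p_values]
    null_median = norm.inv_cdf(1 - 0.5 / 2) ** 2
    return median(chisq) / null_median

# Uniformly distributed p-values (a well-controlled study) give lambda = 1;
# values well above 1 indicate inflation like the drift in this Q-Q plot.
uniform_p = [i / 1000 for i in range(1, 1000)]
print(round(genomic_inflation(uniform_p), 2))  # 1.0
```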

We didn’t tell you, but sample ancestry was actually used to simulate this phenotype. This leads to a stratified distribution of the phenotype. The solution is to include ancestry as a covariate in our regression.

The linear_regression_rows function can also take column fields to use as covariates. We already annotated our samples with reported ancestry, but it is good to be skeptical of these labels due to human error. Genomes don’t have that problem! Instead of using reported ancestry, we will use genetic ancestry by including computed principal components in our model.

The pca function produces eigenvalues as a list and sample PCs as a Table, and can also produce variant loadings when asked. The hwe_normalized_pca function does the same, using HWE-normalized genotypes for the PCA.
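To see what HWE normalization means, here is a minimal numpy sketch of the idea (an illustration only; Hail's implementation is distributed and differs in detail): each variant is centered by its mean genotype 2p and scaled by the standard deviation sqrt(2p(1-p)) expected under Hardy-Weinberg equilibrium, so common and rare variants contribute comparably to the PCA.

```python
import numpy as np

def hwe_normalize(G):
    """HWE-normalize a genotype matrix (variants x samples, entries 0/1/2)."""
    G = np.asarray(G, dtype=float)
    p = G.mean(axis=1, keepdims=True) / 2  # alternate allele frequency per variant
    return (G - 2 * p) / np.sqrt(2 * p * (1 - p))

# Sample PC scores are then the right singular vectors of the normalized
# matrix, scaled by the singular values (one row of scores per sample).
G = np.array([[0, 1, 2, 1],
              [2, 2, 1, 0],
              [0, 0, 1, 2]])
X = hwe_normalize(G)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = Vt.T * S  # samples x components
print(scores.shape)  # (4, 3)
```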

In [43]:
eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)
2018-12-16 22:43:43 Hail: INFO: hwe_normalized_pca: running PCA using 7841 variants.
2018-12-16 22:43:46 Hail: INFO: pca: running PCA with 10 components...
2018-12-16 22:43:50 Hail: INFO: Coerced sorted dataset
In [44]:
pprint(eigenvalues)
[18.02377947184685,
 9.98894555036326,
 3.538312262917124,
 2.6577590783729943,
 1.5966032147658322,
 1.5416611649602319,
 1.5029872248781702,
 1.472081637853113,
 1.467818848733073,
 1.447783520133496]
In [45]:
pcs.show(5, width=100)
+-----------+
| s         |
+-----------+
| str       |
+-----------+
| "HG00096" |
| "HG00099" |
| "HG00105" |
| "HG00118" |
| "HG00129" |
+-----------+

+--------------------------------------------------------------------------------------------------+
| scores                                                                                           |
+--------------------------------------------------------------------------------------------------+
| array<float64>                                                                                   |
+--------------------------------------------------------------------------------------------------+
| [-1.22e-01,-2.81e-01,1.11e-01,-1.28e-01,6.81e-02,-3.72e-03,-2.66e-02,4.99e-03,-9.33e-02,-1.48... |
| [-1.13e-01,-2.90e-01,1.08e-01,-7.04e-02,4.20e-02,3.33e-02,1.61e-02,-1.15e-03,3.29e-02,2.33e-02]  |
| [-1.08e-01,-2.80e-01,1.03e-01,-1.05e-01,9.40e-02,1.27e-02,3.14e-02,3.08e-02,1.06e-02,-1.93e-02]  |
| [-1.25e-01,-2.98e-01,7.21e-02,-1.07e-01,2.89e-02,8.09e-03,-4.70e-02,-3.32e-02,-2.59e-04,8.49e... |
| [-1.07e-01,-2.87e-01,9.72e-02,-1.16e-01,1.38e-02,1.87e-02,-8.37e-02,-4.87e-02,3.73e-02,2.11e-02] |
+--------------------------------------------------------------------------------------------------+
showing top 5 rows

Now that we’ve got principal components per sample, we may as well plot them! Human history exerts a strong effect in genetic datasets. Even with a sequencing dataset this small, we can recover the major human populations.

In [46]:
mt = mt.annotate_cols(scores = pcs[mt.s].scores)
In [47]:
p = hl.plot.scatter(mt.scores[0],
                    mt.scores[1],
                    label=mt.pheno.SuperPopulation,
                    title='PCA', xlabel='PC1', ylabel='PC2')
show(p)
2018-12-16 22:43:51 Hail: INFO: Coerced sorted dataset
2018-12-16 22:43:51 Hail: INFO: Coerced sorted dataset

Now we can rerun our linear regression, controlling for sample sex and the first few principal components. We’ll again use the number of alternate alleles as the input variable; the genotype dosage derived from the PL field could be used instead.

In [48]:
gwas = hl.linear_regression_rows(
    y=mt.pheno.CaffeineConsumption,
    x=mt.GT.n_alt_alleles(),
    covariates=[1.0, mt.pheno.isFemale, mt.scores[0], mt.scores[1], mt.scores[2]])
2018-12-16 22:43:52 Hail: INFO: Coerced sorted dataset
2018-12-16 22:43:53 Hail: INFO: linear_regression_rows: running on 250 samples for 1 response variable y,
    with input variable x, and 5 additional covariates...

We’ll first make a Q-Q plot to assess inflation…

In [49]:
p = hl.plot.qq(gwas.p_value)
show(p)
2018-12-16 22:43:55 Hail: INFO: Ordering unsorted dataset with network shuffle

That’s more like it! This shape is indicative of a well-controlled (but not especially well-powered) study. And now for the Manhattan plot:

In [50]:
p = hl.plot.manhattan(gwas.p_value)
show(p)