Hail is an open-source, scalable framework for exploring and analyzing genomic data.
The Hail project began in Fall 2015 to empower the worldwide genetics community to harness the flood of genomes to discover the biology of human disease. Since then, Hail has expanded to enable analysis of large-scale datasets beyond the field of genomics.
We’re running an introductory Hail workshop in Boston on October 31, hosted by the Harvard School of Public Health. Reserve a spot here.
Here are two examples of projects powered by Hail:
- The gnomAD team uses Hail as its core analysis platform. gnomAD is among the most comprehensive catalogues of human genetic variation in the world, and one of the largest genetic datasets. Analysis results are shared publicly in Hail format and have had sweeping impact on biomedical research and the clinical diagnosis of genetic disorders.
- The Neale Lab at the Broad Institute used Hail to perform QC and stratified association analysis of 4203 phenotypes at each of 13M variants in 361,194 individuals from the UK Biobank in about a day. Results and code are here and tweeted daily by the GWASbot.
For genomics applications, Hail can:
- flexibly import from and export to a variety of data and annotation formats, including VCF, BGEN, and PLINK
- generate variant annotations like call rate, Hardy-Weinberg equilibrium p-value, and population-specific allele count; and import annotations in parallel through annotation datasets, VEP, and Nirvana
- generate sample annotations like mean depth, imputed sex, and TiTv ratio
- generate new annotations from existing ones as well as genotypes, and use these to filter samples, variants, and genotypes
- find Mendelian violations in trios, prune variants in linkage disequilibrium, analyze genetic similarity between samples, and compute sample scores and variant loadings using PCA
- perform variant, gene-burden, and eQTL association analyses using linear, logistic, Poisson, and linear mixed regression, and estimate heritability
- lots more! Check out some of the new features in Hail 0.2.
Hail’s functionality is exposed through Python and backed by distributed algorithms built on top of Apache Spark to efficiently analyze gigabyte-scale data on a laptop or terabyte-scale data on a cluster.
Users can script pipelines or explore data interactively in Jupyter notebooks that combine Hail’s methods, PySpark’s scalable SQL and machine learning algorithms, and Python libraries like pandas, scikit-learn and inline plotting.
To learn more, you can view our talks at Spark Summit East 2017 and Spark Summit West 2017.
To get started using Hail:
- install Hail 0.2 using the instructions in Installation
- follow the Tutorials for examples of how to use Hail
- read the Hailpedia for a broad introduction to Hail
- check out the Python API for detailed information on the programming interface
There are many ways to get in touch with the Hail team if you need help using Hail, or if you would like to suggest improvements or features. We also love to hear from new users about how they are using Hail.
Hail uses a continuous deployment approach to software development, which means we frequently add new features. We update users about changes to Hail via the Discussion Forum. We recommend creating an account on the Discussion Forum so that you can subscribe to these updates.
Hail is committed to open-source development. Our GitHub repo is publicly visible, and contributions to methods and infrastructure are welcome.
The Hail team is embedded in the Neale Lab at the Stanley Center for Psychiatric Research of the Broad Institute of MIT and Harvard and the Analytic and Translational Genetics Unit of Massachusetts General Hospital.
Contact the Hail team at
Follow Hail on Twitter @hailgenetics.
If you use Hail for published work, please cite the software:
- Hail, https://github.com/hail-is/hail
We would like to thank Zulip for supporting open-source by providing free hosting, and YourKit, LLC for generously providing free licenses for YourKit Java Profiler for open-source development.