I am a fifth-year PhD candidate studying computational biology and machine learning in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, advised by Su-In Lee. I love working on the application of machine learning to personalized health. Over the next few decades I believe automated data analysis will lead to significant advances in our understanding and treatment of disease. Before UW I had the opportunity to study graph theory at Colorado State University and to lead research projects at Numerica for several years.

My current work focuses on actionable machine learning in both basic biology and predictive medicine in the hospital. In both areas, a combination of interpretable models and transparent visualizations of the learned structure is important. This has led to our development of broadly applicable methods and tools for interpreting complex machine learning models.


Open source software

  • SHAP – A unified approach to explain the output of any machine learning model. Under certain assumptions it can be shown to be the optimal linear explanation of any model’s prediction. It includes an implementation of the first exact polynomial-time algorithm for tree models such as random forests and gradient boosted trees, making it particularly useful for these types of models.
  • ChromNet.jl – A network learning method that ingests BAM/BED files and other pre-processed data bundles (such as the one provided for all human ENCODE ChIP-seq data).
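To make the idea behind SHAP concrete, here is a minimal sketch of an exact Shapley-value computation by brute-force enumeration over feature coalitions (illustrative only, not the shap library's API; the library's tree algorithm avoids this exponential cost). The key property shown is local accuracy: the attributions sum to the difference between the model's output at the input and at the baseline.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x by brute-force enumeration.
    Features absent from a coalition are set to their baseline value."""
    n = len(x)

    def eval_subset(S):
        return f([x[i] if i in S else baseline[i] for i in range(n)])

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight for a coalition of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (eval_subset(set(S) | {i}) - eval_subset(set(S)))
    return phi

# Toy model with an interaction term.
f = lambda v: v[0] + 2 * v[1] + v[0] * v[1]
x, baseline = [1.0, 1.0], [0.0, 0.0]
phi = shapley_values(f, x, baseline)

# Local accuracy: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (f(x) - f(baseline))) < 1e-9
```

The double loop over coalitions is exponential in the number of features, which is exactly why an exact polynomial-time algorithm for tree models matters in practice.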

For a full list of open source packages, see GitHub.



  • ChromNet – An online visualization of the chromatin network estimated from ENCODE ChIP-seq data, or of custom networks users upload.



Current work


Previous work

6 thoughts on “About”

  1. Hi Scott,
    Can you direct me towards the supplementary proof of Theorem 2 up to 10 dimensions in your paper:
    S. Lundberg, S. Lee “A unified approach to interpreting model predictions,” NIPS 2017 (selected for oral presentation)
    It is not attached to the arXiv paper.

    Eddie Herman

  2. Hi Scott, we met at NIPS 2017. I have a question about SHAP. Is it possible to run the model-agnostic SHAP without a background sample?

    1. Good question. The issue is that, since SHAP values rely on conditional expectations, there needs to be some definition of the background input feature distribution, even if that is just a single reference input defined by the user. Without any definition of the input feature distribution it would be impossible to know whether a feature value was increasing or decreasing the model output (since increasing and decreasing are always relative to something else).
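    A tiny illustration of why the reference matters (a sketch, not the shap library itself): for a linear model f(x) = w · x, the exact SHAP value of feature i relative to a single background reference r is w_i * (x_i - r_i), so the same input can have attributions of opposite sign under different references.

    ```python
    def linear_shap(w, x, r):
        """Exact SHAP values for a linear model w . x,
        relative to a single background reference r."""
        return [wi * (xi - ri) for wi, xi, ri in zip(w, x, r)]

    w = [2.0, -1.0]
    x = [1.0, 1.0]

    # Same input, two different references -> attributions flip sign.
    print(linear_shap(w, x, [0.0, 0.0]))  # [2.0, -1.0]
    print(linear_shap(w, x, [2.0, 2.0]))  # [-2.0, 1.0]
    ```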

  3. Hello,

    I saw your post Interpretable Machine Learning with XGBoost online. I am still quite confused about the SHAP contribution dependence plot. I made a similar plot in R as well, but I have a hard time understanding the change in log odds. It feels like a partial dependence plot, but it is not quite the same.

    Zihao Zhou

    1. Hey! Would you mind posting this question as a GitHub issue? I am trying to keep all the discussion in one place 🙂
