I am a fifth-year PhD candidate studying computational biology and machine learning in the Paul G. Allen School of Computer Science and Engineering at the University of Washington with Su-In Lee. I love working on the application of machine learning to personalized health. Over the next few decades, I believe automated data analysis will lead to significant advances in our understanding of health and treatment of disease. Before UW I had the opportunity to study graph theory at Colorado State University and to lead research projects at Numerica for several years.

My current work focuses on actionable machine learning in both basic biology and predictive medicine in the hospital. In both areas, a combination of interpretable models and transparent visualizations of the learned structure is important. This has led to our development of broadly applicable methods and tools for interpreting complex machine learning models.

## Open source software

• SHAP – A unified approach to explaining the output of any machine learning model. Under certain assumptions it can be shown to be the optimal linear explanation of a model’s prediction. It includes an implementation of an exact polynomial-time algorithm for tree models such as random forests and gradient boosted trees, making it particularly useful for these types of models.
• ChromNet.jl – A network learning method that ingests BAM/BED files and other pre-processed data bundles (such as the one provided for all human ENCODE ChIP-seq data).
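
The Shapley values that SHAP is built on can be computed exactly for a tiny model. The sketch below is a hypothetical helper for illustration, not the `shap` package API (the real package uses sampling or the tree-specific algorithm to avoid the exponential cost of enumerating all feature subsets):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x.

    Hypothetical illustration, not the shap package API. Each feature i
    is either "present" (takes x[i]) or "absent" (takes baseline[i]).
    Exponential in the number of features, so only for tiny examples.
    """
    n = len(x)
    phi = [0.0] * n

    def value(subset):
        # Model output with the features in `subset` taken from x,
        # the rest taken from the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy linear model: for linear models the Shapley value of feature i
# reduces to w_i * (x_i - baseline_i).
f = lambda z: 2 * z[0] + 3 * z[1]
phi = shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0])
# phi == [2.0, 3.0], and the values sum to f(x) - f(baseline)
```

The key property visible here is local accuracy: the attributions always sum to the difference between the model output at x and at the baseline.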

For a full list of open source packages, see GitHub.

## Websites

• ChromNet – An online visualization of the chromatin network estimated from ENCODE ChIP-seq data, or of custom networks that users upload.

## Comments

1. Eddie Herman says:

Hi Scott,
Can you direct me to the supplementary proof of Theorem 2 (up to 10 dimensions) in your paper:

S. Lundberg, S. Lee, “A unified approach to interpreting model predictions,” NIPS 2017 (selected for oral presentation)

It is not attached to the arXiv paper.

Thanks,
Eddie Herman

2. Eitan Anzenberg says:

Hi Scott, we met at NIPS 2017. I have a question about SHAP. Is it possible to run the model-agnostic SHAP without a background sample?

Good question. The issue is that since SHAP values rely on conditional expectations, there needs to be some definition of the background input feature distribution, even if that is just a single reference input defined by the user. Without any definition of the input feature distribution, it would be impossible to know whether a feature value was increasing or decreasing the model output (since increasing and decreasing are always relative to something else).
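
A minimal sketch of the point above, using a hypothetical helper name and a toy additive model (not the `shap` API): the same feature value can read as increasing or decreasing the output depending on the reference it is compared against.

```python
def effect_of_feature(f, x, reference, i):
    """Effect of feature i alone: swap x[i] into the reference input.

    Hypothetical helper for illustration, not part of the shap package.
    """
    z = list(reference)
    z[i] = x[i]
    return f(z) - f(reference)

f = lambda z: z[0] + z[1]  # toy additive model
x = [0.5, 0.0]

# With a reference of 0.0, feature 0 "increases" the output;
# with a reference of 1.0, the very same value "decreases" it.
up = effect_of_feature(f, x, reference=[0.0, 0.0], i=0)    # 0.5
down = effect_of_feature(f, x, reference=[1.0, 0.0], i=0)  # -0.5
```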

3. Zihao Zhou says:

Hello,

I saw your post “Interpretable Machine Learning with XGBoost” online. I am still quite confused about the SHAP contribution dependence plot. I made a similar plot in R as well, but I have a hard time understanding the change in log odds. It feels like a partial dependence plot, but it is not quite the same.

Thanks
Zihao Zhou

Hey! Would you mind posting this question as a GitHub issue? I am trying to keep all the discussion in one place 🙂

4. Ujjwal says:

Hi Scott

Can you provide an explanation of the Tree SHAP algorithm where the time complexity is reduced?

Thanks

If you could highlight what is unclear about the description in the arXiv paper, that would be helpful. You can also see a Python implementation on GitHub meant to make it easy to play with.

5. Olive Gu says:

Hi Scott,

I have a question from reading your paper “Consistent Individualized Feature Attribution for Tree Ensembles”. I am trying to understand how E[f(x) | x_S] is defined. In the text, you define E[f(x) | x_S] as “the expected value of the function conditioned on a subset S of the input features”. Based on this sentence, I assume E[f(x) | x_S] is the mean prediction over the observations in the training sample that match the conditioning set S. If I understood it correctly, then how do you calculate E[f(x) | x_S] for an instance whose x_S has never appeared in the training data set?
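
One common model-agnostic approximation of E[f(x) | x_S] sidesteps the need for an exact match of x_S in the training data: average the model output over a background sample with the features in S held fixed at x’s values. The sketch below uses hypothetical names for illustration; the tree-based algorithm in the paper instead follows the tree structure rather than looping over background rows.

```python
def expected_value_given_s(f, x, S, background):
    """Approximate E[f(X) | X_S = x_S] by averaging f over background
    rows with the features in S overwritten by x's values.

    Hypothetical sketch: no exact match of x_S in the data is needed,
    since every background row contributes after the overwrite.
    """
    total = 0.0
    for row in background:
        z = list(row)
        for i in S:
            z[i] = x[i]  # hold the conditioned features fixed
        total += f(z)
    return total / len(background)

f = lambda z: z[0] * 10 + z[1]           # toy model
background = [[0.0, 0.0], [0.0, 2.0], [0.0, 4.0]]

# Condition on feature 0 = 1.0; feature 1 averages over the background.
val = expected_value_given_s(f, x=[1.0, 9.9], S={0}, background=background)
# val == 10 + mean([0, 2, 4]) == 12.0
```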