I am a fifth-year PhD candidate studying computational biology and machine learning in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, advised by Su-In Lee. I love working on the application of machine learning to personalized health. Over the next few decades I believe automated data analysis will lead to significant advances in our understanding and treatment of health and disease. Before UW I had the opportunity to study graph theory at Colorado State University and to lead research projects at Numerica for several years.

My current work focuses on actionable machine learning in both basic biology and predictive medicine in the hospital. In both areas, a combination of interpretable models and transparent visualizations of the learned structure is important. This has led to our development of broadly applicable methods and tools for interpreting complex machine learning models.

## Open source software

- SHAP – A unified approach to explaining the output of any machine learning model. Under certain assumptions it can be shown to be the optimal linear explanation of a model’s prediction. It includes an exact polynomial-time algorithm for tree models such as random forests and gradient boosted trees, making it particularly useful for those types of models.
- ChromNet.jl – A network learning method that ingests BAM/BED files and other pre-processed data bundles (such as the one provided for all human ENCODE ChIP-seq data).
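To make the idea concrete, here is a brute-force sketch of the Shapley values that SHAP estimates (the function names, toy model, and baseline are illustrative only; the package itself uses far faster approximations and, for trees, the exact polynomial-time algorithm mentioned above):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature subsets.

    f        : model mapping a feature list to a number
    x        : the input to explain
    baseline : reference input that 'absent' features fall back to
    """
    n = len(x)

    def value(subset):
        # Features in `subset` take their value from x, the rest from baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += weight * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

# Toy model with an interaction between the first two features
f = lambda z: z[0] * z[1] + z[2]
x, base = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
phi = shapley_values(f, x, base)
# Local accuracy: attributions sum to f(x) - f(baseline)
assert abs(sum(phi) - (f(x) - f(base))) < 1e-9
```

This enumeration is exponential in the number of features, which is exactly why the polynomial-time tree algorithm matters in practice.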

For a full list of open source packages, see GitHub.

## Websites

- ChromNet – An online visualization of the chromatin network estimated from ENCODE ChIP-seq data, or of custom networks that users upload.

## Publications

### Current work

- **S. Lundberg**, B. Nair, M. Vavilala, M. Horibe, M. Eisses, T. Adams, D. Liston, D. Low, S. Newman, J. Kim, and S. Lee, “Explainable machine-learning predictions for the prevention of hypoxaemia during surgery,” Nature Biomedical Engineering, vol. 2, pp. 749–760, 2018. *(selected to be the cover article)* (free SharedIt link)
- **S. Lundberg**, G. Erion, and S. Lee, “Consistent Individualized Feature Attribution for Tree Ensembles,” pre-print.
- S. Lee, S. Celik, B. Logsdon, **S. Lundberg**, T. Martins, V. Oehler, E. Estey, C. Miller, S. Chien, J. Dai, and A. Saxena, “A machine learning approach to integrate big data for precision medicine in acute myeloid leukemia,” Nature Communications, 2018.
- **S. Lundberg** and S. Lee, “A unified approach to interpreting model predictions,” NIPS 2017. *(selected for oral presentation)* (3 min overview video) (errata)
- G. Erion, H. Chen, **S. Lundberg**, and S. Lee, “Anesthesiologist-level forecasting of hypoxemia with only SpO2 data using deep learning,” NIPS 2017 Workshop ML4H: Machine Learning for Health.
- H. Chen, **S. Lundberg**, and S. Lee, “Hybrid Gradient Boosting Trees and Neural Networks for Forecasting Operating Room Data,” NIPS 2017 Workshop ML4H: Machine Learning for Health.
- N. Hiranuma, **S. Lundberg**, and S. Lee, “CloudControl: Leveraging many public ChIP-seq control experiments to better remove background noise,” Proceedings of the 7th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, ACM, 2016.
- **S. Lundberg** and S. Lee, “An unexpected unity among methods for interpreting model predictions,” NIPS 2016 Workshop on Interpretable Machine Learning in Complex Systems. *(best paper award)*
- **S. Lundberg**, W. Tu, B. Raught, L. Penn, M. Hoffman, and S. Lee, “ChromNet: Learning the human chromatin network from all ENCODE ChIP-seq data,” Genome Biology, 2016. *(F1000Prime recommended)*

### Previous work

- **S. Lundberg**, C. Calderon, and R. Paffenroth, “Detecting Clustered Chem/Bio Signals in Noisy Sensor Feeds Using Adaptive Fusion,” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, vol. 8393, p. 1, 2012.
- R. Nong, R. Paffenroth, **S. Lundberg**, and W. Leed, “Method for Lossy Compression of Point Clouds with Pointwise Error Constraints,” patent application filed 2012.
- C. Calderon, A. Jones, **S. Lundberg**, and R. Paffenroth, “A data-driven approach for processing heterogeneous categorical sensor signals,” Proceedings of SPIE, vol. 8137, p. 813704, 2011.
- B. Joeris, **S. Lundberg**, and R. McConnell, “O(m log n) split decomposition of strongly-connected graphs,” Discrete Applied Mathematics, vol. 158, no. 7, pp. 779–799, 2010.
- **S. Lundberg**, R. Paffenroth, and J. Yosinski, “Analysis of CBRN sensor fusion methods,” 13th Conference on Information Fusion (FUSION), pp. 1–8, IEEE, 2010.
- A. Curtis, C. Izurieta, B. Joeris, **S. Lundberg**, and R. McConnell, “An implicit representation of chordal comparability graphs in linear time,” Discrete Applied Mathematics, vol. 158, no. 8, pp. 869–875, 2010.
- **S. Lundberg**, R. Paffenroth, and J. Yosinski, “Algorithms for Distributed Chemical Sensor Fusion,” Proceedings of SPIE, vol. 7698, p. 769806, 2010.
- **S. Lundberg**, “O(m log n) split decomposition of directed graphs,” Master’s thesis, Colorado State University, 2008.
- D. Moore, J. Stevens, **S. Lundberg**, and B. Draper, “Top down image segmentation using congealing and graph-cut,” 19th International Conference on Pattern Recognition (ICPR), pp. 1–4, IEEE, 2008.

## Comments

Hi Scott,

Can you direct me towards the supplementary proof of Theorem 2 up to 10 dimensions in your paper:

```
S. Lundberg, S. Lee “A unified approach to interpreting model predictions,” NIPS 2017 (selected for oral presentation)
```

It is not attached to the arXiv paper.

Thanks,

Eddie Herman

Hey! Sorry, I had gotten so many spam comments that I only just now realized I was missing the real ones.

If it still matters you can find the supplement at:

http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions

Hi Scott, we met at NIPS 2017. I have a question about SHAP. Is it possible to run the model-agnostic SHAP without a background sample?

Good question. The issue is that since SHAP values rely on conditional expectations, there needs to be some definition of the background input feature distribution, even if that is just a single user-defined reference input. Without any definition of the input feature distribution it would be impossible to know whether a feature value was increasing or decreasing the model output (since increasing and decreasing are always relative to something else).
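This point can be made concrete with a tiny sketch (the model, weights, and reference values below are all made up for illustration): for a linear model explained against a single reference input, each feature's attribution is its weight times its offset from the reference, so the very sign of an attribution flips with the choice of reference.

```python
# Linear model f(z) = w . z; attributions relative to a single reference r
# take the form w_j * (x_j - r_j), so "increasing" is relative to r.
w = [2.0, -1.0]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))

def linear_attributions(x, ref):
    return [wi * (xi - ri) for wi, xi, ri in zip(w, x, ref)]

x = [1.0, 1.0]
low_ref, high_ref = [0.0, 0.0], [3.0, 3.0]
phi_low = linear_attributions(x, low_ref)
phi_high = linear_attributions(x, high_ref)

# Same input, yet feature 0 "increases" the output relative to one
# reference and "decreases" it relative to the other.
assert phi_low[0] > 0 and phi_high[0] < 0
# Attributions always sum to f(x) minus the model output at the reference.
assert abs(sum(phi_low) - (f(x) - f(low_ref))) < 1e-9
assert abs(sum(phi_high) - (f(x) - f(high_ref))) < 1e-9
```

Without some reference (or distribution of references), there is simply no baseline for the differences above to be measured against.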

Hello,

I saw your post Interpretable Machine Learning with XGBoost online. I am still quite confused about the SHAP contribution dependence plot. I made a similar plot in R as well, but I have a hard time understanding the change in log odds. It feels like a partial dependence plot, but it is not quite the same.

Thanks

Zihao Zhou

Hey! Would you mind posting this question as a github issue? I am trying to keep all the discussion in one place 🙂

Hi Scott

Can you provide an explanation of the Tree SHAP algorithm where the time complexity is reduced?

Thanks

If you could highlight what is unclear about the description in the arXiv paper, that would be helpful. You can also see a Python implementation on GitHub meant to make it easy to play with.

Hi Scott,

I have a question from reading your paper “Consistent Individualized Feature Attribution for Tree Ensembles”. I am trying to understand how E[f(x) | x_S] is defined. In the text, you define E[f(x) | x_S] as “the expected value of the function conditioned on a subset S of the input features”. Based on this sentence, I assume E[f(x) | x_S] is the mean prediction over the observations in the training sample that match the conditioning set S. If I understood it correctly, then how do you calculate E[f(x) | x_S] for an instance whose x_S has never appeared in the training data set?

Hey! Could you post this as a github issue? That’s where I have been keeping all the discussions on SHAP. Thanks.
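For what it's worth, one common way to approximate E[f(x) | x_S] (a sketch assuming independent features; the function name and toy data below are hypothetical, not the paper's algorithm) is to fix the features in S to x's values and average the model output over a background sample, which requires no exact match of x_S in the training data:

```python
def expected_value_given_subset(f, x, S, background):
    """Approximate E[f(x) | x_S] by averaging over a background sample:
    features in S are held at x's values, the remaining features come
    from each background row in turn (an independence assumption)."""
    total = 0.0
    for row in background:
        z = [x[i] if i in S else row[i] for i in range(len(x))]
        total += f(z)
    return total / len(background)

f = lambda z: z[0] + 2 * z[1]
background = [[0.0, 0.0], [2.0, 2.0]]   # tiny stand-in for training data
x = [5.0, 5.0]                          # x_S never appears in background

# Fixing feature 0: mean of f([5, 0]) and f([5, 2]) = (5 + 9) / 2 = 7
assert abs(expected_value_given_subset(f, x, {0}, background) - 7.0) < 1e-9
# The empty subset recovers the plain expectation E[f(x)]
assert abs(expected_value_given_subset(f, x, set(), background) - 3.0) < 1e-9
```

The averaging sidesteps the unseen-x_S problem, at the cost of assuming the conditioned-on and marginalized-out features are independent.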

Hey Scott,

I have some questions regarding SHAP and its calculation. Specifically, is there any way I can calculate the feature contributions to each individual observation’s prediction? For example, which feature makes some sample’s prediction very high or very low, etc.

If you could send me an email, I could send more details. Appreciate your response.

Hey! Could you post this as a github issue and reference one of the examples there?

Hi Scott,

I have a question about your Corollary 1 (Linear SHAP) on page 6 of your paper “A Unified Approach to Interpreting Model Predictions.” Is the formula for phi_i correct? Is phi_i the SHAP value for an individual observation in that case? If it is, shouldn’t the notation be phi_i(f, x) = w_j (x_i – E[x_j])? Or was phi_i really phi_j, that is, the SHAP value for feature j? In that case the notation would be phi_j(f, x) = w_j (x_j – E[x_j]), with each element of the formula being a vector, not an individual observation?

Thank you,

Ariadna Martínez

Missed this comment when you first sent it. Yes, thanks! That is a mistake which I have now noted in an errata doc linked to from my paper list.
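As a sanity check on the corrected corollary (a brute-force sketch with a made-up linear model and background sample, assuming independent features), exact Shapley values computed by subset enumeration do match phi_j(f, x) = w_j (x_j − E[x_j]):

```python
from itertools import combinations
from math import factorial

# Linear model f(z) = w . z and a background sample defining E[x_j]
w = [3.0, -2.0, 0.5]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))
background = [[0.0, 1.0, 2.0], [4.0, 3.0, 0.0]]
mean = [sum(col) / len(background) for col in zip(*background)]
x = [1.0, 2.0, 3.0]
n = len(x)

def value(S):
    # E[f(x) | x_S] under feature independence: average over background
    return sum(f([x[i] if i in S else row[i] for i in range(n)])
               for row in background) / len(background)

for j in range(n):
    others = [i for i in range(n) if i != j]
    phi = sum(factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
              * (value(set(S) | {j}) - value(set(S)))
              for k in range(n) for S in combinations(others, k))
    # Linear SHAP (corrected): phi_j(f, x) = w_j * (x_j - E[x_j])
    assert abs(phi - w[j] * (x[j] - mean[j])) < 1e-9
```

For a linear model, feature j's marginal contribution w_j (x_j − E[x_j]) is the same for every subset S, so the Shapley weights (which sum to 1) leave it unchanged.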

Hi Scott,

I am working with a dataset with 5000 features and the SHAP values are all zeros. With 1000 features I get nice results.

Do you know why it happens? I am using Kernel SHAP.

Thanks,

Liat

Missed this in my WP comment section. My best guess is that the L1 penalty is not set correctly. By default it is chosen using BIC, but with 5k features it might just be getting turned up too high. I would try setting it manually and/or increasing the number of samples.
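The failure mode described above can be illustrated without the package (a sketch of the lasso's soft-thresholding rule under an orthonormal design, not Kernel SHAP's internals; the numbers are made up): once the L1 penalty exceeds every coefficient's unpenalized magnitude, the fitted explanation is identically zero.

```python
def soft_threshold(b, lam):
    """Lasso shrinkage of a single coefficient under an orthonormal design:
    shrink toward zero by lam, clipping to exactly zero inside [-lam, lam]."""
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

unpenalized = [0.8, -0.3, 0.05]   # toy least-squares coefficients
# A moderate penalty shrinks but keeps the strong signals...
kept = [soft_threshold(b, 0.1) for b in unpenalized]
assert abs(kept[0] - 0.7) < 1e-9 and abs(kept[1] + 0.2) < 1e-9 and kept[2] == 0.0
# ...but a penalty larger than every |coefficient| zeroes the whole explanation
assert all(soft_threshold(b, 1.0) == 0.0 for b in unpenalized)
```

With thousands of features an automatically chosen penalty can easily land in the second regime, which is consistent with the all-zeros symptom.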

Hello Scott,

I have been wondering: is there any way to use SHAP’s DeepExplainer without the Keras API?

Since I built a custom auto-encoder architecture with TensorFlow, the only types that I can provide are either the or .

In summary, I cannot provide the expected “model” type.

Is it too early to expect workarounds for alternative approaches? :))

Just getting to some page comments I found (late) here on WP. The short answer is yes: pass a pair of TF tensors, the first being the input and the second being the output. If you have more questions, post a GitHub issue.

Hi Scott,

I am a BME student at Case Western Reserve University, and I am trying to do a journal review of your latest work, “Explainable machine-learning predictions for the prevention of hypoxaemia during surgery,” in one of my health informatics classes, but I do not yet have access to the publication. I am wondering if I could gain access to the PDF of this article. It would be greatly appreciated (also on behalf of my peers) if I could receive the published PDF version of this work from you! Thanks!

-Keewon

Hi Keewon! These page comments got buried somewhere, so I apologize for the slow reply. In case you still need it, I posted a SharedIt link on my homepage that gives free access to the article.