Corpus ID: 685757

Interpretable Low-Dimensional Regression via Data-Adaptive Smoothing

@article{Tansey2017InterpretableLR,
  title={Interpretable Low-Dimensional Regression via Data-Adaptive Smoothing},
  author={Wesley Tansey and Jesse Thomason and James G. Scott},
  journal={arXiv: Machine Learning},
  year={2017}
}
  • Wesley Tansey, Jesse Thomason, James G. Scott
  • Published 2017
  • Mathematics
  • arXiv: Machine Learning
  • We consider the problem of estimating a regression function in the common situation where the number of features is small, where interpretability of the model is a high priority, and where simple linear or additive models fail to provide adequate performance. To address this problem, we present Maximum Variance Total Variation denoising (MVTV), an approach that is conceptually related both to CART and to the more recent CRISP algorithm, a state-of-the-art alternative method for interpretable…
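The computational core of this kind of smoothing is total-variation (fused-lasso-style) denoising over a grid laid on the feature space; several of the references below (the fused lasso, the fast Newton-type and modular proximal TV solvers, the generalized lasso path) address exactly that subproblem. The sketch below is not the authors' MVTV implementation: it simply bins a two-feature dataset onto a regular grid, averages y within each cell, and applies an off-the-shelf 2D TV denoiser (skimage's denoise_tv_chambolle) to the cell means. The grid size, TV weight, and synthetic data are arbitrary illustrative choices, and the data-adaptive selection of the smoothing level that gives MVTV its name is not attempted here.

# Minimal sketch of grid-based total-variation smoothing for a 2-feature
# regression problem. Not the authors' method; illustrative assumptions only.
import numpy as np
from scipy.stats import binned_statistic_2d
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)

# Synthetic 2-feature regression data with a piecewise-constant signal.
n = 5000
X = rng.uniform(0.0, 1.0, size=(n, 2))
signal = np.where((X[:, 0] > 0.5) & (X[:, 1] > 0.3), 2.0, -1.0)
y = signal + rng.normal(scale=1.0, size=n)

# Bin the feature space into a 20x20 grid and average y within each cell.
bins = 20
cell_mean, x_edges, y_edges, _ = binned_statistic_2d(
    X[:, 0], X[:, 1], y, statistic="mean", bins=bins)
cell_mean = np.nan_to_num(cell_mean, nan=float(np.nanmean(cell_mean)))  # fill empty cells

# Total-variation denoising of the cell means: a larger `weight` fuses more
# neighbouring cells into constant blocks (coarser, more interpretable surface).
smoothed = denoise_tv_chambolle(cell_mean, weight=0.5)

def predict(x1, x2):
    """Predict by looking up the smoothed value of the cell a point falls into."""
    i = np.clip(np.searchsorted(x_edges, x1) - 1, 0, bins - 1)
    j = np.clip(np.searchsorted(y_edges, x2) - 1, 0, bins - 1)
    return smoothed[i, j]

print(predict(0.8, 0.8), predict(0.2, 0.2))  # roughly 2 vs. roughly -1

A step closer in spirit to the paper would be to sweep the TV weight over a grid of values and pick one by a data-driven criterion (e.g. held-out error against the number of constant blocks), rather than fixing it by hand as above.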
    3 Citations

    A Categorisation of Post-hoc Explanations for Predictive Models
    Fibres of Failure: Classifying errors in predictive processes (2 citations)

    References

    Showing 1-10 of 15 references
    Convex Regression with Interpretable Sharp Partitions (11 citations)
    Monotonic Calibrated Interpolated Look-Up Tables (65 citations)
    Sparsity and smoothness via the fused lasso (2,039 citations)
    Fast Newton-type Methods for Total Variation Regularization (77 citations)
    Modular Proximal Optimization for Multidimensional Total-Variation Regularization (75 citations)
    The solution path of the generalized lasso (567 citations)
    Trend Filtering on Graphs (139 citations)
    Multiscale Spatial Density Smoothing: An Application to Large-Scale Radiological Survey and Anomaly Detection (17 citations)