Quantile regression

When we perform regression analysis with complicated predictive models such as neural networks, knowing how certain the model is about its predictions is highly valuable in many cases, for instance in applications within the health sector. The bootstrap prediction intervals that we covered last time require us to train the... [Read More]
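As a taste of the technique in the post's title, here is a minimal sketch of quantile regression using scikit-learn's gradient boosting with the quantile (pinball) loss; the synthetic data and hyperparameters are purely illustrative assumptions.

```python
# Sketch: estimate the 5th and 95th conditional percentiles with quantile loss.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(seed=42)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=500)

# One model per quantile; together they form a ~90% prediction interval.
lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

X_new = np.array([[2.0], [5.0]])
print(lower.predict(X_new), upper.predict(X_new))
```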

Bootstrapping prediction intervals

Continuing where we left off, in this post I will discuss a general way of producing accurate prediction intervals for any machine learning model in use today. The algorithm for producing these intervals uses bootstrapping and was introduced in Kumar and Srivastava (2012). [Read More]
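To illustrate the general idea, here is a simplified residual-bootstrap sketch of a prediction interval. It is not the algorithm from Kumar and Srivastava (2012) covered in the post, and the data and model are illustrative assumptions.

```python
# Sketch: bootstrap the training data and add resampled residuals to capture
# both model uncertainty and observation noise.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=0)
X = rng.uniform(0, 10, size=(200, 1))
y = 2 * X.ravel() + rng.normal(scale=1.0, size=200)
x_new = np.array([[5.0]])

preds = []
for _ in range(500):
    # Resample the training data with replacement and refit the model.
    idx = rng.integers(0, len(X), size=len(X))
    model = LinearRegression().fit(X[idx], y[idx])
    # Add a resampled training residual to account for observation noise.
    residuals = y[idx] - model.predict(X[idx])
    preds.append(model.predict(x_new)[0] + rng.choice(residuals))

# A roughly 90% prediction interval for the new observation.
print(np.percentile(preds, [5, 95]))
```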

Parametric prediction intervals

One aspect of machine learning that does not seem to attract much attention is quantifying the uncertainty of our models' predictions. In classification tasks we can partially remedy this by outputting conditional probabilities rather than boolean values, but what if the model outputs 52%? Is that a clear-cut positive... [Read More]
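For the regression case in the post's title, a minimal sketch of a parametric prediction interval is shown below, assuming roughly normally distributed residuals so that the interval is y_hat ± z · sigma_hat; the data and the 95% level are illustrative assumptions.

```python
# Sketch: parametric prediction interval under a normality assumption.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=1)
X = rng.uniform(0, 10, size=(300, 1))
y = 3 * X.ravel() + rng.normal(scale=2.0, size=300)

model = LinearRegression().fit(X, y)
sigma_hat = np.std(y - model.predict(X), ddof=1)  # residual standard deviation
z = stats.norm.ppf(0.975)                         # two-sided 95% quantile

y_hat = model.predict(np.array([[4.0]]))[0]
print(y_hat - z * sigma_hat, y_hat + z * sigma_hat)
```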

Evaluating confidence

This is the first post in which I delve into quantifying the uncertainty of statistical models. We start with the classical confidence interval, used to estimate the uncertainty of statistics computed from the data we are working with. Confidence intervals can be computed using normal theory, which is the classical... [Read More]
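As a quick taste of the normal-theory approach, here is a minimal sketch of a classical confidence interval for a sample mean, x_bar ± z · s / sqrt(n); the sample and the 95% level are illustrative assumptions.

```python
# Sketch: normal-theory 95% confidence interval for a mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
sample = rng.normal(loc=10.0, scale=3.0, size=100)

x_bar = sample.mean()
std_err = sample.std(ddof=1) / np.sqrt(len(sample))
z = stats.norm.ppf(0.975)  # two-sided 95% quantile

print(x_bar - z * std_err, x_bar + z * std_err)
```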

Scholarly

Categorising scientific papers

I recently finished Scholarly, a long-standing side project of mine, which predicts the category of a scientific paper from its title and abstract. More precisely, I am predicting the ~150 subject classification categories used by the arXiv preprint server, and have trained the model on all papers on... [Read More]
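To make the task concrete, here is a toy sketch of categorising papers from their text. It is not the Scholarly model itself; the tiny dataset and the TF-IDF + logistic regression pipeline are illustrative assumptions.

```python
# Sketch: multi-label classification of paper texts into subject categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

texts = [
    "Deep residual learning for image recognition",
    "A proof of the prime number theorem",
    "Gradient boosting with quantile loss for forecasting",
    "On the distribution of zeros of the Riemann zeta function",
]
labels = [["cs.CV"], ["math.NT"], ["stat.ML"], ["math.NT"]]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(labels)

model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
model.fit(texts, y)

# Inspect the per-category probabilities for an unseen title.
probs = model.predict_proba(["Residual networks for image classification"])
for label, prob in zip(binarizer.classes_, probs[0]):
    print(label, round(prob, 3))
```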