Fletcher Christensen is an assistant professor of statistics in the Department of Mathematics and Statistics at the University of New Mexico. On this page, you will find information on his research interests and work, the courses he has taught, and some of his personal hobbies. If you want to contact Dr. Christensen, please use the email button provided in his profile.
I am adapting my dissertation for publication as two to three papers in peer-reviewed journals. The first will focus on my method for marginalizing generalized linear mixed models with normal random effects, and on applying that method to computing model selection criteria.
My research interests focus on the development and application of Bayesian methods. In particular, I work on Bayesian nonparametrics and Bayesian model selection. I am also interested in foundational issues such as decision theory and the philosophical roots of statistical methods. My background in the cognitive sciences and in biostatistics makes these two of my preferred areas for applied collaboration.
Nonparametric and semiparametric methods allow for additional flexibility in probabilistic modeling. Unlike traditional Bayesian methods, which use parametric univariate and multivariate probability distributions to characterize stochastic quantities of interest, nonparametric Bayesian methods assign probabilities to elements of the space of probability distributions itself. Two common nonparametric Bayesian tools are Dirichlet processes and Polya trees, which provide probabilities over spaces of discrete and continuous probability distributions, respectively.
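To make this concrete, here is a small R sketch (my own toy illustration; the concentration parameter and the N(0,1) base measure are arbitrary choices, not anything from my research) of a single draw from a Dirichlet process via Sethuraman's stick-breaking construction:

```r
# Toy illustration: one (truncated) draw from a Dirichlet process
# DP(alpha, G0) via Sethuraman's stick-breaking construction.
# alpha = 2 and G0 = N(0,1) are arbitrary illustrative choices.
set.seed(42)
alpha <- 2     # concentration parameter
K     <- 100   # truncation level for the infinite stick-breaking sum

v  <- rbeta(K, 1, alpha)              # stick-breaking proportions
w  <- v * cumprod(c(1, 1 - v[-K]))    # weights: w_k = v_k * prod_{j<k} (1 - v_j)
th <- rnorm(K)                        # atom locations drawn from the base measure G0

# The realized distribution is the discrete measure sum_k w_k * delta(th_k);
# resampling from it shows the almost-sure discreteness of DP draws.
x <- sample(th, 1000, replace = TRUE, prob = w)
head(sort(table(round(x, 2)), decreasing = TRUE))   # a few atoms dominate
```

The discreteness of the realized distribution is exactly the property that makes Dirichlet processes natural priors over discrete distributions.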
George Box famously said, "All models are wrong, but some are useful." Model selection determines which among a collection of candidate models are most useful for a particular scientific task (e.g., hypothesis generation or prediction). Research on Bayesian model selection takes many forms, from the analysis of Bayesian model selection criteria to the careful use of informative prior distributions to learn more about posterior probabilities for various models.
Decision theory is the one-player version of game theory. In a given situation, it relates the actions one can take to the unknown states of nature that will determine the consequences of taking that action. The discipline of decision theory attempts to choose the optimal action under uncertainty, given one's valuation of the consequences that may ensue and one's beliefs about the probability that those consequences will eventuate. Decision theory underlies a great deal of the development of modern statistics.
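As a toy illustration (invented numbers, not drawn from any particular application), here is that formal machinery in R: states of nature, a loss for each action-state pair, and beliefs about the states combine to identify the action with the smallest expected loss.

```r
# Toy decision problem: pick the action minimizing expected loss.
# States, losses, and beliefs are all invented numbers.
states <- c("disease", "healthy")
belief <- c(0.3, 0.7)                 # probabilities over the states of nature

# loss[a, s]: consequence of taking action a when state s obtains
loss <- rbind(treat    = c(1, 5),
              no_treat = c(50, 0))
colnames(loss) <- states

expected_loss <- loss %*% belief      # E[loss] for each action
expected_loss
rownames(loss)[which.min(expected_loss)]   # the optimal ("Bayes") action
```

With these numbers, treating is optimal; change the beliefs and the optimal action can change with them.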
Good statistical thinking depends on our understanding of epistemology and the philosophical roots of scientific inquiry. How do we know what we know? What constitutes knowledge? Bayesian notions of probability and learning provide the best solution to Hume's problem of induction. Statistics functions best when informed by a sense of how statistical methods address important difficulties in the rational acquisition of knowledge.
My biostatistical collaborations have focused on the effects of environmental toxicants on human fertility (work with Ulrike Luderer of UCI) and on brain connectivity differences between schizophrenia patients and healthy controls (work with the Mind Research Network).
I am looking for collaborations in psychology and the cognitive sciences. My own background in psychology, particularly industrial-organizational psychology and meta-analytic methods, makes this an area of special interest for me.
I am currently looking for collaborations across academic disciplines. If you have a problem you think I might find interesting, please let me know by email.
Elements of Mathematical Statistics teaches students about uncertainty: how to express it mathematically, how to understand it probabilistically, and how to deal with it statistically. When I teach this course, I spend the first half of the semester focusing on probability rules and common probability distributions. In the second half of the semester, I apply what we've learned about probability to real-world data, introducing statistical analysis and explaining its role in scientific inquiry.
Syllabus: sp18syllabus.pdf
Introduction to Bayesian Modeling is a first course in applied Bayesian data analysis. Knowledge of probability and regression modeling is expected. Students are introduced to subjectivist notions of probability and how outside expert information can be incorporated into data analysis through informative prior distributions. As I teach it, this course is heavily focused on statistical thinking and data analysis—not on the deeper math associated with Bayesian inference and MCMC methods.
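To give a flavor of how an informative prior enters an analysis, here is a minimal R sketch (all numbers invented for illustration) of conjugate Beta-Binomial updating, where expert opinion about a success rate is encoded as a Beta prior:

```r
# Toy illustration: expert opinion enters through an informative Beta prior.
# Suppose an expert puts the success rate near 70%, with confidence worth
# about 20 observations; we then see 12 successes in 30 trials.
a0 <- 14; b0 <- 6    # Beta(14, 6): prior mean 0.7, prior "sample size" 20
y  <- 12; n  <- 30   # invented data

a1 <- a0 + y         # conjugate updating: the posterior is Beta(a1, b1)
b1 <- b0 + n - y

a1 / (a1 + b1)                     # posterior mean, pulled toward the prior
qbeta(c(0.025, 0.975), a1, b1)     # 95% posterior credible interval
```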
Data Analysis Guide: Data_Analysis_Guide.pdf
Diasorin Data Analysis: Diasorin_DA.pdf
Poly-Aromatic Hydrocarbon Data Analysis: PAH_DA.pdf
Syllabus: sp18syllabus.pdf
Intermediate Bayesian Modeling builds on the material from STATS 477 / 577 by providing a deeper exploration of Bayesian inference and Monte Carlo methods. This course includes proofs as well as more detailed mathematical treatments of key Bayesian results. The Metropolis-Hastings algorithm is introduced, and students are guided to program their own sampling algorithms.
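The flavor of what students build is captured by the following minimal sketch, a random-walk Metropolis sampler for a standard normal target (the target and tuning here are illustrative choices, not a course assignment):

```r
# Minimal random-walk Metropolis sampler for a N(0,1) target.
# The target and the proposal scale are illustrative choices only.
log_target <- function(x) dnorm(x, log = TRUE)

metropolis <- function(n_iter, x0 = 0, prop_sd = 1) {
  x <- numeric(n_iter)
  x[1] <- x0
  for (i in 2:n_iter) {
    cand <- rnorm(1, mean = x[i - 1], sd = prop_sd)    # symmetric proposal
    log_ratio <- log_target(cand) - log_target(x[i - 1])
    x[i] <- if (log(runif(1)) < log_ratio) cand else x[i - 1]
  }
  x
}

draws <- metropolis(10000)
c(mean(draws), sd(draws))   # should land near 0 and 1, respectively
```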
Room: Dane Smith Hall, Rm. 318
Time: Tuesday & Thursday, 12:30pm – 1:45pm
Prerequisites: STATS 477 / 577 – Introduction to Bayesian Modeling
Syllabus: fa18syllabus.pdf
The material here is derived from a series of lectures De Finetti gave at the Henri Poincaré Institute (IHP) in 1935. De Finetti's Theorem is expounded and proven in Chapter 3, with further discussion of exchangeability continuing in Chapters 4 and 5. Earlier, in Chapters 1 and 2, De Finetti gives his own development of the notion of probability, which makes for entertaining reading but is not of critical importance for this class. Chapter 6 provides a philosophical summation of De Finetti's perspective and of how this work fits into it.
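For orientation while reading: in modern notation, the theorem of Chapter 3 says that any infinite exchangeable sequence of 0-1 random variables is a mixture of i.i.d. Bernoulli sequences. That is, there is a probability measure \mu on [0, 1] with

```latex
P(X_1 = x_1, \ldots, X_n = x_n)
  = \int_0^1 \theta^{\sum_i x_i} \,(1 - \theta)^{\,n - \sum_i x_i} \, d\mu(\theta)
```

for every n and every binary sequence (x_1, \ldots, x_n).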
Chapter 3 of my dissertation included a review of many of the model selection tools we've discussed in class, including a proof for the asymptotic equivalence of DIC and AIC under certain conditions. If you're looking for a good textual review of what I've done in lecture, this should work well.
UIowa's Joe Cavanaugh is an expert on model selection and information criteria. He is particularly good at distilling difficult mathematical arguments into easy-to-follow derivations. In the above papers, he lays out the detailed theoretical justifications for AIC and BIC in a way that should be understandable to well-prepared students of statistics.
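For reference, the two criteria themselves are simple to state: with \ell(\hat\theta) the maximized log-likelihood, k the number of parameters, and n the sample size,

```latex
\mathrm{AIC} = -2\,\ell(\hat\theta) + 2k,
\qquad
\mathrm{BIC} = -2\,\ell(\hat\theta) + k \log n.
```

The hard part, which Cavanaugh's papers supply, is the asymptotic argument for why these particular penalties are the right ones.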
Celeux et al. (2006) provide an in-depth discussion of how DIC can be operationalized for missing data models (including mixed models). Aside from its application to DIC, this is a good read purely for how it asks the reader to think harder about the deeper structure of missing data models.
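For comparison with the criteria above: writing the deviance as D(\theta) = -2 \log p(y \mid \theta), the standard formulation is

```latex
p_D = \overline{D} - D(\bar\theta),
\qquad
\mathrm{DIC} = \overline{D} + p_D = D(\bar\theta) + 2\,p_D,
```

where \overline{D} is the posterior mean deviance and \bar\theta a posterior point estimate. The difficulty Celeux et al. press on is that, in missing data models, it is not obvious which likelihood and which point estimate these quantities should use.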
Chi Feng, a research assistant at MIT's computational design laboratory, created an interactive gallery for visualizing different Monte Carlo sampling algorithms. Many of the algorithms demonstrated here are ones we haven't talked about in class, but this can help us visualize the behavior of the Metropolis algorithm as well as Hamiltonian Monte Carlo.
One of my Bayes students, Mustafa Salman, forked Chi Feng's original code and added a rejection-rate display in the top-left corner. This allows us to view the same demonstrations while also better understanding how modifying the tuning parameters affects the rejection rates. Good for understanding optimality issues. (GitHub code available here)
Homework 1 (tex) – Due 20 September (Solutions available)
Homework 2 (tex) – Due 9 October (Solutions available)
Homework 3 (tex) – Due 16 October (Solutions available)
Homework 4 (tex) – Due 13 November (Solutions available)
Instructions (tex) – Due 13 December by 12pm (noon)
citations.csv – Data file for final project
Assistant Professor
Albuquerque, NM
ronald@stat.unm.edu
(505) 277-4613
Ph.D. in Statistics
M.S. in Statistics
Pg.Dip. in Japanese Language and Culture
B.A. in Psychology
B.S. in Mathematics
Languages
English
Japanese
French
Assistant Professor of Statistics
Graduate Teaching Assistant
Graduate Research Assistant
Graduate Teaching Assistant
Assistant Language Teacher (JET Program)
New Approaches to Model Selection in Bayesian Mixed Modeling – Dissertation
"Associations between urinary biomarkers of polycyclic aromatic hydrocarbon exposure and reproductive function during menstrual cycles in women" — Article in Environment International
"Thalamus and posterior temporal lobe show greater inter-network connectivity at rest and across sensory paradigms in schizophrenia" – Article in Neuroimage
"Nonparametric regression estimators in complex surveys" – Article in JSCS
I'm a genre fiction geek, especially for sci-fi/fantasy. I've also been known to settle down with a good mystery or spy novel. Beyond that, I'm a hobby writer when I can find the free time, and a sometimes-Hugo-Award-voter. This section of the site is dedicated to all things fictive.
Extracts from the International Association of Time Travelers: Members' Forum.
Interesting stories get told at scientific conferences.
The pain of living in your brother's shadow, except in space with lasers.
"How can the net amount of entropy of the universe be massively decreased?"
This podcast, now in its 12th season, is run by four high-profile genre fiction authors: Brandon Sanderson, Dan Wells, Howard Tayler, and Mary Robinette Kowal. Not only are their insights pretty phenomenal, but they regularly bring in guests for alternative perspectives. Podcast episodes run about 15 minutes, but they always end with a writing prompt, so you might wind up spending more time than that on each episode.
Brandon Sanderson, one of the hosts of the Writing Excuses podcast and my favorite author, teaches a yearly creative writing class at Brigham Young University (BYU). His lectures occasionally get recorded and posted online (with his endorsement).
Coming... eventually.
On this page, I provide a number of links to content, some statistics-related and some not. Access at your own risk.
The statistical analysis software R is free to download and relatively easy to learn. I use it for most of my data analysis needs.
RStudio requires an installation of R, but makes managing R code and output much easier.
For Bayesian analyses, I often use OpenBUGS—a program for performing Gibbs sampling from a specified model. BUGS is an acronym for Bayesian inference Using Gibbs Sampling. A number of alternatives to OpenBUGS exist, notably including JAGS (Just Another Gibbs Sampler), Stan, and NIMBLE (which I like, but whose acronym is too long and arcane for me to bother repeating).
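As a taste of what using one of these programs looks like, here is a minimal sketch of a normal-mean model fit with JAGS through the rjags package in R (the model, priors, and data are all invented for illustration):

```r
# Minimal sketch: a normal-mean model fit with JAGS via the rjags package.
# The model, priors, and data below are invented for illustration.
library(rjags)

model_string <- "
model {
  for (i in 1:n) {
    y[i] ~ dnorm(mu, tau)      # BUGS/JAGS parameterize the normal by precision
  }
  mu  ~ dnorm(0, 0.001)        # diffuse normal prior on the mean
  tau ~ dgamma(0.001, 0.001)   # diffuse gamma prior on the precision
}"

y  <- rnorm(50, mean = 2, sd = 1)   # fake data
jm <- jags.model(textConnection(model_string),
                 data = list(y = y, n = length(y)), n.chains = 2)
update(jm, 1000)                    # burn-in
samp <- coda.samples(jm, variable.names = c("mu", "tau"), n.iter = 5000)
summary(samp)
```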
If you're a graduate student, you should be working right now! Perusal of the following links is not advised.
Every statistician should know the online webcomic xkcd. There isn't enough statistics-related humor in the world, but what there is can often be found in this comic.
If you have even the slightest inclination toward pop-culture geekery and you're looking to lose 2-3 days of your life, TV Tropes will help you in your quest. (Graduate students, I warned you not to click these links.)
the fire is dead. the room is freezing.
Are you looking for UNM Statistics' other Dr. Christensen? You can find his webpage here.
Office: SMLC 328
Spring 2020 Office Hours: