Kinematics 3 – Disk galaxy rotation curves (Gaussian process version)

The simple polynomial model for rotation velocities that I wrote about in the last two posts seems to work well, but there are some indications of model misspecification. The relatively large expansion velocities imply large scale noncircular motions, which are unexpected in an undisturbed disk galaxy (although the bar may play a role in this one) but may also indicate a problem with the model. There is also some spatial structure in the residuals, falsifying the initial assumption that they are iid. Since smoothing splines proved to be intractable I turned to Gaussian process (GP) regression for a more flexible model representation. GPs are commonly used for interpolation and Stan offers support for them, although that support is somewhat limited in the current version (2.17.x).

I know little about the theory or practical implementation of GP’s, so I will not review the Stan code in detail. The current version of the model code is at <https://github.com/mlpeck/vrot_stanmodels/blob/master/vrot_gp.stan>. Much of the code was borrowed from tutorial introductions by Rob Trangucci and Michael Betancourt. In particular the function gp_pred_rng() was taken directly from the former with minimal modification.

There were several complications I had to deal with. First, given a vector of variates \(\mathbf{y}\) and predictors \(\mathbf{x}\), the Gaussian process model looks something like

$$\mathbf{y} \sim \mathcal{GP}(\mathsf{m}(\mathbf{x}), \mathsf{\Sigma}(\mathbf{x}))$$

that is, the GP is defined in terms of a mean function and a covariance function. Almost every pedagogical introduction to GP’s that I’ve encountered immediately sets the mean function to 0. Now, my prior for the rotation velocity definitely excludes 0 (except at r=0), so I had to retain a mean function. To get started I again just use a low order polynomial for the rotation velocity. The code for this in the model block is

  v ~ normal(v_los, dv);    // measurement model: observed velocities given the latent ones

  cov = cov_exp_quad(xyhat, alpha, rho);
  for (n in 1:N) {
    cov[n, n] = cov[n, n] + square(sigma_los);    // add the noise variance to the diagonal
  }
  L_cov = cholesky_decompose(cov);
  v_los ~ multi_normal_cholesky(v_sys + sin_i * (P * c_r) .* yhat, L_cov);

The last line looks like the previous model except that I’ve dropped the expansion velocity term because I do expect it to be small and distributed around 0. Also I replaced the calculation of the polynomial with a matrix multiplication of a basis matrix of polynomial values with the vector of coefficients. For some unknown reason this is slightly more efficient in this model but slightly less efficient in the earlier one.

A second small complication is that just about every pedagogical introduction I’ve seen presents examples in 1D, while the covariates in this problem are 2D. That turned out to be a minor issue, but it did take some trial and error to find a usable data representation, and there is considerable copying between data types. For example, there is an N x 2 matrix Xhat which is defined as the result of a matrix multiplication. Another variable declared as row_vector[2] xyhat[N]; is required for the covariance matrix calculation and contains exactly the same values. All of this is done in the transformed parameters block.
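
To make the covariance calculation concrete, here is a small standalone R sketch of what cov_exp_quad() computes for 2D inputs, with the noise variance added to the diagonal as in the model block above. The helper name and all numerical values are made up for illustration.

```{R}
## Exponentiated-quadratic covariance on 2D inputs plus diagonal noise,
## mirroring the model block above. expquad_cov() is a hypothetical helper.
expquad_cov <- function(xy, alpha, rho, sigma = 0) {
  d2 <- as.matrix(dist(xy))^2                      # squared Euclidean distances
  alpha^2 * exp(-d2 / (2 * rho^2)) + diag(sigma^2, nrow(xy))
}

xy <- cbind(runif(25, -1, 1), runif(25, -1, 1))    # stand-in for xyhat
K  <- expquad_cov(xy, alpha = 1, rho = 0.3, sigma = 0.1)
L  <- t(chol(K))    # R's chol() is upper triangular; Stan's cholesky_decompose() is lower
```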

The main objective of a Gaussian process regression is to predict values of the variate at new values of the covariates. That is the purpose of the generated quantities block, with most of the work performed by the user defined function gp_pred_rng() that I borrowed. This function implements equations 2.23-2.24 in the book “Gaussian Processes for Machine Learning” by Rasmussen and Williams 2006 (the link goes to the book website; the electronic edition of the book is a free download). Those equations were derived for the case of a 0 mean function, which is not what we have here. To fit my model into the mathematical framework outlined there I subtract the systematic part of the rotation model from the latent line of sight velocities, so that the quantity passed as the second argument of the posterior predictive function has a prior mean of 0:

  v_gp = gp_pred_rng(xy_pred, v_los-v_sys-sin_i* (P * c_r) .* yhat, xyhat, alpha, rho, sigma_los);

then the mean function, evaluated at the prediction locations, is added back to get the predicted, deprojected velocity field:

  v_pred = (v_gp + sin_i * (poly(r_pred, order) * c_r) .* y_pred) *v_norm/sin_i;

Yet another complication is that the positions in the plane of the disk are effectively parameters too. I don’t know if the posterior predictive part of the model is correctly specified given these complications, but I went ahead with it anyway.
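
For reference, here is a plain R sketch of the Rasmussen & Williams predictive equations for the zero-mean case. The Stan function gp_pred_rng() additionally draws a sample from this distribution; the helper names and everything else below are mine, not code from the model.

```{R}
## Zero-mean GP predictive mean and covariance (Rasmussen & Williams eqs. 2.23-2.24).
## k_xy() and gp_predict() are hypothetical helpers for illustration only.
k_xy <- function(a, b, alpha, rho) {
  d2 <- outer(rowSums(a^2), rowSums(b^2), "+") - 2 * a %*% t(b)
  alpha^2 * exp(-d2 / (2 * rho^2))
}
gp_predict <- function(xy_pred, y, xy, alpha, rho, sigma) {
  K  <- k_xy(xy, xy, alpha, rho) + diag(sigma^2, nrow(xy))
  Ks <- k_xy(xy_pred, xy, alpha, rho)
  list(mean = Ks %*% solve(K, y),                        # eq. 2.23
       cov  = k_xy(xy_pred, xy_pred, alpha, rho) -
              Ks %*% solve(K, t(Ks)))                    # eq. 2.24
}
```

In the model the y argument is the latent velocity with the polynomial mean subtracted, and the mean function is added back to the GP prediction afterwards, as in the two lines of generated quantities code above.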

The positions I want predictions for are for equally spaced angles in a set of concentric rings. The R code to set this up is (this could have been done in the transformed data block in Stan):

  n_r <- round(sqrt(N_xy))                        # number of rings = number of angles per ring
  N_xy <- n_r^2                                   # total number of prediction points
  rho <- seq(1/n_r, 1, length=n_r)                # equally spaced ring radii
  theta <- seq(0, (n_r-1)*2*pi/n_r, length=n_r)   # equally spaced angles, endpoint excluded
  rt <- expand.grid(theta, rho)                   # theta varies fastest within each ring
  x_pred <- rt[,2]*cos(rt[,1])
  y_pred <- rt[,2]*sin(rt[,1])

The reason for this is simple. If the net, deprojected velocity field is given by

$$\frac{v - v_{sys}}{\sin i} = v_{rot}(r) \cos \theta + v_{exp}(r) \sin \theta$$

then, making use of the orthogonality of trigonometric functions

$$v_{rot}(r) = \frac{1}{\pi} \int^{2\pi}_0 (v_{rot} \cos \theta + v_{exp} \sin \theta) \cos \theta \mathrm{d}\theta$$

and

$$v_{exp}(r) = \frac{1}{\pi} \int^{2\pi}_0 (v_{rot} \cos \theta + v_{exp} \sin \theta) \sin \theta \mathrm{d}\theta$$

The next few lines of code just perform a trapezoidal rule approximation to these integrals:


  vrot_pred[1] = 0.;
  vexp_pred[1] = 0.;
  for (n in 1:N_r) {
    vrot_pred[n+1] = dot_product(v_pred[((n-1)*N_r+1):(n*N_r)], cos(theta_pred[((n-1)*N_r+1):(n*N_r)]));
    vexp_pred[n+1] = dot_product(v_pred[((n-1)*N_r+1):(n*N_r)], sin(theta_pred[((n-1)*N_r+1):(n*N_r)]));
  }
  vrot_pred = vrot_pred * 2./N_r;
  vexp_pred = vexp_pred * 2./N_r;

These are then used to reconstruct the model velocity field and the difference between the posterior predictive field and the systematic portion of the model:


  for (n in 1:N_r) {
    v_model[((n-1)*N_r+1):(n*N_r)] = vrot_pred[n+1] * cos(theta_pred[((n-1)*N_r+1):(n*N_r)]) +
                                     vexp_pred[n+1] * sin(theta_pred[((n-1)*N_r+1):(n*N_r)]);
  }
  v_res = v_pred - v_model;
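
As a sanity check on the normalization, the same trapezoidal-rule extraction can be reproduced in a few lines of R on a single synthetic ring (the velocity values below are invented):

```{R}
## Recover v_rot and v_exp from v = v_rot*cos(theta) + v_exp*sin(theta) on one ring.
n_theta <- 64
theta <- 2 * pi * (0:(n_theta - 1)) / n_theta        # equally spaced, endpoint excluded
vrot_true <- 1.8; vexp_true <- 0.2                   # made-up values
v_ring <- vrot_true * cos(theta) + vexp_true * sin(theta)

vrot_hat <- sum(v_ring * cos(theta)) * 2 / n_theta   # approximates (1/pi) * integral
vexp_hat <- sum(v_ring * sin(theta)) * 2 / n_theta
c(vrot_hat, vexp_hat)                                # recovers c(1.8, 0.2) up to rounding
```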

Besides some uncertainty about whether the posterior predictive inference is specified properly there is one very important problem with this model: the gradient computation, which tends to be the most computationally intensive part of HMC, is about 2 orders of magnitude slower than for the simple polynomial model discussed in the previous posts. The wall time isn’t proportionately this much larger because fewer “leapfrog” steps are typically needed per iteration and the target acceptance rate can be set to a lower value. This still makes analysis of data cubes quite out of reach, and as yet I have only analyzed a small number of galaxies’ RSS based data sets.

Here are some results for the sample data set from plateifu 8942-12702. This galaxy is too big for its IFU — the cataloged effective radius is 16.2″, which is about the radius of the IFU footprint. Therefore I set the maximum radius for predicted values to 1 reff.

The most important model outputs are the posterior predictive distributions for the rotational and expansion velocities:

vrot_vexp_gp
V_rot, V_exp, Gaussian process model

The rotational velocity is broadly similar to the polynomial model, but it has a very long plateau (the earlier model was beginning to turn over by 1 reff) and much larger uncertainty intervals. The expansion velocity curve displays completely different behavior from the polynomial model (note that the vertical scales of these two graphs are different). The mean predicted value is under 20 km/sec everywhere, and 0 is excluded only out to a radius around 6.5″, which is roughly the half length of the bar.

The reason for the broad credible intervals for the rotation velocity in the GP model compared to the polynomial model can be seen by comparing the posteriors of the sine of the inclination angle. The GP model’s is much broader and has a long tail towards very low inclinations. This results in a long tail of very high deprojected rotational velocities. Both models favor lower inclinations than the photometric estimate shown as a vertical line.

sin_i_comp
Posteriors of sin_i for polynomial and GP models

I’m going to leave off with one more graph. I was curious if it was really necessary to have a model for the rotation velocity, so I implemented the common pedagogical case of assuming the prior mean function is 0 (actually a constant to allow for an overall system velocity). Here are the resulting posterior predictive estimates of the rotation and expansion velocities:

vrot_vexp_mean0
V_rot, V_exp for GP model with constant prior mean

Conclusion: it is. There are several other indications of massive model failure, but these are sufficient I think.

In a future post (maybe not the next one) I’ll discuss an application that could actually produce publishable research.

Kinematics 2a – Disk galaxy rotation curves

I’m going to take a selective look at some of the Stan code for this model, then show some results for the RSS derived data set. The complete stan file is at <https://github.com/mlpeck/vrot_stanmodels/blob/master/vrot.stan>. By the way, development is far from frozen on this little side project, so this code may change in the future or I may add new models. I’ll try to keep things properly version controlled and my github repository in sync with my local copy.

Stan programs are organized into named blocks that declare and sometimes operate on different types of variables. There are seven distinct blocks that may be present — I use 6 of them in this program. The functions block in this program just defines a function that returns the values of the polynomial representing the rotational and expansion velocities. The data block is also straightforward:

data {
  int<lower=1> order;       //order for polynomial representation of velocities
  int<lower=1> N;           //number of data points
  vector[N] x;              //sky coordinates of fiber positions
  vector[N] y;
  vector[N] v;              //measured los velocity
  vector[N] dv;             //error estimate on v
  real phi0;                //prior mean for orientation angle
  real<lower=0.> sd_phi0;   //prior sd for orientation angle
  real si0;                 //prior mean for sin_i
  real<lower=0.> sd_si0;    //prior sd for sin_i
  real<lower=0.> sd_kc0;    //prior sd for kinematic center
  real<lower=0.> r_norm;    //length scaling factor
  real<lower=0.> v_norm;    //velocity scaling factor
  int<lower=1> N_r;         //number of points to fill in
  vector[N_r] r_post;       //radii to fill in
}

The basic data inputs are the Cartesian coordinates of the fiber positions as offsets from the IFU center, the measured radial velocities, and the estimated uncertainties of the velocities. I also pass the polynomial order for the velocity representations and the sample size as data, along with several quantities that are used for setting priors.

The parameters block is where the model parameters are declared, that is the variables we want to make probabilistic statements about. This should be mostly self documenting given the discussion in the previous post, but I sprinkle in some C++ style comments. I will get to the length N vector parameter v_los shortly.

parameters {
  real phi;         //position angle of the receding side
  real<lower=0., upper=1.> sin_i;  // sine disk inclination
  real x_c;         //kinematic centers
  real y_c;
  real v_sys;       //system velocity offset (should be small)
  vector[N] v_los;  //latent "real" los velocity
  vector[order] c_rot;   //polynomial coefficients for v_rot
  vector[order] c_exp;   //polynomial coefficients for v_exp
  real<lower=0.> sigma_los;   //sd of residuals around the systematic model
}

In this program the transformed parameters block mostly performs the matrix manipulations I wrote about in the last post. These can involve both parameters and data. A few of the declarations here improve efficiency in minor but useful ways recommended in the Stan help forums by members of the development team. For example the statement real sin_phi = sin(phi); saves recalculating the sine further down.

transformed parameters {
  vector[N] xc = x - x_c;
  vector[N] yc = y - y_c;
  real sin_phi = sin(phi);
  real cos_phi = cos(phi);
  matrix[N, 2] X;
  matrix[N, 2] Xhat;
  matrix[2, 2] Rot;
  matrix[2, 2] Stretch;
  vector[N] r;
  vector[N] xhat;
  vector[N] yhat;

  X = append_col(xc, yc);
  Rot = [ [-cos_phi, -sin_phi],
          [-sin_phi, cos_phi]];
  Stretch = diag_matrix([1./sqrt(1.-sin_i^2), 1.]');
  Xhat = X * Rot * Stretch;
  xhat = Xhat[ : , 1];
  yhat = Xhat[ : , 2];
  r = sqrt(yhat .* yhat + xhat .* xhat);
}

The model block is where the log probability is defined, which is what the sampler is sampling from:

model {
  phi ~ normal(phi0, sd_phi0);
  sin_i ~ normal(si0, sd_si0);
  x_c ~ normal(0, sd_kc0/r_norm);
  y_c ~ normal(0, sd_kc0/r_norm);
  v_sys ~ normal(0, 150./v_norm);
  sigma_los ~ normal(0, 50./v_norm);
  c_rot ~ normal(0, 1000./v_norm);                                                                        
  c_exp ~ normal(0, 1000./v_norm);

  v ~ normal(v_los, dv);
  v_los ~ normal(v_sys + sin_i * (
                  sump(c_rot, r, order) .* yhat + sump(c_exp, r, order) .* xhat),
                  sigma_los);
}

I try to be consistent about setting priors first in the model block and using proper priors that are more or less informative depending on what I actually know. In this case some of these are hard coded and some are passed to Stan as data and can be set by the user. I find it necessary to supply fairly informative priors for the orientation and inclination angles. For the latter it’s because of the weak identifiability of the inclination. For the former it’s to prevent mode hopping.

Stan’s developers recommend that parameters should be approximately unit scaled, and this usually entails rescaling the data. In this case we have both natural velocity and length scales. Typical peculiar velocities are on the order of no more than a few hundreds of km/sec., so I scale them to units of 100 km/sec. The MaNGA primary sample is intended to have each galaxy observed out to 1.5Reff, where Reff is the (r band) half light radius, so the maximum observed radius should typically be ∼1.5Reff. Therefore I typically divide all position measurements by that amount with the effective radius taken from nsa_elpetro_th50_r in drpcat. Other catalog values that I use as inputs to the model are nsa_elpetro_phi for the orientation angle and nsa_elpetro_ba for the cosine of the inclination. These are used as the central values of the priors and also to initialize the sampler.

Once I solved the problem of parametrizing angles the next source of pathologies I encountered was a tendency towards multimodal kinematic centers. This usually manifests as one or more chains being displaced from the others rather than mode hopping within a chain, although that can happen too. In the typical case the extra modes would place the center in an adjacent fiber to the one closest to the IFU center, but sometimes much larger jumps happen. Although I wouldn’t necessarily rule out the possibility of actual multimodality when it happens it has a severe effect on convergence diagnostics and on most other model parameters. I therefore found it necessary to place a tighter prior on the kinematic centers than I would have liked based solely on prior knowledge: the default prior puts about 95% of the prior mass within the radius of the central fiber. This could overconstrain the kinematic center in some cases, particularly low mass galaxies with weak central concentration.

A very common situation with astronomical data is to have quantities measured with error with an accompanying uncertainty estimate for each measurement. My code for estimating redshift offsets kicks out an uncertainty estimate, so this is no exception. I take a standard approach to modeling this: the unknown “true” line of sight velocity is treated as a latent variable, with the measured velocity generated from a distribution having mean equal to the latent variable and standard deviation equal to the estimated uncertainty. In this case the distribution is assumed to be Gaussian. The latent variable is then treated as the response variable in the model. Since I had no particular expectations about what the residuals from the model should look like I fell back on the statistician’s standard assumption that they are independent and identically distributed gaussians with standard deviation to be estimated as part of the model.

Having a latent parameter for every measurement seems extravagant, but in fact it causes no issues at all except for a slight performance penalty. With the distributional assumptions I made the latent variables can be marginalized out and posteriors for the remaining parameters will be the same within typical Monte Carlo variability. The main performance difference seems to arise from slower adaptation in the latent variables version.
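
Concretely, since both distributions are normal the latent velocities can be integrated out analytically: if \(v_{los,n} \sim \mathcal{N}(\mu_n, \sigma_{los})\) and \(v_n \sim \mathcal{N}(v_{los,n}, dv_n)\), where \(\mu_n\) is the systematic velocity model for the nth fiber, then marginally

$$v_n \sim \mathcal{N}\left(\mu_n, \sqrt{\sigma_{los}^2 + dv_n^2}\right)$$

which is the form the marginalized version of the model would use.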

Finally, some results from the observation with plateifu 8942-12702 (mangaid 1-218280). First, the measured velocity field and the posterior mean model. The observations came from the same data shown in the previous post interpolated onto a finer grid.

vf_mvf_1-218280
Measured and model velocity field – mangaid 1-218280

The model seems not too bad except for missing some high spatial frequency details. The residuals may be revealing (again this is the posterior mean difference between observations and model):

vfres_1-218280
residual velocity field

There is a strong hint of structure here, with a ridge of positive residuals that appears to follow the inner spiral arms at least part of the way around. There are also two regions of larger than normal residuals on opposite sides of the nucleus and with opposite signs. In this case they appear to be at or near the ends of the bar. This is a common feature of these maps — I will show another example or two later.

Next are the joint posterior distributions of \(v_{rot}\) with radius r and \(v_{exp}\) with r. Error bars are 95% credible intervals.

vrot_vexp_1-218280
Joint posteriors of v_rot, r and v_exp, r

The rotation curve seems not unrealistic given the limited expressiveness of the polynomial model. The initial rise is slow compared to typical examples in the literature, but this is probably attributable to the limited spatial resolution of the data. The peak value is completely reasonable for a fairly massive spiral. The expansion velocity on the other hand is surprising. It should be small, but it’s not especially so. Nowhere do the error bars include 0.

There are a number of graphical tools that are useful for assessing MCMC model performance. Here are some pairs plots of the main model parameters:

phi_sini_crot_1-218280
Pairs plot of posteriors of phi, sin_i, and c_rot
phi_sini_cexp_1-218280
Pairs plot of posteriors of phi, sin_i, and c_exp
xc_yc_vsys_1-218280
Pairs plot of x_c, y_c, v_sys

There are several significant correlations here, none of which are really surprising. The polynomial coefficients are strongly correlated. The inclination angle is strongly correlated with rotation velocity and slightly less so, but still significantly, with expansion velocity. The orientation angle is slightly correlated with rotation velocity but strongly correlated with expansion. The estimated system velocity is strongly correlated with the x position of the kinematic center but not y — this is because the rotation axis is nearly vertical in this case. The relatively high value of v_sys (posterior mean of 25 km/sec) is a bit of a surprise since the SDSS spectrum appeared to be well centered on the nucleus and with an accurate redshift measurement. Nevertheless the value appears to be robust.

Stan is quite aggressive with warnings about convergence issues, and it reported none for this model run. No graphical or quantitative diagnostics indicated any problems. Nevertheless there are several indicators of possible model misspecification.

  1. Signs of structure in residual maps are interesting because they indicate that perturbations of the velocity field as small as ∼10 km/sec by spiral structure or bars may be detectable. But they also indicate the assumption of iid residuals is false.
  2. It’s hard to tell from visual inspection if the symmetrically disposed velocity residuals near the nucleus are due to rotating rings of material or streaming motions, but in any case a low order polynomial can’t be fit to them. I will show more dramatic examples of this in a future post.
  3. Large scale expansion (or contraction) shouldn’t be seen in a disk galaxy, but the estimated expansion velocity is rather high in this case. Perturbations by the rather prominent bar would suggest that a model with a \(2 \theta\) angular dependence should be explored.

Next up, a possible partial solution.

 

Kinematics 2 – Disk galaxy rotation curves (simple version)

In the course of studying star formation histories I’ve accumulated lots of velocity fields similar to the ones I just posted about. For purposes of SFH modeling their only use is to reduce wavelengths to the galaxy rest frame. It’s not sufficient just to use the system redshift because peculiar velocities of a few hundred km/sec (typical for disk galaxies) translate into spectral shifts of 2-3 pixels, enough to seriously impair spectral fits. Naturally however I wanted to do something else with these data, so I decided to see if I could model rotation curves. And, since I’m interested in Bayesian statistics and was using it anyway, I decided to try Stan as a modeling tool. This is hardly a new idea; in fact I was motivated in part by a paper I encountered on arxiv by Oh et al. (2018), who attempt a more general version of essentially the same model.

The basic idea behind these models is that if the stars and/or gas are confined to a thin disk and moving in its plane we can recover their full velocities from the measured radial velocities when the plane of the galaxy is tilted by a moderate amount to our line of sight (inclination angles between about 20° and 70° are generally considered suitable). The equations describing the tilted disk model are given in, among many other places, Teuben (2002). I reproduce them here with some minor notational changes. The crude sketch below might or might not make the geometry of this a little clearer. The galaxy is inclined to our line of sight by some angle \(i\) (where 0 is face-on), with the position angle of the receding side \(\phi\), by convention measured counterclockwise from north. The observed radial velocity \(v(x,y)\) at Cartesian coordinates \((x,y)\) in the plane of the sky can be written in terms of polar coordinates \((r, \theta)\) in the plane of the galaxy as

$$v(x,y) = v_{sys} +  \sin i  (v_{rot} \cos \theta + v_{exp} \sin \theta) $$

where

$$\sin \theta = \frac{-(x-x_c) \cos \phi - (y-y_c) \sin \phi}{r \cos i}$$

$$\cos \theta = \frac{-(x-x_c) \sin \phi + (y-y_c) \cos \phi}{r}$$

and \(v_{sys}, v_{rot}, v_{exp}\) are, respectively, the overall system velocity at the kinematic center \((x_c, y_c)\) and the rotational (or transverse) and expansion (or radial) components of the measured line of sight velocity.

 

ellipse
the tilted disk model

The transformation between coordinates on the sky and coordinates in the plane of the galaxy can be written as a sequence of vector and matrix operations consisting of a translation, rotation, and a stretch:

$$[\hat x, \hat y] = [(x - x_c), (y-y_c)] \mathbf{R S}$$

$$ \mathbf{R} = \left( \begin{array}{cc} -\cos \phi & -\sin \phi \\ -\sin \phi & \cos \phi \end{array} \right)$$

$$ \mathbf{S} = \left( \begin{array}{cc} 1/\cos i & 0 \\ 0 & 1 \end{array} \right)$$

and now the first equation above relating the measured line of sight velocity to velocity components in the disk can be rewritten as

$$v(x, y) = v_{sys} + \sin i [v_{rot}(\hat x, \hat y) \hat y/r + v_{exp}(\hat x, \hat y) \hat x/r]$$

Note that I am defining, somewhat confusingly,

$$\cos \theta = \hat y/r \\ \sin \theta = \hat x/r$$

with

$$r = \sqrt{\hat x^2 + \hat y^2}$$

This is just to preserve the usual convention that the Y axis points up, while also preserving the astronomer’s convention that angles are measured counterclockwise from north.

Stan supports a full complement of matrix operations and it turns out to be slightly more efficient to code these coordinate transformations as matrix multiplications, even though it involves some copying between different data types.
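
The same sequence of operations is easy to write down in R for anyone who wants to play with the geometry. This is an illustrative sketch, not code from the model, and the values passed in at the end are arbitrary.

```{R}
## Sky-to-disk transformation: translate, rotate, stretch (cf. the equations above).
sky_to_disk <- function(x, y, x_c, y_c, phi, sin_i) {
  X <- cbind(x - x_c, y - y_c)                      # translation
  Rot <- matrix(c(-cos(phi), -sin(phi),
                  -sin(phi),  cos(phi)), 2, 2, byrow = TRUE)
  Stretch <- diag(c(1 / sqrt(1 - sin_i^2), 1))      # stretch the minor-axis direction by 1/cos(i)
  Xhat <- X %*% Rot %*% Stretch
  list(xhat = Xhat[, 1], yhat = Xhat[, 2], r = sqrt(rowSums(Xhat^2)))
}

d <- sky_to_disk(x = runif(10, -15, 15), y = runif(10, -15, 15),
                 x_c = 0, y_c = 0, phi = 40 * pi / 180, sin_i = 0.6)
```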

So far this is just geometry. What gives the model physical content is that both \(v_{rot}\) and \(v_{exp}\) are assumed to be strictly axisymmetric. This seems like a very strong assumption, but these can be seen as just the lowest order modes of a Fourier expansion of the velocity field. There’s no reason in principle why higher order Fourier terms can’t be incorporated into the model. The appropriateness of this first order model rests on whether it fits the data and its physical interpretability.

How to represent the velocities posed, and continues to pose, a challenge. Smoothing splines have desirable flexibility and there’s even a lengthy case study showing how to implement them in Stan. I tried to adapt that code and found it to be quite intractable, the problem being that coordinates in the disk frame are parameters, not data, so the spline basis has to be rebuilt at each iteration of the sampler.

In order to get started then I just chose a polynomial representation. This isn’t a great choice in general because different orders of the polynomial will be correlated, which leads to correlated coefficient estimates. If the polynomial order chosen is too high the matrix of predictors becomes ill conditioned and the data can be overfit. This is true for both traditional least squares and Bayesian methods. The minimum order to be included is linear, because both \(v_{rot}\) and \(v_{exp}\) have limiting values of 0 at radius 0. This is both because of continuity and identifiability: a non-zero velocity at 0 is captured in the scalar parameter \(v_{sys}\). It takes at least a 3rd order polynomial to represent the typical shape of a rotation curve; I found that convergence issues started to become a problem at as low as 4th order, so I just picked 3rd order for the models reported here. In the next or a later post I will show a partial solution to a more flexible representation, and I may revisit this topic in the future.
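
As a concrete, purely illustrative example of what such a basis looks like, here are plain monomials with no constant term, so the fitted velocity vanishes at r = 0. The Stan code builds its basis in its own way; the coefficients below are made up.

```{R}
## Third order polynomial basis with no intercept; c_rot values are hypothetical.
r <- seq(0, 1, length.out = 50)
order <- 3
P <- outer(r, 1:order, "^")     # columns r, r^2, r^3
c_rot <- c(2.5, -1.8, 0.4)
v_rot <- P %*% c_rot            # equals 0 at r = 0 by construction
```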

There are two angles in the model, and this presents lots of interesting complications. First, it’s reasonable to ask if these should be treated as data or parameters of the model. There are useful proxies for both in the photometric data provided in the MaNGA catalog; in fact there are several. Some studies do in fact fix their values. I think this is both a conceptual and practical error. First, these are photometric measurements, not kinematic ones. They aren’t the same thing. Second, they are subject to uncertainty, possibly a considerable amount. Failing to account for that could lead to seriously underestimated uncertainties in the velocities. In fact I think much of the literature is significantly overoptimistic about how well constrained rotation velocities are.

So, both the inclination and major axis orientation are parameters. And this immediately creates other issues. Hamiltonian Monte Carlo requires gradient evaluation, and the version implemented in Stan requires the gradient to exist everywhere in \(\mathbb{R}^N\). Bounded parameters are automatically transformed to create new, unbounded ones. But this creates an acknowledged problem that circles can’t be mapped to the real line, at least in a way that preserves probability measure everywhere. There are a couple potential ways to avoid this problem, and I chose different ones for the inclination and orientation angles.

The inclination turned out to be the easier to deal with, at least in terms of model specification. First, the inclination angle is constrained to be between 0 and 90°. Instead of the angle itself I make the parameter \(\sin i\) (or \(\cos i\); it turns out to make no difference at all which is used). This is constrained to be between 0 and 1 in its declaration. Stan maps this to the real line with a logistic transform, so the endpoints map to \(\mp \infty\). But that creates no issue in practice because, recall, this model is only applicable to disk galaxies with moderate inclination. Rotation can’t be measured at all in an exactly face on galaxy. Rotation is all that can be measured in an exactly edge on one, but this model is misspecified for that case. If contrary to expectations the posterior wants to pile up near 0 or 1 it tells us either that something is wrong with the model or data or that the actual inclination is outside the usable range.

Even though the parametrization is straightforward, inference about the inclination can be problematic because it is only weakly identified: notice it enters the velocity equation multiplicatively so, for example, halving its value and simultaneously doubling both velocity components produces exactly the same likelihood value. Identifiability therefore is only achieved through the stretch operation and through the prior.

There are two possible ways to parametrize the orientation angle, which is determined modulo \(2 \pi\). I chose to make the angle the parameter and to leave it unbounded. This creates a form of what statisticians call a label switching problem: suppose we try an improper uniform prior for this parameter (generally not a good idea), and suppose there’s a mode in the posterior at some angle \(\hat \phi\). Then there will be other modes at \(\hat \phi \pm \pi\) with the signs of the velocities flipped, and at \(\hat \phi \pm \pi/2\) with \(v_{rot}\) taking the role of \(v_{exp}\) and vice versa. Therefore even if there’s a well behaved distribution around \(\hat \phi\) it will be replicated infinitely many times, making the posterior improper. The solution to this is to introduce a proper prior and in practice it has to be informative enough to keep the sampler from hopping between modes.

Since the orientation angle enters the model only through its cosine and sine the other solution available in Stan is to declare the cosine/sine pair as a 2 dimensional unit vector. This solves the potential improper posterior problem, but it doesn’t solve the mode hopping problem; a moderately informative prior is still needed to keep the elements of the vector from flipping signs.

I’ve tried both the direct parameterization and the unit vector version, and both work with appropriate priors. I chose the former mostly because I expect the angle to have a fairly symmetrical posterior, which makes a gaussian prior a reasonable choice.  Choosing a prior for the unit vector that makes the posterior of the angle symmetrical seems a trickier task. It turns out that the data usually has a lot to say about the orientation angle, so as long as mode hopping is avoided its effect on inferences is much less problematic than the inclination.

Well, I’ve been more verbose than expected, so again I’ll stop the post halfway through. Next time I’ll take a selective look at the code and some results from the data set introduced last time.

 

Kinematics 1b – Velocity offsets finally

Now that I have some data in a form suitable for further analysis my next step is to estimate radial velocity offsets. Part of the metadata is a system redshift z taken from the MaNGA observation catalog, which in turn is derived from the NSA catalog. Most redshifts are from the SDSS legacy survey, and usually but not always the spectroscopic target is close to the system nucleus, which is generally at the IFU center. What I actually measure then are redshift offsets from the cataloged system redshift, and I expect those most of the time to be small. The algorithm I developed uses a straightforward, not quite brute force template matching procedure that I believe, without having seen the code, is similar to that used in the SDSS spectro pipeline. This is a simpler task though because there’s no need to distinguish among stars, galaxies, and QSO’s and the range of redshifts to be probed is much smaller.

The templates I use are a set of 15 eigenspectra from a principal components analysis of ∼16,000 SDSS legacy survey galaxy spectra using an algorithm published by Tsalmantza and Hogg (2012). These are of some independent interest, but I will defer writing about them until a later post. The algorithm returns a single blended estimate for both stars and ionized gas. This differs from the MaNGA science team’s data analysis pipeline which produces separate estimates for stars and gas. From the small number of thumbnail size plots of velocity fields that have been published it’s evident that one or the other of these estimates is usually significantly noisier. Blended estimates should be cleaner provided ionized gas and stars are kinematically tightly coupled. This isn’t always true — in fact there is at least one published paper studying MaNGA galaxies with kinematically decoupled gas and stars (Jin et al. 2016). It also performs poorly if multiple velocity components are present, as in ongoing mergers, or if spectra are contaminated by foreground stars.

For completeness, and because the function is fairly small, I reproduce the code here:

getdz <- function(gdat, lib, snrthresh=5, nlthresh=2000, dzlim=0.003, searchinterval=1e-4) {
  dims <- dim(gdat$flux)
  nr <- dims[1]
  nc <- dims[2]
  dz <- matrix(NA, nr, nc)
  dz.err <- matrix(NA, nr, nc)
  z <- gdat$meta$z
  lambda <- gdat$lambda
  fmla <- as.formula(paste("f ~", paste(names(lib)[-1], collapse="+")))
  fn <- function(x) {
    lambda.out <- lambda/(1+z+x)        # rest frame wavelengths for trial offset x
    lib.out <- regrid(lambda.out, lib)  # regrid the eigenspectra to those wavelengths
    lfit <- lm(fmla, data=lib.out, weights=iv)
    deviance(lfit)                      # weighted sum of squares
  }
  fn_v <- Vectorize(fn)
  z_search <- seq(-dzlim, dzlim, by=searchinterval)

  for (i in 1:nr) {
    for (j in 1:nc) {
      if (is.na(gdat$snr[i,j]) || gdat$snr[i,j]<=snrthresh) next
      f <- gdat$flux[i, j, ]
      if (length(which(!is.na(f))) < nlthresh) next
      iv <- gdat$ivar[i, j, ]
      dev_grid <- fn_v(z_search)                # grid search to bracket the global minimum
      z0 <- z_search[which.min(dev_grid)]
      bestz <- Rsolnp::solnp(pars=z0, fun=fn,   # refine within the bracketing interval
                             LB=z0-searchinterval, UB=z0+searchinterval,
                             control=list(trace=0))
      dz[i,j] <- bestz$pars
      dz.err[i,j] <- sqrt(2./bestz$hessian)
    }
  }
  list(dz=dz, dz.err=dz.err)
}

I’ll review a few points about this function. The arguments gdat and lib are the IFU data described in the last post and a data frame with the rest frame template wavelengths and eigenspectra. I’ve found that sometimes large portions of spectra are masked by the DRP; often these are near the edge of the IFU and have very low signal to noise, so I set a minimum threshold for number of unmasked wavelength bins. The other arguments with default values just set reasonable limits for SNR, maximum redshift deltas, and grid size for preliminary search.

The function fn is the key section of code. Given the system redshift and a trial offset given in the function argument it calculates the galaxy restframe wavelengths, regrids the eigenspectra library to that vector of wavelengths, then fits it to the galaxy spectrum by weighted least squares and returns the deviance (weighted sum of squares). This function takes advantage of R’s rather peculiar scoping rules. When R encounters a name it checks the current environment for a value. If it finds one it uses it, otherwise it checks the parent environment, and if necessary the parent of the parent, etc. until it either finds the name or runs out of environments.
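
A toy example of the scoping behavior fn relies on (nothing here is from the actual code):

```{R}
## `a` is not an argument of g(), but it is found in the enclosing
## function's environment when g() is called.
outer_fn <- function() {
  g <- function() a + 1
  a <- 41     # assigned in outer_fn's environment, after g is defined
  g()         # returns 42
}
outer_fn()
```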

Just below the function definition I use the built in R function Vectorize, which takes a function that returns a single value and creates a new function that returns a vector. The brute force part of this code is a search over a uniformly spaced grid to isolate an interval containing the global minimum. This calls the vectorized version fn_v of fn, thus avoiding an explicit loop (which are generally best avoided in R). This also lets me write the grid search portion in two lines of code, which could have been condensed to one. Finally I use a general nonlinear optimization routine to refine the redshift offset estimate. I’ve found the function solnp in the independently developed Rsolnp package to be somewhat more robust than R’s built in options, but for this application it’s somewhat of an overkill. The return value from the function is a list of redshift offsets and uncertainties estimated from the local curvature. Here by the way is another R peculiarity. The value returned by a function is the last expression evaluated, which in this case is just the creation of a named list. Although there is a built in function return() nobody seems to quite know what it actually does and it’s rarely necessary to use it.

This function has considerable room for efficiency improvements. For one thing this task is embarrassingly parallel since each spaxel/fiber is evaluated independently, so this could easily be broken into chunks and fed to multiple cores when available. Also, the built in R function lm() for linear modeling does more than necessary and is somewhat slower than just doing the matrix algebra. So this function may change in the future.
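
As a rough sketch of the second point, a bare-bones weighted least squares fit via a QR decomposition would look something like this; wls_deviance() is hypothetical and hasn’t been checked against getdz() beyond the algebra.

```{R}
## Weighted least squares without lm(): scale rows by sqrt(weights), then QR.
wls_deviance <- function(f, X, iv) {
  w <- sqrt(iv)
  beta <- qr.coef(qr(w * X), w * f)   # weighted LS coefficients
  resid <- f - X %*% beta
  sum(iv * resid^2)                   # deviance, i.e. the weighted sum of squares
}
## X would be cbind(1, as.matrix(lib.out[, -1])) to mimic the intercept that the
## formula f ~ eig1 + eig2 + ... includes by default.
```

The spaxel loop could likewise be handed to something like parallel::mclapply() on machines with multiple cores.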

Let’s take a look at a few examples. I modified the code to return the grid of deviance values for a single input spectrum. For this illustration I took the spectra from the eastern-most, central, and western-most fiber/pointing combinations from the stacked RSS data. The spectra are shown in the top row, with the run of deviance values with redshift offset in the bottom. Note there are many local minima in the deviance plots for the two corner spectra, which have very low S/N. This is probably due to accidental alignments of noise spikes with features in the eigenspectra rather than actual overlapping component populations. Somewhat surprisingly though the valleys in the vicinity of the global optima are even more steep sided than that for the central fiber, so the redshift offsets are quite well constrained. The presence of local optima is problematic for most nonlinear optimizers, which is why I use a brute force search to isolate global optima.

dzexamp Deviance vs. delta z for 3 fiber/pointing combinations

Next are the full velocity fields, first from the stacked RSS data and below it from the data cube. I set a signal to noise threshold of 3 for these, which seems optimistic but is usually OK for extracting velocities. The symbols in the first graph aren’t quite to scale — the fiber positions actually overlap somewhat. In the second graph contour lines are separated by 25 km/sec.

Experts in data visualization generally frown on the use of rainbows, but I think they’re OK for this one specific application. In my shared code I define

rainbow <- c("blue", "cyan", "green", "yellow", "red")  

as the set of additive and subtractive primary colors from blue through red. In ggplot2 plots of continuous variables this gets expanded to a fairly perceptually uniform spectrum optimized for display on a monitor.
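
The ggplot calls behind these maps presumably use something like the following (a standalone sketch with fake data, not the actual ggimage() internals):

```{R}
## Using the 5-colour vector as a continuous fill scale in ggplot2.
library(ggplot2)
rainbow <- c("blue", "cyan", "green", "yellow", "red")   # as defined above
df <- expand.grid(x = seq(-15, 15, 0.5), y = seq(-15, 15, 0.5))
df$v <- 150 * df$y / 15                                  # fake velocity field, km/sec
ggplot(df, aes(x, y, fill = v)) +
  geom_raster() +
  scale_fill_gradientn(colours = rainbow)
```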
vf-8942-12702 Example velocity field

vf_8942-12702-cube Example velocity field from data cube

I’m going to take a little closer look at the comparison between the RSS derived velocity field and the one from the cube. First I use the function interp from the package akima to interpolate the RSS data onto the same 74×74 grid as the data cube. The code to do that and display the results is:

> allok <- complete.cases(dv.stack)
> dv.interp <- akima::interp(x=gdat.stack$xpos[allok], y=gdat.stack$ypos[allok], z=dv.stack[allok], xo=-3600*gdat.cube$xpos, yo=3600.*gdat.cube$ypos, linear=FALSE)
> ggimage(dv.interp$z, dv.interp$x, dv.interp$y, col=rainbow, addContour = T, binwidth = 25, legend="dv km/sec")
vf_8942-12702-rss-interp
Interpolated velocity field from RSS data

These look pretty similar. Taking the difference:

vfdiff_8942-12702-rsscube
Velocity difference: interpolated RSS – cube

There’s some possible structure here — there’s an irregular ring of negative velocity differences punctuated with a small area of positive difference. It’s a little hard to relate these to features in the galaxy (and I’d like to develop better tools for this), but comparing to the SDSS supplied thumbnail with superposed IFU footprint the ring might correspond to the inner arms and perhaps where the bar meets the arms:

thumbnail for MaNGA plateifu 8942-12702

We’ll see later that arms and bars do measurably perturb velocities, and this suggests that different interpolation schemes might affect the fine details. But this is a higher order problem. Overall the velocity differences are symmetrically distributed around 0, with a standard deviation of just 2.7 km/sec and slightly heavy tails:

hist_vdiff

Next up — doing something with velocity fields…

Kinematics 1a – Velocity offsets


I have been working mostly with spectroscopic data from SDSS-IV [MaNGA](http://www.sdss.org/dr14/manga/) since its first public release in 2016. The first few releases include little in the way of value-added data beyond calibrated spectra and some metadata taken mostly from the [NASA-Sloan Atlas](http://nsatlas.org/). In particular no kinematic measurements are provided, which are essential to most analyses. In the next several posts I’m going to write about estimating and using the first moment of the line of sight velocity distributions (LOSVD), that is, velocity offsets from the system velocity. Keep in mind that estimating the moments of the LOSVD was the original objective of Cappellari and Emsellem’s `ppxf`, and anyone professionally engaged with the data should be using it or a similar tool that has passed peer review. But, since this is a hobby I developed my own tools. All software needed to replicate the examples posted here is available in my github repository at <https://github.com/mlpeck/vrot_stanmodels>.

The first step in any data analysis is to acquire some data, and SDSS makes this really easy. The base URL for MaNGA data in DR14, which is still the current release at the time of writing, is <https://data.sdss.org/sas/dr14/manga/spectro/redux/v2_1_2/>. You drill down two levels deeper to download individual files. An observation is uniquely identified by a plate and IFU number, which are combined to make a plateifu id. Objects on the other hand are assigned a unique mangaid, which might be associated with multiple plateifu’s if observations are repeated. I’m going to work with the observation having plateifu 8942-12702 in the next several posts, which is in the directory 8942/stack. Calibrated data products come in two basic flavors: “row stacked spectra” (RSS) files and data cubes. Both types are available with either linear or logarithmically spaced wavelength bins — since convolution is an essential step in most analyses I work with the logarithmically spaced files, which have equal velocity widths of \( c \Delta \log_{10}(\lambda) \ln(10) \approx 69 \) km/sec per wavelength bin. The files I’ll work with are [manga-8942-12702-LOGCUBE.fits.gz](https://data.sdss.org/sas/dr14/manga/spectro/redux/v2_1_2/8942/stack/manga-8942-12702-LOGCUBE.fits.gz) and [manga-8942-12702-LOGRSS.fits.gz](https://data.sdss.org/sas/dr14/manga/spectro/redux/v2_1_2/8942/stack/manga-8942-12702-LOGRSS.fits.gz). This particular target is the nice barred spiral UGC 4300 (image credit SDSS):

![UGC 4300](http://skyserver.sdss.org/dr14/SkyServerWS/ImgCutout/getjpeg?TaskName=Skyserver.Chart.Image&ra=124.003321406132&dec=27.0758846629965&scale=0.4&width=512&height=512&opt=GL&query=&Grid=on&Label=on)

Browser based downloads are fine to get a file or two, but for bulk access it’s easiest to use a tool like `wget` or `rsync`. This is documented on the SDSS data access pages. A small shell script helps automate the process:

```{bash}
#! /bin/bash
wget -nv -nc -r -nH -nd -P. \
-i $1 \
-B http://data.sdss.org/sas/dr14/manga/spectro/redux/v2_1_2/
```

This reads from a text file with the remainder of the path to the files to be downloaded listed one per line, fetches the files from the Science Archive Server, and stores them in the current working directory.

Next we need to get the data into R’s workspace. For this I use the R package [FITSio](https://cran.r-project.org/package=FITSio), which is the only package I’ve found for reading FITS files. While not as complete or polished as `astropy`’s FITS module it’s fine for reading in binary data tables. We also need some metadata from the MaNGA catalog, which is described [here](http://www.sdss.org/dr14/manga/manga-data/catalogs/). I have created a flat data table from the catalog and stored it in the R data file [drpcat.rda](https://github.com/mlpeck/vrot_stanmodels/blob/master/drpcat.rda) containing the R object `drpcat`. With the source files sourced into the R session the command to read a data cube is

```{R}
gdat.cube <- readcube("manga-8942-12702-LOGCUBE.fits", drpcat)
```

For any reader unfamiliar with R syntax the symbol `<-` is the assignment operator (don't ask), so this line is just assigning the value returned by `readcube()` to the variable `gdat.cube`, which...

```{R}
> str(gdat.cube)
List of 13
$ meta :List of 10
..$ mangaid : chr "1-218280"
..$ plateifu: chr "8942-12702"
..$ ra : num 124
..$ dec : num 27.1
..$ l : num 196
..$ b : num 29.6
..$ ebv : num 0.0367
..$ z : num 0.0255
..$ zdist : num 0.0254
..$ cpix : num [1:2] 38 38
$ xpos : num [1:74] 0.00514 0.005 0.00486 0.00472 0.00458 …
$ ypos : num [1:74] -0.00514 -0.005 -0.00486 -0.00472 -0.00458 …
$ lambda: num [1:4563(1d)] 3622 3622 3623 3624 3625 …
$ ra.f : num [1:74, 1:74] 124 124 124 124 124 …
$ dec.f : num [1:74, 1:74] 27.1 27.1 27.1 27.1 27.1 …
$ flux : num [1:74, 1:74, 1:4563] NA NA NA NA NA NA NA NA NA NA …
$ ivar : num [1:74, 1:74, 1:4563] NA NA NA NA NA NA NA NA NA NA …
$ snr : num [1:74, 1:74] NA NA NA NA NA NA NA NA NA NA …
$ gimg : num [1:74, 1:74] 0 0 0 0 0 0 0 0 0 0 …
$ rimg : num [1:74, 1:74] 0 0 0 0 0 0 0 0 0 0 …
$ iimg : num [1:74, 1:74] 0 0 0 0 0 0 0 0 0 0 …
$ zimg : num [1:74, 1:74] 0 0 0 0 0 0 0 0 0 0 …
```

is a large list containing another list with some metadata and a number of numerical arrays, some taken directly from the FITS file and some processed in various ways. I correct the fluxes and inverse variances for galactic foreground extinction and also check the `mask` array for spaxels that should be masked. These are set to `NA` in the `flux` and `ivar` arrays. For this observation using the largest, 127 fiber, IFU these are stored in 74 x 74 x 4563 arrays, with the number of wavelength bins equal to 4563. The 74 x 74 spaxel field is somewhat larger than the IFU footprint so only some spaxels have data — in this example just 3,049 out of 5,476, or about 56%.

RSS files are a little more complicated to work with. They contain one spectrum for each exposure and each fiber, while for every observation the IFU is dithered to 3 distinct positions. Therefore if there are Nexp total exposures there will be Nexp/3 exposures for each fiber/pointing combination. In the RSS FITS files the flux and other data are stored in 4563 x Nexp image arrays, but to use the data we need to identify the distinct fiber/pointing combinations and combine the data. This is complicated by the fact that it isn’t entirely clear how the rows are stacked — this varies between data sets, although the first 3Nfiber spectra appear always to comprise one complete set of 3 pointings. A further complication is that because of differential refraction the position on the sky seen by each fiber varies with wavelength and with exposure. I chose to gloss over that by taking the mean position (over each wavelength range)
as the fiber position for that exposure. I then do a k-nearest neighbor lookup to find the Nexp-1 neighbors of the first 3Nfiber exposures. Then, with a bit of rearranging of data I end up storing the flux and inverse variance measurements in 3Nfiber x Nexp/3 x 4563 arrays.

Now this isn’t what I really want. What I want is to combine the exposures for each fiber/pointing combination to get a single spectrum and inverse variance estimates. I call this a stacked data set, and I wrote separate functions to read and manipulate the RSS files and to stack the data. These could be combined of course; I really have no use for the RSS data itself. In any case the current workflow goes like

```{R}
> gdat.rss <- readrss("manga-8942-12702-LOGRSS.fits", drpcat)
> str(gdat.rss)
List of 7
$ meta :List of 10
..$ mangaid : chr "1-218280"
..$ plateifu: chr "8942-12702"
..$ nexp : num 9
..$ ra : num 124
..$ dec : num 27.1
..$ l : num 196
..$ b : num 29.6
..$ ebv : num 0.0367
..$ z : num 0.0255
..$ zdist : num 0.0254
$ lambda: num [1:4563] 3622 3622 3623 3624 3625 …
$ flux : num [1:381, 1:3, 1:4563] 1.913 0.861 2.618 -2.61 5.185 …
$ ivar : num [1:381, 1:3, 1:4563] 0.124 0.161 0.14 0.177 0.119 …
$ snr : num [1:381, 1:3] 2.28 2.2 2.42 2.71 2.76 …
$ xpos : num [1:381, 1:3] 6.67 7.89 5.37 2.78 4.13 …
$ ypos : num [1:381, 1:3] -12.6 -10.2 -10.4 -10.5 -12.7 …
> gdat.stack <- stackrss(gdat.rss)
> str(gdat.stack)
List of 10
$ meta :List of 10
..$ mangaid : chr "1-218280"
..$ plateifu: chr "8942-12702"
..$ nexp : num 9
..$ ra : num 124
..$ dec : num 27.1
..$ l : num 196
..$ b : num 29.6
..$ ebv : num 0.0367
..$ z : num 0.0255
..$ zdist : num 0.0254
$ lambda: num [1:4563] 3622 3622 3623 3624 3625 …
$ flux : num [1:381, 1, 1:4563] -0.538 2.972 1.319 -0.642 1.778 …
$ ivar : num [1:381, 1, 1:4563] 0.396 0.383 0.403 0.436 0.388 …
$ snr : num [1:381, 1] 3.93 3.89 4.18 4.7 4.81 …
$ xpos : num [1:381] 6.67 7.88 5.36 2.78 4.12 …
$ ypos : num [1:381] -12.6 -10.2 -10.4 -10.5 -12.7 …
$ ra.f : num [1:381] 124 124 124 124 124 …
$ dec.f : num [1:381] 27.1 27.1 27.1 27.1 27.1 …
$ dz : NULL
```
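
The combining step itself could be as simple as an inverse-variance weighted mean in each wavelength bin. I don't claim this is exactly what stackrss() does, but it conveys the idea; the helper below is illustrative only.

```{R}
## flux and ivar assumed here to be n_exposure x n_lambda matrices for one
## fiber/pointing combination; stack_exposures() is a hypothetical helper.
stack_exposures <- function(flux, ivar) {
  wsum <- colSums(ivar, na.rm = TRUE)
  list(flux = colSums(flux * ivar, na.rm = TRUE) / wsum,   # weighted mean spectrum
       ivar = wsum)                                        # combined inverse variance
}
```
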
Now, I work almost exclusively with these stacked derivatives of the RSS files. The main reason is necessity: the Bayesian models I work with, especially for star formation history modeling, are extremely computationally expensive. These can take some tens of minutes per spectrum on a state of the art desktop PC, even with a smaller number of samples than I’d like and some other compromises. At, say, 3 per hour, analyzing ∼3000 spectra would take about 42 days worth of CPU time, while the stacked RSS files have at most 381 spectra, making a single IFU’s analysis at most a 5 day project. Eliminating the compromises could increase execution time by factors of several, making even the latter analysis difficult for any reasonable size sample of galaxies.

There are some principled reasons as well to prefer the RSS files. The data cubes are generated from the RSS data via an interpolation scheme which can’t add to the information content but does introduce spatial covariance into any derived data. This should properly be modeled, but in practice usually isn’t — the MaNGA science team recommends inflating variance estimates by some factor. For many analyses it’s necessary to bin spaxels, and again the spatial covariance should properly be accounted for. The fiber spectra on the other hand can be treated as independent samples. The one significant disadvantage of working with the RSS data is that new visualization tools are needed. I’ve developed a number of such tools, and continue to work on them.

This post is longer than I originally intended, so I will save results for the next one and then move on to discussing one use of velocity data.

SDSS

As I mentioned in the first post I make extensive use of spectroscopic data products from the Sloan Digital Sky Survey (SDSS), including the single fiber galaxy spectra from the legacy survey and more recently "Integral Field Unit" (IFU) spectra from SDSS-IV MaNGA. For completeness here are links to recent data release and technical overview papers, and to the acknowledgements page. I hope links will suffice.

DR12 release paper (the final SDSS-III data release): Alam et al. 2015

DR13 release: Albareti et al. 2016

DR14 release: Abolfathi et al. 2017

MaNGA papers

General overview: Bundy et al. 2015

Sample and observing strategy: Law et al. 2015 and Wake et al. 2017

Data products: Law et al. 2016

Acknowledgments

SDSS-III acknowledgment

SDSS-IV acknowledgment

 

Getting started

I have a long standing interest in the star formation histories of galaxies, or perhaps more precisely what it’s possible to learn about the star formation histories of unresolved  galaxies from spectroscopic snapshots. This has been a topic of great interest to astronomers for as long as they’ve understood the true nature of galaxies, yet somewhat surprisingly this is still a relatively immature field — evidence for that is that the epistemic question (what is it possible to learn?) still has no definite answer. And that’s a good thing because it means there’s still room for creative exploration.

Why am I starting a blog and why now? This is a hobby for me. An eccentric hobby to be sure, but it keeps me writing software and intellectually stimulated. I got actively interested in this around the time of SDSS data release 9 (so, late 2012) when I belatedly realized massive amounts of spectroscopic data were available to anyone with an internet connection. A bit of searching turned up the other main ingredient needed to analyze spectroscopic data in the form of libraries of “simple stellar population” models. I’ve mostly used various iterations of the MILES SSP models, but I’ve also worked with subsets of the BC03 and CB07 models, and one or two others. I’ll get around to some comparative analyses of these sometime later.

My modeling efforts started out fairly simple — at the most basic level fitting spectra involves non-negative least squares, for which there is a very effective algorithm. This was already known from the work of Cappellari and Emsellem (2004) [1] and my optimization code still only provides a subset of the capabilities of Cappellari’s ppxf. However I had something of an epiphany a few years ago when I discovered it’s possible to do what Conroy calls non-parametric star formation history modeling with fully Bayesian techniques. Now this is no huge scientific breakthrough — it was mostly a lucky find of the right sampling strategy and implementation in the probabilistic modeling language Stan, but it’s a useful technical advance, and to the best of my knowledge I am the first to do this. All other Bayesian SED modeling codes that I’m aware of use a parametric functional form for specifying star formation histories.

One of the good things about doing something like this as an outsider hobbyist is there’s no pressure to present or publish results. The not so good thing about it is that without ready access to professional conferences and journals it’s difficult to communicate results. Writing papers seems like an unproductive exercise[2] — they’re tedious to write and equally tedious to read. Also, I never seem to reach a definite end state. I’m always modifying code, getting new ideas about what’s interesting to look at, and trying different visualization tools.

Around the same time I was starting this activity I discovered the original Galaxy Zoo forum and Galaxy Zoo itself. I’ve posted in the forum and the various versions of Talk for some time and even contribute clicks now and then. I was also one of the last “citizen scientists” (I really don’t like that term) to throw in the towel on the failed Galaxy Zoo Quench project. Posting on GZ Talk can be a frustrating experience. Posts quickly get lost in the noise and it’s difficult to write even mildly technical content because of software limitations including a very limited Markdown implementation in the current version. Pictures or graphical content also must be uploaded to a separate server and hotlinked, which adds extra steps and increases the risk of link rot. The engagement of the GZ science team with the volunteer community seems to be haphazard at best, and only a small percentage of the volunteers appear to have the technical background to understand what I’m writing.

So, I decided to start another blog. This may go largely unread too, but that’s OK. Consider it a diary that I’ve left out in the open.

My current plan is to post an occasional case study in the form of a markdown notebook. I intend to make these fully reproducible, which entails among other things making all code and data publicly available. Like probably most recreational programmers I’m not very good at it. Right now my code is scattered over several source files, with no documentation other than an occasional comment, and no longer used functions mixed in with current code. There are potential readability issues: to mention one example it’s completely legal and fairly common practice in R to use periods in variable and function names. I’ve been trying to get away from doing that by using underscores instead, but I’m not fully consistent about it. In any case I will eventually get the code cleaned up, properly version controlled, and placed on Github.

Most of my current code other than the Stan models is written in R. Why R? I discovered it many years ago while working on a data analysis task completely unrelated to galaxies. From hazy memory R was in about version 1.4.x at the time, which would make it around 2002. After 16 years of experience and authoring several R packages I’m reasonably proficient at it. If I were starting from scratch though, particularly for the kinds of modeling I plan to discuss here, I’d probably use Python. I do plan a Python port, but it may be more than a few months’ project.

While I’m mostly interested in star formation history modeling I plan to look at other things as well. Most likely I will write first about kinematics and disk galaxy rotation curves, since I sorta offered to “write something up” on the subject. I will probably delve into related topics in Bayesian statistics too. I may occasionally say something about current literature when I see a paper on arxiv that’s either interesting or conspicuously bad. I may consider inviting additional contributors. I promise never to write anything about voor-whatevers.


[1] The use of quadratic programming techniques for stellar population modeling actually predates this paper by over 3 decades (Lasker 1970, Faber 1972).

[2] I’ve written one anyway. If you’re really interested it’s archived here. If I actually wanted to publish this I would get rid of most of the workflow discussion and focus on the Bayesian modeling aspect. I’d also make sure to publish software simultaneously. I might choose a different case study. Also, all of my code has changed since then and I’ve disproved some of my speculations. For example priors don’t appear to be very influential. The choice of SSP library on the other hand…