Do the details of the line of sight velocity distribution matter for star formation history modeling? Probably not much.

I decided to try a set of models for one galaxy – NGC 4889 (with MaNGA plateifu 8479-12701), which had the highest overall velocity dispersion of the Coma sample I’ve been discussing in the last several posts. It also shows some evidence for multiple kinematic components, which isn’t too much of a surprise since it’s one of the central cD galaxies in Coma. The SSP model spectra fed to the SFH models were preconvolved with the element-wise means of the LOSVD convolution kernels from the velocity distribution modeling exercise. Again, this is an expedient to avoid what could otherwise be prohibitively expensive computation. The models I ran were the same as described back in this post — these ignore emission but do model dust attenuation with the usual modified Calzetti attenuation relation.
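To make the preconvolution step concrete, here's a minimal R sketch of what I mean, with hypothetical object names: ssp is an nl × n_age matrix of template fluxes on the same log-wavelength grid as the galaxy spectrum, and kernel_mean is the element-wise posterior mean of the LOSVD kernel (odd length, normalized to sum to 1).

convolve_ssp <- function(ssp, kernel_mean) {
  nk <- length(kernel_mean)
  stopifnot(nk %% 2 == 1)
  half <- (nk - 1) / 2
  apply(ssp, 2, function(x) {
    ## full ("open") convolution, then trim the zero-padded ends to keep the
    ## central part with the same length as the input
    full <- convolve(x, rev(kernel_mean), type = "open")
    full[(half + 1):(length(full) - half)]
  })
}

ssp_conv <- convolve_ssp(ssp, kernel_mean)

The SFH models then see ssp_conv in place of templates that would otherwise have to be convolved at every sampler iteration.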

To get quickly to the results, here are model star formation histories compared to the previous runs that used the full model in its current form. Usually I like these plots of results from all spectra in an IFU, but in this one all 381 spectra met my S/N criterion, so the plot is pretty crowded. You really need to see it live on a 4K monitor to see the details.

NGC 4889 (MaNGA plateifu 8479-12701) Model star formation histories for all spectra, runs with non-parametric LOSVD vs. single Gaussian stellar velocity dispersions

Well, it’s pretty hard to see, but differences in model SFHs are mostly in the youngest age bins, which are very poorly constrained anyway in these presumably passively evolving galaxies. Here’s a closer look at a single model run that had the largest difference in estimated stellar mass density (more on this right below) of about 0.19 dex:

NGC 4889 (MaNGA plateifu 8479-12701) Model mass growth histories for a single spectrum – runs with non-parametric LOSVD vs. single Gaussian stellar velocity dispersion

So, the difference in star formation histories was a slower mass buildup between lookback times of about 12 and 5 Gyr in the second run, which was responsible for the lower present day stellar mass density. How this resulted from the choice of LOSVD is not at all obvious.

Let’s look at a few summary results. First, the model stellar mass surface densities:

NGC 4889 (MaNGA plateifu 8479-12701) Model ΣM* – runs with non-parametric LOSVD vs. single Gaussian stellar velocity dispersion

These fall on an almost exactly one to one relation with a few handfuls of outliers. Oddly, these are mostly in the higher signal to noise region of the IFU (i.e. near the center).

Results for star formation rate density and specific star formation rate are even more consistent between runs, with essentially no differences larger than the nominal 1 σ error bars.

NGC 4889 (MaNGA plateifu 8479-12701) Model Σsfr – runs with non-parametric LOSVD vs. single Gaussian stellar velocity dispersion
NGC 4889 (MaNGA plateifu 8479-12701) Model SSFR – runs with non-parametric LOSVD vs. single Gaussian stellar velocity dispersion

One problem I encountered was that I had to re-run some models either for technical reasons or because of obvious convergence failures. I suspect there could have been some convergence issues in both sets of runs and am slightly worried that could be the source of the few differences in summary measures seen. Oddly, there were almost no suspicious convergence diagnostics in either set of runs (once the latter were run to satisfactory conclusion), and Stan is quite aggressive about reporting possible convergence issues.

Anyway, modeling kinematics remains an interesting topic to me, but it seems somewhat decoupled from modeling star formation histories. Right now I’m waiting for the final SDSS data release to decide what projects I want to tackle.

I’m going to end with a couple of asides. First, I recognize that all of these error bars are overoptimistic, maybe by a lot. The main reason, I think, is that I treat the flux values as independent, which they clearly are not (this is pretty standard practice, however); that effectively results in overestimating the sample size. One possible partial solution is to allow the flux uncertainties to vary from their nominal values by, for example, a factor > 1. This would involve adding as few as one parameter to the models, which is something I’ve actually tried in the past. I may revisit that.

One interesting feature of the previous two graphs is the rather obvious systematic trend with radius of both SFR density and specific star formation rate, as shown more directly below taken from the first set of model runs:

NGC 4889 (MaNGA plateifu 8479-12701) ΣSFR and SSFR vs. distance from IFU center

Are these real trends? I don’t know, but I don’t see an obvious reason why they might be spurious features of the models. In normal star forming galaxies I encounter trends with radius in both directions and sometimes no trend at all.

As a final and related aside, a paper by Sedgwick et al. showed up on arXiv not long ago that presented estimates of star formation rates of early type galaxies from observations of core collapse supernovae carefully matched to host galaxies with high confidence morphological classifications. To oversimplify their conclusions, they found that massive ellipticals typically might have specific star formation rates of ∼10^-11 / yr, which is somewhat higher than usually supposed. As I mentioned in my last post my models will always have some contribution from young stars, and I typically get central estimates of SSFR ≳ 10^-11.5 even in galaxies with no hint of emission (as is the case with this Coma sample). This particular galaxy has a total stellar mass within the IFU of ∼10^11.5 M⊙, so it could be forming stars at a rate of ∼1 M⊙ / yr.

Well, I think I have one more post to write before the SDSS DR17 release.

Confronting SFH models with observables – some results for normal disk galaxies

I’ve posted versions of some of these graphs before for both individual galaxies and a few larger samples, but I think they’ve all been unusual ones. I recently managed to complete model runs on 40 of the spirals from the normal barred and non-barred sample I discussed back in this post. The 20 barred and 20 non-barred galaxies in the sample aren’t really enough to address the results in the paper by Fraser-McKelvie et al. that was the starting point for my investigation, and more importantly the initial sample was chosen entirely at my whim. Unfortunately I don’t have the computer resources to analyze more than a small fraction of MaNGA galaxies. The sampling part of the modeling process takes about 15 minutes per spectrum on my 16 core PC (which is a huge improvement), and there are typically ~120 binned spectra per galaxy, so it takes ~30 hours per galaxy with one PC running at full capacity. I should probably take up cryptocurrency mining instead.

This sample comprises 5086 model runs with 2967 spectra of non-barred and 2119 of barred spirals. For some of the plots I’ll add results for 3348 spectra of 33 passively evolving Coma cluster galaxies.

Anyway, first: the modeled star formation rate density versus the rate predicted from the Hα luminosity density, which is easily the most widely used star formation rate calibrator at optical wavelengths. The first plot below shows all spectra with estimates for both values. Red dots are (non-barred) spirals, blue are barred. Both sets of quantities have uncertainties calculated, but I’ve left off error bars for clarity. Units on both axes are log10(Msun/yr/kpc^2). I adopted the relation log(SFR) = log(L) – 41.26 from a review by Calzetti (2012), which is the straight line in these graphs. That calibration is traceable back to Kennicutt (1983), which as far as I know has never been revisited except for small adjustments to account for changing fashions in assumed stellar initial mass functions. In the left panel of the plot below Hα is uncorrected for attenuation. In the right it’s corrected using the modeled stellar attenuation, which as I noted some time ago will systematically underestimate the attenuation in H II regions. Not too surprisingly almost all points lie above the calibration line — the SFH models include a treatment of attenuation that might be too simple but still does make a correction for starlight lost to dust. The more important observation though is that there’s a pretty tight relationship between modeled SFR density and estimated Hα luminosity density that holds over a nearly 3 order of magnitude range in both. The scatter around a simple regression line in the graphs below is about 0.2 dex. It’s not really evident on visual inspection but the points do shift slightly to the right in the right hand plot and there’s also a very slight reduction in scatter. These galaxies are actually not especially dusty, with an average model optical depth of around 0.25 (which corresponds to E(B-V) ≈ 0.07).

SFR density vs. prediction from Hα luminosity for 40 normal spirals. (L) Hα luminosity uncorrected for attenuation. (R) Hα corrected using estimated attenuation of stellar component.

To take a more refined look at this I limited the sample to regions with star forming emission line ratios using the standard BPT diagnostic based on [O III]/Hβ vs. [N II]/Hα. I require at least a 3σ detection in each line to make a classification, so besides limiting the analysis to regions that are in fact (I hope) forming stars it allows correcting Hα for attenuation using the observed Balmer decrement, since Hβ is by construction at least nominally detected. Now we get the results shown in the plot below. Units and symbols are as before. Hα luminosity is corrected using the Balmer decrement assuming an intrinsic ratio of 2.86 and the same attenuation curve shape as returned by the model. The SFR-Hα calibration line is the thick red one. The blue lines with grey ribbons are from “robust” simple regressions using the function lmrob in the R package robustbase (correcting for attenuation produced a few significant outliers that bias an ordinary least squares fit, and although it’s not specifically intended for measurements with errors this function seems to do a little better than either ordinary or weighted least squares).
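For reference, here is a rough sketch of the corrected quantity on the x axis of the plot below (hypothetical variable names; the Calzetti (2000) values for k(Hα) and k(Hβ) are my assumption and may differ slightly from the attenuation curve shape actually returned by the model):

## log_l_ha: log10 Halpha luminosity surface density [erg/s/kpc^2]
## ha, hb:   observed Halpha and Hbeta fluxes
log_sigma_sfr_ha <- function(log_l_ha, ha, hb,
                             k_ha = 3.33, k_hb = 4.60, r_int = 2.86) {
  ebv <- 2.5 / (k_hb - k_ha) * log10((ha / hb) / r_int)
  ebv <- pmax(ebv, 0)         # don't "de-redden" noisy ratios that fall below 2.86
  log_l_corr <- log_l_ha + 0.4 * k_ha * ebv
  log_l_corr - 41.26          # Calzetti (2012) calibration, log10(Msun/yr/kpc^2)
}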

Model estimates of star formation rate density vs. SFR predicted from Hα luminosity density.

So the model SFR density straddles the calibration line, but with a distinct tilt — regions with relatively low Hα luminosity have higher than expected star formation. To quantify this here is the output from the function lmrob:

Call:
lmrob(formula = sigma_sfr_m ~ sigma_sfr_ha, data = df.sfr)
 \--> method = "MM"
Residuals:
      Min        1Q    Median        3Q       Max 
-3.862996 -0.142375  0.004122  0.137030  1.305471 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -0.174336   0.019224  -9.069   <2e-16 ***
sigma_sfr_ha  0.785954   0.009948  79.008   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Robust residual standard error: 0.2097 
Multiple R-squared:  0.7402,	Adjusted R-squared:  0.7401 
Convergence in 10 IRWLS iterations

Robustness weights: 
 6 observations c(781,802,933,941,2121,2330) are outliers with |weight| = 0 ( < 3.8e-05); 
 223 weights are ~= 1. The remaining 2424 ones are summarized as
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
 0.0107  0.8692  0.9525  0.9020  0.9854  0.9990 

I also ran my Bayesian measurement error model on this data set and got the following estimates for the intercept, slope, and residual standard deviation:


         mean      se_mean          sd       2.5%        25%        50%        75%      97.5%    n_eff      Rhat
b0 -0.1942387 1.943297e-04 0.018346806 -0.2312241 -0.2063781 -0.1943811 -0.1819499 -0.1589849 8913.379 0.9997482
b1  0.7767853 9.828814e-05 0.009436693  0.7579702  0.7706115  0.7768086  0.7830051  0.7949343 9218.014 0.9995628
s   0.2044701 3.837428e-05 0.003319280  0.1981119  0.2021872  0.2043949  0.2067169  0.2110549 7481.821 0.9997152

Almost the same! So, how to interpret that slight “tilt”? The obvious comment is that the model results probe a very different time scale — by construction 100 Myr — than Hα (5-10 Myr). As a really toy model consider an isolated, instantaneous burst of star formation. As the population ages its star formation rate will be calculated to be constant from its birth up until 100 Myr, when it drops to 0, while its emission line luminosity declines steadily. So its trajectory in the plot above will be horizontal, from right to left, until it disappears. In fact in spiral galaxies in the local universe star formation is generally localized, usually along the leading edges of arms in grand design spirals. Slightly older populations will be more dispersed.
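To put a number on the toy model: with a 100 Myr averaging window the inferred SFR of an instantaneous burst of mass m_burst is just m_burst/10^8 yr at any age under 100 Myr and zero afterwards, while the Hα-based estimate has already collapsed by ∼10 Myr. A trivial sketch (made-up names):

## 100 Myr averaged SFR of an instantaneous burst, as the SFH models would report it
sfr_100myr <- function(age_yr, m_burst = 1e6) ifelse(age_yr < 1e8, m_burst / 1e8, 0)
sfr_100myr(c(1e7, 5e7, 2e8))   # 0.01 0.01 0.00 (Msun/yr)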

This can be seen pretty clearly in the SFR maps for two galaxies from this sample below. In both cases regions with high star formation rate track the spiral arms closely, but are more diffuse than regions with high Hα luminosity.

Second topic: the spectral region around the 4000Å “break” has long been known to be sensitive to stellar age. Its use as a quantitative specific star formation rate indicator apparently dates to Brinchmann et al. (2004) (they don’t cite any antecedents and I can’t find any either). More recently Bluck et al. (2020) used a similar technique at the sub-galactic level on MaNGA galaxies. Both studies use D4000 as a secondary star formation rate indicator, preferring Hα luminosity as the primary SFR calibrator with D4000 reserved for galaxies (or regions) with non-starforming emission line ratios or lacking emission. Oddly, I have been unable to find an actual calibration formula in a slightly better than cursory search of the literature — both of the cited papers present schematic graphs with overlaid curves giving the adopted relationships and approximate uncertainties. The Brinchmann version from the published paper is copied and pasted below.
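As an aside on the index itself, the narrow-band Dn(4000) I assume is being used here is the Balogh et al. (1999) style definition: the ratio of the average flux density in the 4000-4100Å band to that in the 3850-3950Å band, conventionally computed in Fν. A sketch of the measurement on a rest frame spectrum (whether the actual pipeline averages in Fν or Fλ is an assumption on my part):

## lambda: rest frame wavelength [Angstroms], flux: F_lambda
d4000_n <- function(lambda, flux) {
  fnu <- flux * lambda^2          # F_nu is proportional to lambda^2 * F_lambda
  red  <- lambda >= 4000 & lambda <= 4100
  blue <- lambda >= 3850 & lambda <= 3950
  mean(fnu[red]) / mean(fnu[blue])
}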

In the two graphs below I’ve added data from the passively evolving Coma cluster sample comprising 3348 binned spectra in 33 galaxies. There are two versions of the same graphs. Individual points are displayed in the first, as before with error bars suppressed to (slightly) aid clarity. The second displays the density of points at arbitrarily spaced contour intervals. The straight line is the “robust” regression line calculated for the spiral sample only, which for the sake of completeness is

\( \log_{10}(\mathrm{sSFR}) = -7.11 (\pm 0.02) - 2.11 (\pm 0.015)\, D_n(4000)\)
Model sSFR vs. measured value of D4000. 40 barred and non-barred spirals + 33 passively evolving Coma cluster galaxies.
Model sSFR vs. measured value of D4000 (2D density version). 40 barred and non-barred spirals + 33 passively evolving Coma cluster galaxies.

Call:
lmrob(formula = ssfr_m ~ d4000_n, data = df.ssfr)
 \--> method = "MM"
Residuals:
       Min         1Q     Median         3Q        Max 
-0.9802409 -0.0916555 -0.0005187  0.0962981  7.1748499 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -7.10757    0.02009  -353.8   <2e-16 ***
d4000_n     -2.10894    0.01418  -148.7   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Robust residual standard error: 0.1384 
Multiple R-squared:  0.9043,	Adjusted R-squared:  0.9043 
Convergence in 13 IRWLS iterations

Robustness weights: 
 39 observations c(45,958,1003,1165,1200,1230,1249,1279,1280,1281,1282,1283,1294,1298,1299,1992,2040,2047,2713,2722,2723,2729,2735,2736,2974,3212,3226,3250,3667,3668,3671,3677,3685,3687,3688,3691,4056,4058,4083)
	 are outliers with |weight| <= 1.1e-05 ( < 2.1e-05); 
 418 weights are ~= 1. The remaining 4310 ones are summarized as
     Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
0.0001994 0.8684000 0.9514000 0.8911000 0.9850000 0.9990000 
The relation between D4000 and sSFR as estimated by Brinchmann et al. 2004

All three groups follow the same relation but with some obvious differences in distribution. The non-barred spiral sample extends to higher star formation rates (either density or sSFR) than barred spirals, which in turn extend into the passively evolving range. The Coma cluster sample has a long tail of high D4000 values (or high specific star formation rates at given D4000) — this is likely because D4000 becomes sensitive to metallicity in older populations and this sample contains some of the most massive (and highest metallicity) galaxies in the local universe. Also, as I’ve noted before these models “want” to produce a smoothly varying mass growth history, which means that even the reddest and deadest elliptical will have some contribution from young populations. This seems to put a floor on modeled specific SFR of ∼10^-11.5 yr^-1.

Just to touch briefly on the paper by Fraser-McKelvie et al., barred spirals in this sample do have lower overall star formation than non-barred ones, with large areas in the green valley or even passively evolving. This sample is too incomplete to say much more. For the sake of having a visualization here is the spatially resolved ΣSFR vs. ΣM* relation. The dashed line is Bluck’s estimate of the star forming “main sequence,” which looks displaced downward compared to my estimates.

Model SFR density vs. stellar mass density. 40 barred and non-barred spirals + 33 passively evolving Coma cluster galaxies.

Finally, here are a couple of grand design spirals, one barred and one (maybe) not to illustrate how model results track morphological features. In the barred galaxy note that the arms are clearly visible in the SFR maps but they aren’t visible at all in the stellar mass map, which does show the presence of the very prominent bar.

NGC 6001 – thumbnail with MaNGA IFU footprint
NGC 6001 (MaNGA plateifu 9041-12701) (L) Model SFR surface density (M) Hα luminosity density (R) sSFR
NGC 5888- thumbnail with MaNGA IFU footprint
NGC 5888 (MaNGA plateifu 9871-12702) (L) Model SFR surface density (M) Hα luminosity density (R) sSFR
NGC 5888 (MaNGA plateifu 9871-12702) – Log model stellar mass density (Msun/kpc^2)

I’m not sure how much more I’m going to do with normal spirals. As I’ve said repeatedly the full sample is much too large for my computing resources.

Next time (probably) I’m going to return to a very small sample of post-starburst galaxies, which I may also return to when the final SDSS public data is released.

Update on Bayesian line of sight velocity modeling

Well, that was simple enough. I made a simple indexing error in the R data preprocessing code that produced a one pixel offset between the template and galaxy spectra, which effectively shifted the elements of the convolution kernel by one bin. I had wanted to look at a rotating galaxy to perform some diagnostic tests, but once I figured out my error this turned out to be a pretty good validation exercise. So I decided to make a new post. The galaxy I’m looking at is NGC 4949, another member of the sample of passively evolving Coma cluster galaxies of Smith et al. It appears to me to be an S0 and is a rapid rotator:

NGC 4949 – SDSS image
NGC 4949 – radial velocity map

These projected velocities are computed as part of my normal workflow. I may in a future post explain in more detail how they’re derived, but basically they are calculated by finding the redshift offset from the system redshift (taken from the NSA catalog, which is usually the SDSS spectroscopic redshift) that best matches the features of a linear combination of empirically derived eigenspectra to the given galaxy spectrum.
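In outline it's an initial grid search over redshift offsets followed by a nonlinear refinement; a heavily simplified sketch of the grid search part, with made-up names and the refinement and error estimate omitted, looks something like this:

## lambda: rest frame wavelength grid, gflux: galaxy fluxes,
## eigenspectra: matrix with one eigenspectrum per column
best_dz <- function(lambda, gflux, eigenspectra,
                    dz_grid = seq(-0.002, 0.002, by = 1e-4)) {
  rss <- sapply(dz_grid, function(dz) {
    ## shift the templates by the trial offset and resample onto the data grid
    shifted <- apply(eigenspectra, 2, function(e)
      approx(lambda * (1 + dz), e, xout = lambda, rule = 2)$y)
    ## residual sum of squares of the best-fit linear combination
    sum(lsfit(shifted, gflux, intercept = FALSE)$residuals^2)
  })
  dz_grid[which.min(rss)]
}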

First exercise: find the line of sight velocity distribution after adjusting to the rest frame in each spectrum. This was the originally intended use of these models. This galaxy has a fairly low velocity dispersion of ~100 km/sec, so I used a convolution kernel size of just 11 elements with 6 eigenspectra in each fit. Here is a summary of the LOSVD distribution for the central spectrum. This is much better: the kernel estimates are symmetrical and peak on average at the central element. The mean velocity offset is ≈ 9.5 km/sec, which is much closer to 0 than in the previous runs. I will look briefly at velocity dispersions at the end of the post: this one is actually quite close to the one I estimate with a single component Gaussian fit (116 km/sec vs. 110).

Estimated LOSVD of central spectrum of NGC 4949

Next, here are the posterior mean velocity offsets for all 86 spectra in the Voronoi binned data, plotted against the peculiar velocity calculated as outlined above. The overall average of the mean velocity offsets is 4.6 km/sec. The reason for the apparent tilt in the relationship still needs investigation.

Mean velocity offset vs. peculiar velocity. All NGC 4949 spectra.

Exercise 2: calculate the LOSVD with wavelengths adjusted to the overall system redshift as taken from the NSA catalog; that is, no adjustment is made for peculiar redshifts due to rotation. For this exercise I increased the kernel size to 17 elements. This is actually a little more than needed since the projected rotation velocities range over ≈ ±100 km/sec. First, here is the radial velocity map:

Radial velocity map from Bayesian LOSVD model with no peculiar redshifts assigned.

Here’s a scatterplot of the velocity offsets against peculiar velocities from my normal workflow. Again there’s a slight tilt away from a slope of 1 evident. The residual standard error around the simple regression line is 6.4 km/sec and the intercept is 4 km/sec, which are consistent with the results from the first set of LOSVD models.

Velocity offsets from Bayesian LOSVD models vs. peculiar velocities

Exercise 3: calculate redshift offsets using a set of (for this exercise, 6) eigenspectra from the SSP templates. Here is a scatterplot of the results plotted against the redshift offsets from my usual empirically derived eigenspectra. Why the odd little jumps? I’m not completely sure, but my current code does an initial grid search to try to isolate the global maximum likelihood, which is then found with a general purpose nonlinear minimizer. The default grid size is 10^-4, about the size of the gaps. Perhaps it’s time to revisit my search strategy.

Redshift offsets from a set of SSP derived eigenspectra vs. the same routine using my usual set of empirically derived eigenspectra.

Final topic for now: I mentioned in the last post that posterior velocity dispersions (measured by the standard deviation of the LOSVD) were only weakly correlated with the stellar velocity dispersions that I calculate as part of my standard workflow. With the correction to my code the correlation, while still weak, has greatly improved, but the dispersions are generally higher:

Velocity dispersion from Bayesian LOSVD models vs. stellar velocity dispersion from maximum likelihood fits.

A similar trend is seen when I plot the velocity dispersions from the LOSVD models with correction only for the system redshift and a wider convolution kernel (exercise 2 above) against those from the fully corrected model runs (exercise 1):

These results hint that the diffuse prior on the convolution kernel is responsible for the different results. As part of the maximum likelihood fitting process I estimate the standard deviation of the stellar velocity distribution assuming it to be a single component Gaussian. While the distribution of kernel values in the first graph looks pretty symmetrical, the tails are on average heavier than a Gaussian. This can be seen too in the LOSVD models with the larger convolution kernel of exercise 2. The tails have non-negligible values all the way out to the ends:

Now, what I’m really interested in are model star formation histories. I’ve been using pre-convolved SSP model templates from the beginning, along with phony emission line spectra with Gaussian profiles, with some apparent success. My plan right now is to continue that program with these non-parametric LOSVDs. The convolutions could be carried out with posterior means of the kernel values or by drawing samples. Repeated runs could be used to estimate how much variation is induced by uncertainty in the kernel.

How to handle emission lines is another problem. For now stepping back to a simpler model (no emission, no dust) would be reasonable for this Coma sample.

Multithreading comes to Stan

It’s been available for a while actually, but not in a form I’ve been able to figure out how to use. That changed recently with the introduction of the reduce_sum “higher order function” in Stan 2.23.0. What reduce_sum does is allow data to be partitioned into conditionally independent slices that can be dispatched to parallel threads if the Stan program is compiled with threading enabled. My SFH modeling code turns out to be an ideal candidate for parallelization with this technique since flux errors given model coefficients and predictor values are treated as independent (this is an oversimplification since there is certainly some covariance between nearby wavelength bins; this is rarely if ever modeled though, and I haven’t figured out how to do it. In any case adding covariance between measurements would complicate matters but not necessarily make it impossible to partition into independent chunks). Here is the line of Stan code that defines the log-likelihood for the current version of my model (see my last post for the complete listing):

    gflux ~ normal(a * (sp_st*b_st_s) .* calzetti_mod(lambda, tauv, delta) 
                    + sp_em*b_em, g_std);

This just says the galaxy flux values are drawn from independent gaussians with mean given by the model and known standard deviations. Modifying the code for reduce_sum requires a function to compute a partial sum over any subset of the data for, in this case, the log-likelihood. The partial sum function has to have a specific but fairly flexible function signature. One thing that confused me initially is that the first argument to the partial sum function is a variable to be sliced over that, according to the documentation, must be an array of any type. What was confusing was that passing a variable of vector type as the first argument will fail to compile. In other words a declaration in the data block like

vector[N] y;

with, in the functions block:

real partial_sum(vector y, ...);

fails, while

real y[N];

real partial_sum(real[] y, ...);

works. The problem here is that all of the non-scalar model parameters and data are declared as vector and matrix types because the model involves vector and matrix operations that are more efficiently implemented with high level matrix expressions. Making gflux the sliced variable, which seemed the most obvious candidate, won’t work unless it’s declared as real[] and that won’t work in the sampling statement unless it is cast to a vector since the mean in the expression at the top of the post is a vector. Preliminary investigation with a simpler model indicated that approach works but is so slow that it’s not worth the effort. But, based on a suggestion in the Stan discussion forum a simple trick solved the issue: declare a dummy variable with an array type, pass it as the first argument to the partial sum function, and just don’t use it. So now the partial sum function, which is defined in the functions block is

  real sum_ll(int[] ind, int start, int end, matrix sp_st, matrix sp_em, 
              vector gflux, vector g_std, vector lambda, real a, real tauv, real delta, 
              vector b_st_s, vector b_em) {
    return normal_lpdf(gflux[start:end] | a * (sp_st[start:end, :]*b_st_s) .* calzetti_mod(lambda[start:end], tauv, delta) 
                    + sp_em[start:end, :]*b_em, g_std[start:end]);
  }

I chose to declare the dummy variable ind in a transformed data block. I also declare a tuning variable grainsize there and set it to 1, which tells Stan to set the slice size automatically at run time.

transformed data {
  int grainsize = 1;
  int ind[nl] = rep_array(1, nl);
}

Then the sampling statement in the model block is

    target += reduce_sum(sum_ll, ind, grainsize, sp_st, sp_em, gflux, g_std, lambda,
                          a, tauv, delta, b_st_s, b_em);

The rest of the Stan code is the same as presented in the last post.

So why go to this much trouble? As it happens a couple years ago I built a new PC with what was at the time Intel’s second most powerful consumer level CPU, the i9-7960X, which has 16 physical cores and supports 32 threads. I’ve also installed 64GB of DRAM (recently upgraded from 32), which is overkill for most applications but great for running Stan programs, which can be rather memory intensive. Besides managing my photo collection I use it for Stan models, but until now I haven’t really been able to make use of its full power. By default Stan runs 4 chains for MCMC sampling, and these can be run in parallel if at least 4 cores are available. It would certainly be possible to run more than 4 chains but this doesn’t buy much: increasing the effective sample size by a factor of 2 or even 4 doesn’t really improve the precision of posterior inferences enough to matter. Once I tried running 16 chains in parallel with a proportionate reduction in post-warmup iterations, but that turned out to be much slower than just using 4 chains. The introduction of reduce_sum raised at least the possibility of making full use of my CPU, and in fact working through the minimal example in Stan’s library of case studies indicated that close to factor of 4 speedups with 4 chains and 4 threads per chain are achievable. I got almost no further improvement setting the threads per chain to 8 and thus using all virtual cores, which apparently is expected at least with Intel CPUs. I haven’t yet tried other combinations, but using all physical cores with the default 4 chains seems likely to be close to optimal. I also haven’t experimented much with the single available tuning parameter, the grainsize. Setting it equal to the sample size divided by 4, and therefore presumably giving each thread an equal size slice, did not work better than letting the system set it; in fact it was rather worse, IIRC.

I’ve run both the threaded and unthreaded models on one complete set of data for one MaNGA galaxy. This particular data set used the smallest, 19 fiber IFU, which produces 57 spectra in the stacked RSS data. This was binned to 55 spectra, all with SNR > 5. The galaxy is a passively evolving S0 in or near the Coma cluster, but I’m not going to discuss its star formation history in any detail here. I’m only interested in comparative runtimes and the reproducibility of sampling. And here is the main result of interest, the runtimes (for sampling only) of the threaded and unthreaded code. All runs used 4 chains with 250 warmup iterations and 750 post-warmup. I’ve found 250 warmup iterations to be sufficient almost always and 3000 total samples is more than enough for my purposes. This is actually an increase from what had been my standard practice.

Total execution time for sampling (warmup + post-warmup) in the slowest chain. Graph shows multithreaded vs. unthreaded time on the same data (N=55 spectra). All threaded runs used 4 chains and 4 threads per chain on a 16 core CPU.

On average the threaded code ran a factor of 3.1 times faster than the unthreaded, and was never worse than about a factor 2 faster (but also never better than a factor 3.7 faster). This is in line with expectations. There’s some overhead involved in threading so speedups are rarely proportional to the number of threads. This is also what I found with the published minimal example and with a just slightly beyond minimal multiple regression model with lots of (fake) data I experimented with. I’ve also found the execution time for these models, particularly in the adaptation phase, to be highly variable, with some chains occasionally much faster or slower than others. Stan’s developers are apparently working on communicating between chains during the warmup iterations (they currently do not) and this might reduce between chain disparities in execution time in the future.

I’m mostly interested in model star formation histories, so here’s the comparison of star formation rate densities for both sets of model runs on all 55 bins, ordered by distance from the IFU center (which coincides with the galaxy nucleus):

Star formation rate density. Estimates are for threaded and unthreaded model runs.

Notice that all the ribbons are purple (red + blue), indicating the two sets of runs were nearly enough identical. Exact reproducibility is apparently hard to achieve in multithreaded code, and I didn’t try. I only care about reproducibility within expected Monte Carlo variations, and that was clearly achieved here.

There are only a few down sides to multithreading. The main one is that rstan is currently several dot releases behind Stan itself and seems unlikely to catch up to 2.23.x before 2021. I do extensive pre- and post-processing in R and all of my visualization tools use ggplot2 (an R package) and various extensions. At present the best way to integrate R with a current release of Stan is a (so far) pre-release package called cmdstanr, which bills itself as a “lightweight interface to Stan for R users.” What it actually is is an interface between R and cmdstan, which is in turn the command line driven interface to Stan.

By following directions closely and with a few hints from various sources I was able to install cmdstan from source and with threading support in Windows 10 using Rtools 4.0. The development version of cmdstanr is trivial to install and seems to work exactly as advertised. It is not a drop in replacement for rstan though, and this required me to write some additional functions to fit it into my workflow. Fortunately there’s an rstan function that reads the output files from cmdstan and creates a stanfit object, and this enables me to use virtually all of my existing post-processing code intact.
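For anyone curious, the glue code is short. A sketch of my setup, with hypothetical file and data names; the argument names are those in recent cmdstanr releases and could change since the package is still pre-release:

library(cmdstanr)

## compile with threading support enabled
mod <- cmdstan_model("sfh_model.stan", cpp_options = list(stan_threads = TRUE))

fit <- mod$sample(data = stan_data,          # a named list, as with rstan
                  chains = 4, parallel_chains = 4, threads_per_chain = 4,
                  iter_warmup = 250, iter_sampling = 750)

## read the cmdstan csv output back in as a stanfit object so existing
## post-processing code can be reused
sfit <- rstan::read_stan_csv(fit$output_files())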

The other consequence is that cmdstan uses R dump or json format files for input and outputs csv files; the inputs have to be created on the fly, and the outputs have to be read back into memory. Cmdstanr, with some help from rstan, handles this automatically, but there’s some additional overhead compared to rstan, which AFAIK keeps all data and sampler output in memory. On my machine the temporary directories reside on a SSD, so all this file I/O goes fairly quickly, but still slower than in-memory operations. I also worry a bit about the number of read-write cycles, but so far the SSD is performing flawlessly.

Multithreading is beneficial to the extent that physical cores are available. This is great for people with access to HPC clusters and it’s helpful to me with an overspec’d and underutilized PC. My current Linux box, which is really better suited to this type of work, only has a several generations old Intel I7 with 4 cores. It’s still a competent and highly reliable machine, but it’s not likely to see any performance improvement from multithreading. Fortunately the current trend is to add more cores to mainstream CPUs: AMD’s Ryzen line currently tops out at 16 cores and their Threadripper series have 24-64 cores. This would be a tempting next build, but alas I’m trying not to spend a lot of money on hobbies these days for more or less obvious reasons.


Markarian 848 – Closing topics

I’m going to close out my analysis of Mrk 848 for now with three topics. First, dust. Like most SED fitting codes mine produces an estimate of the internal attenuation, which I parameterize with τV, the optical depth at V assuming a conventional Calzetti attenuation curve. Before getting into a discussion for context here is a map of the posterior mean estimate for the higher S/N target binning of the data. For reference isophotes of the synthesized r band surface brightness taken from the MaNGA data cube are superimposed:

Map of posterior mean of τV from single component dust model fits with Calzetti attenuation

This compares reasonably well with my visual impression of the dust distribution. Both nuclei have very large dust optical depths with a gradual decline outside, while the northern tidal tail has relatively little attenuation.

The paper by Yuan et al. that I looked at last time devoted most of its space to different ways of modeling dust attenuation, ultimately concluding that a two component dust model of the sort advocated by Charlot and Fall (2000) was needed to bring results of full spectral fitting using ppxf on the same MaNGA data as I’ve examined into reasonable agreement with broad band UV-IR SED fits.

There’s certainly some evidence in support of this. Here is a plot I’ve shown for some other systems of the estimated optical depth of the Balmer emission line emitting regions based on the observed vs. theoretical Balmer decrement (I’ve assumed an intrinsic Hα/Hβ ratio of 2.86 and a Calzetti attenuation relation) plotted against the optical depth estimated from the SFH models, which roughly estimates the amount of reddening needed to fit the SSP model spectra to the observed continuum. In some respects this is a more favorable system than some I’ve looked at because Hβ emission is at measurable levels throughout. On the other hand there is clear evidence that multiple ionization mechanisms are at work, so the assumption of a single canonical value of Hα/Hβ is likely too simple. This might be a partial cause of the scatter in the modeled relationship, but it’s encouraging that there is a strong positive correlation (for those who care, the correlation coefficient between the mean values is 0.8).
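For concreteness, the conversion from the observed decrement to an optical depth is

\( \tau_{V, bd} = \frac{k_V}{k_{H\beta} - k_{H\alpha}} \ln\left[\frac{(H\alpha/H\beta)_{obs}}{2.86}\right] \approx 3.2 \ln\left[\frac{(H\alpha/H\beta)_{obs}}{2.86}\right] \)

where the numerical factor assumes the Calzetti (2000) values k(Hα) ≈ 3.33, k(Hβ) ≈ 4.60 and k_V ≈ 4.05; the exact coefficients depend on the adopted curve, so treat the 3.2 as approximate.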

The solid line in the graph below is 1:1. The semi-transparent cloud of lines are the sampled relationships from a Bayesian errors in variables regression model. The mean (and marginal 1σ uncertainty) is \(\tau_{V, bd} = (0.94\pm 0.11) + (1.21 \pm 0.12) \tau_V\). So the estimated relationship is just a little steeper than 1:1 but with an offset of about 1, which is a little different from the Charlot & Fall model and from what Yuan et al. found, where the youngest stellar component has about 2-3 times the dust attenuation as the older stellar population. I’ve seen a similar not so steep relationship in every system I’ve looked at and don’t know why it differs from what is typically assumed. I may look into it some day.

τV estimated from Balmer decrement vs. τV from model fits. Straight line is 1:1 relation. Cloud of lines are from Bayesian errors in variables regression model.

I did have time to run some 2 dust component SFH models. This is a very simple extension of the single component models: a single optical depth is applied to all SSP spectra, and a second component, with optical depth fixed at the bulk value plus 1, is applied only to the youngest model spectra, which recall were taken from unevolved SSPs from the updated BC03 library. I’m just going to show the most basic results from the models for now in the form of maps of the SFR density and specific star formation rate. Compared to the same maps displayed at the end of the last post there is very little difference in spatial variation of these quantities. The main effect of adding more reddened young populations to the model is to replace some of the older light — this is the well known dust-age degeneracy. The average effect was to increase the stellar mass density (by ≈ 0.05 dex overall) while slightly decreasing the 100Myr average SFR (by ≈ 0.04 dex), leading to an average decrease in specific star formation rate of ≈ 0.09 dex. While there are some spatial variations in all of these quantities no qualitative conclusion changes very much.

Results from fits with 2 component dust models. (L) SFR density. (R) Specific SFR

Contrary to Yuan+ I don’t find a clear need for a 2 component dust model. Without trying to replicate their results I can’t say why exactly we disagree, but I think they erred in aggregating the MaNGA data to the lowest spatial resolution of the broad band photometric data they used, which was 5″ radius. There are clear variations in physical conditions on much smaller scales than this.

Second topic: the most widely accepted SFR indicator in visual wavelengths is Hα luminosity. Here is another plot I’ve displayed previously: a simple scatterplot of Hα luminosity density against the 100Myr averaged star formation rate density from the SFH models. Luminosity density is corrected for attenuation estimated from the Balmer decrement and for comparison the light gray points are the uncorrected values. Points are color coded by BPT class determined in the usual manner. The straight line is the Hα – SFR calibration of Moustakas et al. (2006), which in turn is taken from earlier work by Kennicutt.

Model SFR density vs. Hα luminosity density corrected for extinction estimated from Balmer decrement. Light colored points are uncorrected for extinction. Straight line is Hα-SFR calibration from Moustakas et al. (2006)

Keeping in mind that Hα emission tracks star formation on timescales of ∼10 Myr (to the extent that ionization is caused by hot young stars; there are evidently multiple ionizing sources in this system, but disentangling their effects seems hard, and note there’s no clear stratification by BPT class in this plot), this graph strongly supports the scenario I outlined in the last post. At the highest Hα luminosities the SFR-Hα trend nicely straddles the Kennicutt-Moustakas calibration, consistent with the finding that the central regions of the two galaxies have had ∼constant or slowly rising star formation rates in the recent past. At lower Hα luminosities the 100Myr average trends consistently above the calibration line, implying a recent fading of star formation.

The maps below add some detail, and here the perceptual uniformity of the viridis color palette really helps. If star formation exactly tracked Hα luminosity these two maps would look the same. Instead the northern tidal tail in particular and the small piece of the southern one within the IFU footprint are underluminous in Hα, again implying a recent fading of star formation in the peripheries.

(L) Hα luminosity density, corrected for extinction estimated by Balmer decrement. (R) SFR density (100 Myr average).

Final topic: the fit to the data, and in particular the emission lines. As I’ve mentioned previously I fit the stellar contribution and emission lines simultaneously, generally assuming separate single component Gaussian velocity dispersions and a common system velocity offset. This works well for most galaxies, but for active galaxies or systems like this one with complex velocity profiles maybe not so much. In particular the northern nuclear region is known to have high velocity outflows in both ionized and neutral gas due presumably to supernova driven winds. I’m just going to look at the central fiber spectrum for now. I haven’t examined the fits in detail, but in general they get better outside the immediate region of the center. First, here is the fit to the data using my standard model. In the top panel the gray line, which mostly can’t be seen, is the observed spectrum. Blue are quantiles of the posterior mean fit — this is actually a ribbon, although its width is too thin to be discernible. The bottom panel shows the residuals in standard deviations. Yes, they run as large as ±50σ, with conspicuous problems around all emission lines. There are also a number of usually weak emission lines that I don’t track that are present in this spectrum.

Fit to central fiber spectrum; model with single gaussian velocity distributions.

I have a solution for cases like this which I call partially parametric: I assume the usual Gauss-Hermite form for the emission lines (as in, for example, ppxf) while the stellar velocity distribution is modeled with a convolution kernel (I think I’ve discussed this previously but I’m too lazy to check right now; if I haven’t I’ll post about it someday). Unfortunately the Stan implementation of this model takes at least an order of magnitude longer to execute than my standard one, which makes its wholesale use prohibitively expensive. It does materially improve the fit to this spectrum although there are still problems with the stronger emission lines. Let’s zoom in on a few crucial regions of the spectrum:

Zoomed in fit to central fiber spectrum using “partially parametric velocity distribution” model. Grey: observed flux. Blue: model.

Two things are evident here: there is a clear sign of outflow in the forbidden emission lines, particularly [O III] and [N II], while the Balmer lines are relatively more symmetrical, as is the [S II] doublet at 6717, 6730Å. The existence of rather significant residuals is likely because emission is coming from at least two physically distinct regions while the fit to the data is mostly driven by Hα, which as usual is the strongest emission line. The fit captures the emission line cores in the high order Balmer lines rather well and also the absorption lines on the blue side of the 4000Å break, except for the region around the [Ne III] line at 3869Å.

I’m mostly interested in star formation histories, and it’s important to see what differences are present. Here is a comparison of three models: my standard one, the two dust component model, and the partially parametric velocity dispersion model:

Detailed star formation history models for the northern nucleus using 3 different models.

In fact the differences are small and not clearly outside the range of normal MCMC variability. The two dust model slightly increases the contribution of the youngest stellar component at the expense of slightly older contributors. All three have the presumably artifactual uptick in SFR at 4Gyr and very similar estimated histories for ages >1 Gyr.

I still envision a number of future model refinements. The current version of the official data analysis pipeline tracks several more emission lines than I do at present and has updated wavelengths that may be more accurate than the ones from the SDSS spectro pipeline. It might be useful to allow at least two distinct emission line velocity distributions, with for example one for recombination lines and one for forbidden. Unfortunately the computational expense of this sort of generalization at present is prohibitive.

I’m not impressed with the two dust model that I tried, but there may still be useful refinements to the attenuation model to be made. A more flexible form of the Calzetti relation might be useful, for example (there is recent relevant literature on this topic that I’m again too lazy to look up).

My initial impression of this system was that it was a clear false positive that was selected mostly because of a spurious BPT classification. On further reflection with MaNGA data available it’s not so clear. A slight surprise is the strong Balmer absorption virtually throughout the system, with evidence for a recent shut down of star formation in the tidal tails. A popular scenario for the formation of K+A galaxies through major mergers is that they experience a centrally concentrated starburst after coalescence which, once the dust clears and assuming that some feedback mechanism shuts off star formation, leads to a period of up to a Gyr or so with a classic K+A signature (I previously cited Bekki et al. 2005, who examine this scenario in considerable detail). Capturing a merger in the instant before final coalescence provides important clues about this process.

To the best of my knowledge there have been no attempts at dynamical modeling of this particular system. There is now reasonably good kinematic information for the region covered by the MaNGA IFU, and there is good photometric data from both HST and several imaging surveys. Together these make detailed dynamical modeling technically feasible. It would be interesting if star formation histories could further constrain such models. Being aware of the multiple “degeneracies” between stellar age and other physical properties I’m not highly confident, but it seems provocative that we can apparently identify distinct stages in the evolutionary history of this system.

Revisiting the Baryonic Tully-Fisher relation… – Part 3

This post has been languishing in draft form for well over a month thanks to travel and me losing interest in the subject. I’m going to try to get it out of the way quickly and move on.

Last time I noted the presence of apparent outliers in the relationship between stellar mass and rotation velocity and pointed out that most of them are due to model failures of various sorts rather than “cosmic variance.” That would seem to suggest the need for some sample refinement, and the question then becomes how to trim the sample in a way that’s reproducible.

An obvious comment is that all of the outliers fall below the general trend and (less obviously perhaps) most have very large posterior uncertainties as well. This suggests a simple selection criterion: remove the measurements which have a small ratio of posterior mean to posterior standard deviation of rotation velocity. Using the asymptotic circular velocity v_c in the atan mean function and setting the threshold to 3 standard deviations, the sample members that are selected for removal are circled in red below. This is certainly effective at removing outliers but it’s a little too indiscriminate — a number of points that closely follow the trend are selected for removal, and in particular 19 out of 52 points with stellar masses less than \(10^{9.5} M_\odot\) are selected. But, let’s look at the results for this trimmed sample.
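In code the selection is a one-liner; with hypothetical names for the posterior summary columns it's just

keep <- with(df_tf, mean_v_c / sd_v_c >= 3)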

Posterior distribution of asymptotic velocity `v_c` vs stellar mass. Circled points have posterior mean(v_c)/sd(v_c) < 3.

Again I model the joint relationship between mass and circular velocity using my Stan implementation of Bovy, Hogg, and Roweis’s “Extreme deconvolution” with the simplification of assuming Gaussian errors in both variables. The results are shown below for both circular velocity fiducials. Recall from my previous post on this subject that the dotted red ellipse is a 95% joint confidence interval for the intrinsic relationship while the outer blue one is a 95% confidence ellipse for repeated measurements. Compared to the first time I performed this exercise the former ellipse is “fatter,” indicating more “cosmic variance” than was inferred from the earlier model. I attribute this to a better and more flexible model. Notice also the confidence region for repeated measurements is tighter than previously, reflecting tighter error bars for model posteriors.

Joint distribution of stellar mass and velocity by “Extreme deconvolution.” Inner ellipse: 95% joint confidence region for the intrinsic relationship. Outer ellipse: 95% confidence ellipse for new data. Top: Asymptotic circular velocity v_c. Bottom: Circular velocity at 1.5 r_eff.

Now here is something of a surprise: below are the model results for the full sample compared to the trimmed one. The red and yellow ellipses are the estimated intrinsic relations using the full and trimmed samples, while green and blue are for repeated measurements. The estimated intrinsic relationships are nearly identical despite the many outliers. So, even though this model wasn’t formulated to be “robust” as the term is usually understood in statistics, in practice it is, at least as regards the important inferences in this application.

Joint distribution of stellar mass and velocity by “Extreme deconvolution” (complete sample). Inner ellipse: 95% joint confidence region for the intrinsic relationship. Outer ellipse: 95% confidence ellipse for new data. Top: Asymptotic circular velocity v_c. Bottom: Circular velocity at 1.5 r_eff.

Finally, the slope, that is the exponent γ in the stellar mass Tully-Fisher relationship \(M^* \sim V_{rot}^\gamma\), is estimated as the (inverse of the) slope of the major axis of the inner ellipses in the above plots. The posterior mean and 95% marginal confidence intervals for the two velocity measures and both samples are:

v_c (subset 1) \(4.81^{+0.28}_{-0.25}\)

v_c (all) \(4.81^{+0.28}_{-0.25}\)

v_r (subset 1) \(4.36^{+0.23}_{-0.20}\)

v_r (all) \(4.33^{+0.23}_{-0.21}\)
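For reference, the “slope of the major axis” calculation amounts to taking the leading eigenvector of the fitted intrinsic covariance matrix. A sketch, with cov_int a hypothetical 2×2 matrix ordered (log M*, log V):

gamma_from_cov <- function(cov_int) {
  v <- eigen(cov_int)$vectors[, 1]   # direction of the major axis
  v[1] / v[2]                        # d(log M*) / d(log V), i.e. the exponent gamma
}

Applied to each posterior draw of the covariance matrix this gives intervals like those quoted above.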

Does this suggest some tension with the value of 4 determined by McGaugh et al. (2000)? Not necessarily. For one thing this is properly an estimate of the stellar mass – velocity relationship, not the baryonic one. Generally lower stellar mass galaxies will have higher gas fractions than high stellar mass ones, so a proper accounting for that would shift the slope towards lower values. Also, and as can be seen here, both the choice of fiducial velocity and analysis method matter. This has been discussed recently in some detail by Lelli et al. (2019) (these two papers have two authors in common).

Next time, back to star formation history modeling.

Yet more on rotation curve modeling — why the mean function matters

When I first began modeling disk galaxy rotation curves using low order polynomials for the circular velocity I noticed two rather frequent systematics in the model residuals:

  1. Lobe like areas symmetrically located around the nucleus with approximately equal and opposite signs. Sometimes these are co-located with bar ends but a bar is not always obvious.
  2. A contrast of a few 10’s of kilometers/sec between spiral arms and interarm regions. This is rather common in grand design spirals.

Here’s a particularly dramatic example of symmetrical lobes in mangaid 1-339041 (plateifu 8138-12704), IAU name SDSS J074637.70+444725.8. First, here are the measured line of sight velocities for the fiber spectra:

(L) Velocity field measured from stacked RSS file (R) Interpolated velocity field mangaid 1-339041 (plateifu 8138-12704)

The left plot shows the actual measurements from the stacked RSS file. The right is just an interpolated version of the left. Since value added data is now available it’s worth comparing this to output from “Marvin“. For reference here is the Hα velocity map:

Hα velocity map from MaNGA DAP

It’s hard to tell in any detail, but these look similar enough and the stellar velocity field as measured by the DAP also looks similar.

Next, here are the mean residuals from the posterior predictive fits shown as interpolated maps derived from the fits at the observed positions. As promised the left hand map from the low order polynomial fit has prominent lobes situated on either side of the nucleus and a more subtle contrast between spiral arms and interarm regions. The right hand map from the GP model appears to be largely free of systematic patterns. Why the difference?

Mean residuals from models: (L) Polynomial rotation curve model (R) GP model with atan mean function

In this case the arctangent mean function I introduced in the last post worked very well, with the estimated circular velocity rising quickly to an asymptotic value of ∼300km/s. The low order polynomial representation is necessarily constrained by the possible shapes of a low order polynomial (in this case cubic), resulting in a shallower initial slope and a first local maximum farther out than in the GP model. The lobed residuals in the polynomial model are therefore seen to be due to an inner disk that’s rotating more rapidly than can be modeled (and not due to a kinematically distinct component or to streaming material).
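For reference, one common arctangent parameterization of a rotation curve looks like the sketch below; this is a guess at the general form of the atan mean function, and the projection to line of sight velocities (inclination, position angle, systemic velocity) is omitted.

## v_c: asymptotic circular velocity, r_t: turnover radius
v_rot_atan <- function(r, v_c, r_t) (2 / pi) * v_c * atan(r / r_t)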

Rotation and “expansion” velocity curves (T) polynomial model (B) GP model with atan mean function

As a brief morphological note, GZ2 classifiers thought this was a normal looking disk galaxy by an overwhelming majority. It’s hard to say they were wrong based on the SDSS imaging, but the deeper and wider legacy survey thumbnail clearly shows the outer disk to be disturbed, presumably by the edge on disk galaxy to the north — the relative velocities are ∼340km/sec, so they are likely in close proximity.

Plateifu 8138-12704 IFU footprint

As time allows I may take a closer look at model diagnostics available in Stan or some more examples. Longer term I plan to take another look at the Baryonic Tully-Fisher relationship for the larger sample available in DR15.

Software officially published and version controlled

It took longer than I had hoped, but I finally have the R and Stan software that I use for star formation history modeling under version control and uploaded to github. There are 3 R packages, two of which are newly available:

    • spmutils: The actual code for maximum likelihood and Bayesian modeling of star formation histories. There are also tools for reading FITS files from SDSS single fiber spectra and from MaNGA, visualization, and analysis.
    • cosmo: A simple, lightweight cosmological distance calculator.
    • sfdmap: A medium resolution version of the all sky dust map of Schlegel, Finkbeiner and Davis (SFD) and a pair of functions for rapid access.

So far only cosmo is documented to minimal R standards. It’s also the most generally useful, at least if you need to calculate cosmological distances. sfdmap has skeleton help files; spmutils not even that. I plan to correct that, but probably not in the next several weeks.

The R packages are pure R code and can be installed directly from github using devtools:

cosmo: devtools::install_github("mlpeck/cosmo")

sfdmap: devtools::install_github("mlpeck/sfdmap")

spmutils: devtools::install_github("mlpeck/spmutils/spmutils")

The Stan files for SFH modeling that I’m currently working with are in spmutils/stan. I keep the Stan files in the directory ~/spmcode on my local machines, and that’s the default location in the functions that call them.

This code mushroomed over a period of several years without much in the way of design, so it’s not much use without some documentation of typical workflow. I will get around to that sometime soon (I hope).