A more flexible prescription for dust attenuation – continued

Here’s a problem that isn’t a huge surprise but that I hadn’t quite anticipated. I initially chose, without much thought, a prior \(\delta \sim \mathcal{N}(0, 0.5)\) for the slope parameter delta in the modified Calzetti relation from the last post. This seemed reasonable given Salim et al.’s result that most galaxies have slopes \(-0.1 \lesssim \delta \lesssim +1\) (my wavelength parameterization reverses the sign of \(\delta\)). At the end of the last post I made the obvious comment that if the modeled optical depth is near 0 the data can’t constrain the shape of the attenuation curve, and in that case the best that can be hoped for is for the model to return the prior for delta. Unfortunately the actual model behavior was more complicated than that for a spectrum with a posterior mean tauv near 0:

9491-6101_modcalzetti_loose_pairs
Pairs plot of parameters tauv and delta, plus summed log likelihood with “loose” prior on delta.

Although there weren’t any indications of convergence issues in this run, experts in the Stan community often warn that “banana” shaped posteriors like the joint distribution of tauv and delta above are difficult for Stan’s HMC algorithm to explore. I also suspect the distribution is multimodal and at least one mode was missed, since the fit to the flux data as indicated by the summed log-likelihood is slightly worse than for the unmodified Calzetti model.

A value of \(\delta\) this small actually reverses the slope of the attenuation curve, making it larger in the red than in the blue. It also resulted in a stellar mass estimate about 0.17 dex larger than the original model, which is well outside the statistical uncertainty:

9491-6101_stellar_mass_dust3ways
Stellar mass estimates for loose and tight priors on delta + unmodified Calzetti attenuation curve

I next tried a tighter prior on delta, with 0.1 for the scale parameter, which gave the following result:

9491-6101_modcalzetti_tight_pairs
Pairs plot of parameters tauv and delta, plus summed log likelihood with “tight” prior on delta.

Now this is what I hoped to see. The marginal posterior of delta almost exactly returns its prior, properly reflecting the inability of the data to say anything about it. The posterior of tauv looks almost identical to the original model with a very slightly longer tail:

9491-6101_calzetti_orig_pairs
Pairs plot of parameters tauv and summed log likelihood, unmodified Calzetti attenuation curve

So this solves one problem, but now I must worry that the prior is too tight in general, since Salim’s results predict a considerably larger spread of slopes. As a first tentative test I ran another spectrum from the same spiral galaxy (this is mangaid 1-382712) that had moderately large attenuation in the original model (1.08 ± 0.04) with both “tight” and “loose” priors on delta, with the following results:

9491-6101_tauv1_modcalzetti_tight_pairs
Joint posterior of parameters tauv and delta with “tight” prior
9491-6101_tauv1_modcalzetti_loose_pairs
Joint posterior of parameters tauv and delta with “loose” prior

The distributions of tauv look nearly the same, while delta shrinks very slightly towards 0 with a tight prior, but with almost the same variance. Some shrinkage towards Calzetti’s original curve might be OK. Anyway, on to a larger sample.

A more flexible prescription for dust attenuation (?)

I finally got around to reading a paper by Salim, Boquien, and Lee (2018), who proposed a simple modification to Calzetti’s well known starburst attenuation relation that they claim accounts for most of the diversity of curves in both star forming and quiescent galaxies. For my purposes their equation 3, which summarizes the relation, can be simplified in two ways. First, for mostly historical reasons optical astronomers typically quantify the effect of dust with a color excess, usually E(B-V). If the absolute attenuation is needed, which is certainly the case in SFH modeling, it is recovered through the ratio of absolute to “selective” attenuation, usually written \(\mathrm{R_V = A_V/E(B-V)}\). Parameterizing attenuation by color excess adds an unnecessary complication for my purposes. Salim et al. use a family of curves that differ only in a “slope” parameter, \(\delta\) in their notation; changing \(\delta\) changes \(\mathrm{R_V}\) according to their equation 4. But I have always parametrized dust attenuation by the optical depth at V, \(\tau_V\), so that the relation between intrinsic and observed flux is

\(F_o(\lambda) = F_i(\lambda) \mathrm{e}^{-\tau_V k(\lambda)}\)

Note that parameterizing by optical depth rather than total attenuation \(\mathrm{A_V}\) is just a matter of taste, since the two differ only by a factor of about 1.086. The wavelength dependent part of the relationship is the same.
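Spelling out the conversion between the two parameterizations, directly from the definition of magnitudes:

\(A_V = -2.5\log_{10}(F_o/F_i)\big|_{5500} = 2.5\log_{10}(\mathrm{e})\,\tau_V \approx 1.086\,\tau_V\)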

The second simplification results from the fact that the UV or 2175Å “bump” in the attenuation curve will never be redshifted into the spectral range of MaNGA data and in any case the subset of the EMILES library I currently use doesn’t extend that far into the UV. That removes the bump amplitude parameter and the second term in Salim et al.’s equation 3, reducing it to the form

\(k_{mod}(\lambda) = k_{Cal}(\lambda)(\frac{\lambda}{5500})^\delta\)

The published expression for \(k(\lambda)\) in Calzetti et al. (2000) is given in two segments, with a small discontinuity due to rounding at the transition wavelength of 6300Å. That discontinuity is unphysical when the curve is applied to spectra, so I just made a polynomial fit to the Calzetti curve over a longer wavelength range than gets used for modeling MaNGA or SDSS data. Also, I make the wavelength parameter \(y = 5500/\lambda\) instead of using the wavelength in microns as in Calzetti. With these adjustments Calzetti’s relation is, to more digits than necessary:

\(k_{Cal}(y) = -0.10177 + 0.549882y + 1.393039 y^2 - 1.098615 y^3 + 0.260628 y^4\)

and Salim’s modified curve is

\(k_{mod}(y) = k_{Cal}(y)y^\delta\)

Note that δ has the opposite sign to that in Salim et al. Here is what the curve looks like over the observer frame wavelength range of MaNGA. A positive value of δ produces a steeper attenuation curve than Calzetti’s, while a negative value is shallower (“grayer” in common astronomers’ jargon). Salim et al. found typical values to range between about -0.1 and +1.
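Something like the following sketch reproduces the polynomial fit and checks the sign convention; the starting two-segment expression is the standard Calzetti et al. (2000) curve with \(\mathrm{R_V} = 4.05\), but the fitting range and degree here are illustrative rather than exactly what I used (and it’s in Python purely for illustration):

```python
import numpy as np

def k_calzetti(lam_um):
    """Two-segment Calzetti et al. (2000) curve, normalized to ~1 at V (0.55 um)."""
    lam_um = np.asarray(lam_um, dtype=float)
    rv = 4.05
    blue = 2.659 * (-2.156 + 1.509/lam_um - 0.198/lam_um**2
                    + 0.011/lam_um**3) + rv
    red = 2.659 * (-1.857 + 1.040/lam_um) + rv
    # the segments join (with a small rounding discontinuity) at 0.63 um
    return np.where(lam_um < 0.63, blue, red) / rv

# quartic fit in y = 5500/lambda over a range wider than the MaNGA coverage
lam = np.linspace(0.3, 0.9, 601)            # microns
y = 0.55 / lam
coef = np.polynomial.polynomial.polyfit(y, k_calzetti(lam), 4)

def k_cal(y):
    """Smooth polynomial approximation, analogous to the quartic quoted above."""
    return np.polynomial.polynomial.polyval(y, coef)

def k_mod(y, delta):
    """Modified curve; note delta has the opposite sign to Salim et al."""
    return k_cal(y) * y**delta
```

With this parameterization y > 1 in the blue, so a positive δ multiplies the curve by a factor greater than 1 there and less than 1 in the red, i.e. a steeper curve, as described above.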

calzetti_mod_curve
Calzetti attenuation relation with modification proposed by Salim, Boquien, and Lee (2018). Dashed lines show shift in curve for their parameter δ = ± 0.3.

For a first pass attempt at modeling some real data I chose a spectrum from near the northern nucleus of Mrk 848 but outside the region of the highest velocity outflows. This spectrum had large optical depth \(\tau_V \approx 1.52\) and the unusual nature of the galaxy gave reason to think its extinction curve might differ significantly from Calzetti’s.

Encouragingly, the Stan model converged without difficulty and with an acceptable run time. Below are some posterior plots of the two attenuation parameters and a comparison to the optical depth parameter in the unmodified Calzetti dust model. I used a fairly informative prior of Normal(0, 0.5) for the parameter delta. The data actually constrain the value of delta, since its posterior marginal density is centered around -0.06 with a standard deviation of just 0.02. In the pairs plot below we can see there’s some correlation between the posteriors of tauv and delta, but not so much as to be concerning (yet).

mrk848_tauv_delta
(TL) marginal posteriors of optical depth parameter for Calzetti (red) and modified (dark gray) relation. (TR) parameter δ in modified Calzetti relation (BL) pairs plot of `tauv` and `delta` (BR) trace plots of `tauv` and `delta`

Overall the modified Calzetti model favors a slightly grayer attenuation curve with lower absolute attenuation:

mrk848_n_attenuation
Total attenuation for original and modified Calzetti relations. Spectrum was randomly selected near the northern nucleus of Mrk 848.

Here’s a quick look at the effect of the model modification on some key quantities. In the plot below the red symbols are for the unmodified Calzetti attenuation model, and the gray or blue ones for the modified one. These show histograms of the marginal posterior density of total stellar mass, 100 Myr averaged star formation rate, and specific star formation rate. Because the modified model has lower total attenuation it needs fewer stars, so the lower stellar mass (by ≈ 0.05 dex) is a fairly predictable consequence. The star formation rate is also lower by a similar amount, making the estimates of specific star formation rate nearly identical.

The lower right pane compares model mass growth histories. I don’t have any immediate intuition about how the difference in attenuation models affects the SFH models, but notice that both show recent acceleration in star formation, which was a main theme of my Markarian 848 posts.

Stellar mass, SFR, SSFR, and mass growth histories for original and modified Calzetti attenuation relation.

So, this first run looks ok. Of course the problem with real data is there’s no way to tell if a model modification actually brings it closer to reality — in this case it did improve the fit to the data a little bit (by about 0.2% in log likelihood) but some improvement is expected just from adding a parameter.

My concern right now is that if the dust attenuation is near 0 the data can’t constrain the value of δ. The best that can happen in this situation is for the model to return the prior. Preliminary investigation of a low attenuation spectrum (per the original model) suggests that in fact a tighter prior on delta is needed than what I originally chose.

Dust attenuation measured two ways

One more post making use of the measurement error model introduced last time, and then I think I’ll move on. I estimate the dust attenuation of the starlight in my SFH models using a Calzetti attenuation relation parametrized by the optical depth at V (τV). If good estimates of Hα and Hβ emission line fluxes are obtained, we can also make optical depth estimates for the emission line regions. Just to quickly review the math, we have:

\(A_\lambda = \mathrm{e}^{-\tau_V k(\lambda)}\)

where \(k(\lambda)\) is the attenuation curve normalized to 1 at V (5500Å) and \(A_\lambda\) is the fraction of flux transmitted at wavelength λ. Assuming an intrinsic Balmer decrement of 2.86, which is more or less the canonical value for H II regions, the estimated optical depth at V from the observed fluxes is:

\(\tau_V^{bd} = \ln(\frac{\mathrm{F}(\mathrm{H}\alpha)/\mathrm{F}(\mathrm{H}\beta)}{2.86})/(k(4861)-k(6563))\)

The SFH models return samples from the posteriors of the emission line fluxes, from which sample values of \(\tau_V^{bd}\) are calculated.
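As a numerical sanity check, here’s a minimal sketch of that inversion (Python for illustration), with the k values computed from the quartic approximation quoted in the last post; the example decrement is arbitrary:

```python
import math

def k_cal(lam_ang):
    """Calzetti curve from the quartic in y = 5500/lambda."""
    y = 5500.0 / lam_ang
    return -0.10177 + 0.549882*y + 1.393039*y**2 - 1.098615*y**3 + 0.260628*y**4

def tauv_bd(f_halpha, f_hbeta, intrinsic=2.86):
    """Optical depth at V implied by the observed Balmer decrement."""
    return math.log((f_halpha / f_hbeta) / intrinsic) / (k_cal(4861.0) - k_cal(6563.0))

# e.g. an observed decrement of 4 implies tau_V of about 1.05,
# and a decrement at the intrinsic value implies tau_V = 0
```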

Here is a plot of the estimated attenuation from the Balmer decrement vs. the SFH model estimates for all spectra from the 28 galaxy sample in the last two posts that have BPT classifications other than no or weak emission. Error bars are ±1 standard deviation.

tauv_bd__tauv
τVbd vs. τVstellar for 962 binned spectra in 28 MaNGA galaxies. Cloud of lines is from fit described in text. Solid and dashed lines are 1:1 and 2:1 relations.

It’s well known that attenuation in emission line regions is larger than that of the surrounding starlight, with a typical reddening ratio of ∼2 (among many references see the review by Calzetti (2001) and Charlot and Fall (2000)). One thing that’s clear in this plot, and that I haven’t seen explicitly mentioned in the literature, is that even in the limit of no attenuation of the starlight there is typically some in the emission lines. I ran the regression with measurement error model on this data set and got the estimated relationship \(\tau_V^{bd} = 0.8 (\pm 0.05) + 1.7 ( \pm 0.09) \tau_V^{stellar}\), with a rather large estimated scatter of ≈ 0.45. So the slope is a little shallower than what’s typically assumed. The non-zero intercept seems to be a robust result, although it’s possible the models are systematically underestimating Hβ emission. I have no other reason to suspect that, though.

The large scatter shouldn’t be too much of a surprise. The shape of the attenuation curve is known to vary between and even within galaxies. Adopting a single canonical value for the Balmer decrement may be an oversimplification too, especially for regions ionized by mechanisms other than hot young stars. My models may be overdue for a more flexible prescription for attenuation.

The statistical assumptions of the measurement error model are a little suspect in this data set as well. The attenuation parameter tauv is constrained to be positive in the models. When it wants to be near 0 the samples from the posterior will pile up near 0 with a tail of larger values, looking more like draws from an exponential or gamma distribution than a gaussian. Here is an example from one galaxy in the sample that happens to have a wide range of mean attenuation estimates:

tauv_example_posteriors
Histograms of samples from the marginal posterior distributions of the parameter tauv for 4 spectra from plateifu 8080-3702.

I like theoretical quantile-quantile plots better than histograms for this type of visualization:

tauv_example_posterior_qqnorm
Normal quantile-quantile plots of samples from the marginal posterior distributions of the parameter tauv for 4 spectra from plateifu 8080-3702.

I haven’t looked at the distributions of emission line ratios in much detail. They might behave strangely in some cases too. But regardless of the validity of the statistical model applied to this data set it’s apparent that there is a fairly strong correlation, which is encouraging.

A simple Bayesian model for linear regression with measurement errors

Here’s a pretty common situation with astronomical data: two or more quantities are measured with nominally known uncertainties that in general will differ between observations. We’d like to explore the relationship among them, if any. After graphing the data and establishing that there is a relationship a first cut quantitative analysis would usually be a linear regression model fit to the data. But the ordinary least squares fit is biased and might be severely misleading if the measurement errors are large enough. I’m going to present a simple measurement error model formulation that’s amenable to Bayesian analysis and that I’ve implemented in Stan. This model is not my invention by the way — in the astronomical literature it dates at least to Kelly (2007), who also explored a number of generalizations. I’m only going to discuss the simplest case of a single predictor or covariate, and I’m also going to assume all conditional distributions are gaussian.
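The bias alluded to above is the classic regression dilution effect: with gaussian errors the OLS slope is shrunk by the factor \(\mathrm{var}(x^{lat})/(\mathrm{var}(x^{lat}) + \sigma_x^2)\), so measurement error comparable to the intrinsic spread in x cuts the estimated slope roughly in half. A small simulation (Python, purely illustrative numbers) shows the effect:

```python
import numpy as np

rng = np.random.default_rng(42)
n, b0, b1 = 10_000, 10.0, 2.0
sigma_x = 1.0                        # x measurement error comparable to the spread in x

x_lat = rng.normal(0.0, 1.0, n)      # latent covariate with unit variance
y_obs = b0 + b1 * x_lat + rng.normal(0.0, 0.5, n)
x_obs = x_lat + rng.normal(0.0, sigma_x, n)

slope_true_x = np.polyfit(x_lat, y_obs, 1)[0]   # ~ 2, the true slope
slope_noisy_x = np.polyfit(x_obs, y_obs, 1)[0]  # ~ 2 * 1/(1+1) = 1, badly attenuated
```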

The basic idea of the model is that the real, unknown quantities (statisticians call these latent variables, and so will I) are related through a linear regression. The conditional distribution of the latent dependent variable is

\(y^{lat}_i | x^{lat}_i, \beta_0, \beta_1, \sigma \sim \mathcal{N}(\beta_0 + \beta_1 x^{lat}_i, \sigma)~~~ i = 1, \cdots, N\)

The observed values then are generated from the latent ones with distributions

\(y^{obs}_i| y^{lat}_i \sim \mathcal{N}(y^{lat}_i, \sigma_{y, i})\)

\(x^{obs}_i| x^{lat}_i \sim \mathcal{N}(x^{lat}_i, \sigma_{x, i})~~~ i = 1, \cdots, N\)

where \(\sigma_{x, i}, \sigma_{y, i}\) are the known standard deviations. The full joint distribution is completed by specifying priors for the parameters \(\beta_0, \beta_1, \sigma\). This model is very easy to implement in the Stan language and the complete code is listed below. I’ve also uploaded the code, a script to reproduce the simulated data example discussed below, and the SFR-stellar mass data from the last post in a dropbox folder.

/**
 * Simple regression with measurement error in x and y
*/

data {
  int<lower=0> N;
  vector[N] x;
  vector<lower=0>[N] sd_x;
  vector[N] y;
  vector<lower=0>[N] sd_y;
} 

// standardize data
transformed data {
  real mean_x = mean(x);
  real sd_xt = sd(x);
  real mean_y = mean(y);
  real sd_yt = sd(y);
  
  vector[N] xhat = (x - mean_x)/sd_xt;
  vector<lower=0>[N] sd_xhat = sd_x/sd_xt;
  
  vector[N] yhat = (y - mean_y)/sd_yt;
  vector<lower=0>[N] sd_yhat = sd_y/sd_yt;
  
}


parameters {
  vector[N] x_lat;
  vector[N] y_lat;
  real beta0;
  real beta1;
  real<lower=0> sigma;
}
transformed parameters {
  vector[N] mu_yhat = beta0 + beta1 * x_lat;
}
model {
  x_lat ~ normal(0., 1000.);
  beta0 ~ normal(0., 5.);
  beta1 ~ normal(0., 5.);
  sigma ~ normal(0., 10.);
  
  xhat ~ normal(x_lat, sd_xhat);
  y_lat ~ normal(mu_yhat, sigma);
  yhat ~ normal(y_lat, sd_yhat);
  
} 

generated quantities {
  vector[N] xhat_new;
  vector[N] yhat_new;
  vector[N] y_lat_new;
  vector[N] x_new;
  vector[N] y_new;
  vector[N] mu_x;
  vector[N] mu_y;
  real b0;
  real b1;
  real sigma_unorm;
  
  b0 = mean_y + sd_yt*beta0 - beta1*sd_yt*mean_x/sd_xt;
  b1 = beta1*sd_yt/sd_xt;
  sigma_unorm = sd_yt * sigma;
  
  mu_x = x_lat*sd_xt + mean_x;
  mu_y = mu_yhat*sd_yt + mean_y;
  
  for (n in 1:N) {
    xhat_new[n] = normal_rng(x_lat[n], sd_xhat[n]);
    y_lat_new[n] = normal_rng(beta0 + beta1 * x_lat[n], sigma);
    yhat_new[n] = normal_rng(y_lat_new[n], sd_yhat[n]);
    x_new[n] = sd_xt * xhat_new[n] + mean_x;
    y_new[n] = sd_yt * yhat_new[n] + mean_y;
  }
}

Most of this code should be self explanatory, but there are a few things to note. In the transformed data section I standardize both variables, that is I subtract the means and divide by the standard deviations. The individual observation standard deviations are also scaled. The transformed parameters block is strictly optional in this model. All it does is spell out the linear part of the linear regression model.

In the model block I give a very vague prior for the latent x variables. This has almost no effect on the model output since the posterior distributions of the latent values are strongly constrained by the observed data. The parameters of the regression model are given less vague priors. Since we standardized the data we know beta0 should be centered near 0 and beta1 near 1 and all three should be approximately unit scaled. The rest of the model block just encodes the conditional distributions I wrote out above.

Finally the generated quantities block does two things: generate some new simulated data values under the model using the sampled parameters, and rescale the parameter values to the original data scale. The first task enables what’s called “posterior predictive checking.” Informally the idea is that if the model is successful, simulated data generated under it should look like the data that was input.
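The rescaling is just the standard back-transform for a regression fit on standardized variables. A quick check of the algebra (sketched in Python, with noiseless data so the recovery is exact):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(5.0, 3.0, 100)
y = 1.5 + 0.8 * x                    # exact line, so the fit recovers it exactly

# standardize as in the transformed data block (Stan's sd() is the sample sd)
mx, sx, my, sy = x.mean(), x.std(ddof=1), y.mean(), y.std(ddof=1)
beta1, beta0 = np.polyfit((x - mx)/sx, (y - my)/sy, 1)

# back-transform as in the generated quantities block
b1 = beta1 * sy / sx
b0 = my + sy * beta0 - beta1 * sy * mx / sx
```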

It’s always a good idea to try out a model with some simulated data that conforms to it, so here’s a script to generate some data, then print and graph some basic results. This is also in the dropbox folder.

testme <- function(N=200, mu_x=0, b0=10, b1=2, sigma=0.5, seed1=1234, seed2=23455, ...) {
  require(rstan)
  require(ggplot2)
  set.seed(seed1)
  # simulate latent x values and heteroskedastic measurement errors
  x_lat <- rnorm(N, mean=mu_x, sd=1.)
  sd_x <- runif(N, min=0.1, max=0.2)
  sd_y <- runif(N, min=0.1, max=0.2)
  x_obs <- x_lat + rnorm(N, sd=sd_x)
  # latent y from the regression relation, then add measurement error
  y_lat <- b0 + b1*x_lat + rnorm(N, sd=sigma)
  y_obs <- y_lat + rnorm(N, sd=sd_y)
  stan_dat <- list(N=N, x=x_obs, sd_x=sd_x, y=y_obs, sd_y=sd_y)
  df1 <- data.frame(x_lat=x_lat, x=x_obs, sd_x=sd_x, y_lat=y_lat, y=y_obs, sd_y=sd_y)
  sfit <- stan(file="ls_me.stan", data=stan_dat, seed=seed2, chains=4, cores=4, ...)
  print(sfit, pars=c("b0", "b1", "sigma_unorm"), digits=3)
  # plot the data with error bars in both coordinates...
  g1 <- ggplot(df1) + geom_point(aes(x=x, y=y)) +
          geom_errorbar(aes(x=x, ymin=y-sd_y, ymax=y+sd_y)) +
          geom_errorbarh(aes(y=y, xmin=x-sd_x, xmax=x+sd_x))
  post <- extract(sfit)
  df2 <- data.frame(x_new=as.numeric(post$x_new), y_new=as.numeric(post$y_new))
  # ...plus an ellipse summarizing the posterior predictive draws and
  # a semi-transparent line per posterior draw of the regression coefficients
  g1 <- g1 + stat_ellipse(aes(x=x_new, y=y_new), data=df2, geom="path",
                          type="norm", linetype=2, color="blue") +
             geom_abline(slope=post$b1, intercept=post$b0, alpha=1/100)
  plot(g1)
  list(sfit=sfit, stan_dat=stan_dat, df=df1, graph=g1)
}
  

This should print the following output:

Inference for Stan model: ls_me.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

             mean se_mean    sd  2.5%   25%   50%    75%  97.5% n_eff  Rhat
b0          9.995   0.001 0.042 9.912 9.966 9.994 10.023 10.078  5041 0.999
b1          1.993   0.001 0.040 1.916 1.965 1.993  2.020  2.073  4478 1.000
sigma_unorm 0.472   0.001 0.037 0.402 0.447 0.471  0.496  0.549  2459 1.003

Samples were drawn using NUTS(diag_e) at Thu Jan  9 10:09:34 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

and graph:

bayes_regression_measurment_error_fake
Simulated data and model fit from script in text. Ellipse is 95% confidence interval for new data generated from model.

This all looks pretty good. The true parameters are well within the 95% confidence bounds of the estimates. In the graph the simulated new data is summarized with a 95% confidence ellipse, which encloses just about 95% of the input data, so the posterior predictive check indicates a good model. Stan is quite aggressive at flagging potential convergence failures, and no warnings were generated.

Turning to some real data I also included in the dropbox folder the star formation rate density versus stellar mass density data that I discussed in the last post. This is in something called R “dump” format, which is just an ascii file with R assignment statements for the input data. This isn’t actually a convenient form for input to rstan’s sampler or ggplot2’s plotting commands, so once loaded the data are copied into a list and a data frame. The interactive session for analyzing the data was:

> source("sf_mstar_sfr.txt")
> sf_dat <- list(N=N, x=x, sd_x=sd_x, y=y, sd_y=sd_y)
> df <- data.frame(x=x, sd_x=sd_x, y=y, sd_y=sd_y)
> stan_sf <- stan(file="ls_me.stan", data=sf_dat, chains=4, seed=12345L)
> post <- extract(stan_sf)
> odr <- pracma::odregress(df$x, df$y)
> ggplot(df, aes(x=x, y=y)) + geom_point() +
    geom_errorbar(aes(x=x, ymin=y-sd_y, ymax=y+sd_y)) +
    geom_errorbarh(aes(y=y, xmin=x-sd_x, xmax=x+sd_x)) +
    geom_abline(slope=post$b1, intercept=post$b0, alpha=1/100) +
    stat_ellipse(aes(x=x_new, y=y_new),
                 data=data.frame(x_new=as.numeric(post$x_new), y_new=as.numeric(post$y_new)),
                 geom="path", type="norm", linetype=2, color="blue") +
    geom_abline(slope=odr$coef[1], intercept=odr$coef[2], color='red', linetype=2)
> print(stan_sf, pars=c("b0","b1","sigma_unorm"))

 
bayes_regression_measurment_error_sfms
Star formation rate vs. Stellar mass for star forming regions. Data from previous post. Semi-transparent lines – model fits to the regression line as described in text. Dashed red line – “orthogonal distance regression” fit.
Inference for Stan model: ls_me.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

              mean se_mean   sd   2.5%    25%    50%    75%  97.5% n_eff Rhat
b0          -11.20       0 0.26 -11.73 -11.37 -11.19 -11.02 -10.67  7429    1
b1            1.18       0 0.03   1.12   1.16   1.18   1.20   1.24  7453    1
sigma_unorm   0.27       0 0.01   0.25   0.27   0.27   0.28   0.29  7385    1

Samples were drawn using NUTS(diag_e) at Thu Jan  9 10:49:42 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

Once again the model seems to fit the data pretty well, the posterior predictive check is reasonable, and Stan didn’t complain. Astronomers are often interested in the “cosmic variance” of various quantities, that is the true amount of unaccounted for variation. If the error estimates in this data set are reasonable, the cosmic variance in SFR density is estimated by the parameter σ. The mean estimate of around 0.3 dex is consistent with other estimates in the literature (see the compilation in the paper by Speagle et al. that I cited last time, for example).

I noticed in preparing the last post that several authors have used something called “orthogonal distance regression” (ODR) to estimate the star forming main sequence relationship. I don’t know much about the technique beyond that it’s an errors-in-variables regression method. There is an R implementation in the package pracma. The red dashed line in the plot above is the estimate for this dataset. The estimated slope (1.41) is much steeper than the range of estimates from this model. On casual visual inspection, though, it’s not clearly worse at capturing the mean relationship.

A few properties of my transitional galaxy candidate sample

It took a few months but I did manage to analyze 28 of the 29 galaxies in the sample I introduced last time. One member — mangaid 1-604907 — hosts a broad line AGN and has broad emission lines throughout. That’s not favorable for my modeling methods, so I left it out. It took a while to develop a more or less standardized analysis protocol, so there may be some variation in S/N cuts in binning the spectra and in details of model runs in Stan. Most runs used 250 warmup and 750 total iterations for each of 4 chains run in parallel, with some adaptation parameters changed from their default values (I set the target acceptance probability adapt_delta to 0.925 or 0.95 and the maximum treedepth for the No U-Turn Sampler max_treedepth to 11-12). A total post-warmup sample size of 2000 is enough for the inferences I want to make. One of the major advantages of the NUTS sampler is that once it converges it tends to produce draws from the posterior with very low autocorrelation, so effective sample sizes tend to be close to the number of samples.

I’m just going to look at a few measured properties of the sample in this post. In future ones I may look in more detail at some individual galaxies or the sample as a whole. Without a control sample it’s hard to say if this one is significantly different from a randomly chosen sample of galaxies, and I’m not going to try. In the plots shown below each point represents measurements on a single binned spectrum. The number of binned spectra per galaxy ranged from 15 to 153 with a median of 51.5, so a relatively small number of galaxies contribute disproportionately to these plots.

One of the more important empirical results in extragalactic astrophysics is the existence of a fairly well defined and approximately linear relationship between stellar mass and star formation rate for star forming galaxies, which has come to be known as the “star forming main sequence.” Thanks to CALIFA and MaNGA it’s been established in recent years that the SFMS extends to subgalactic scales as well, at least down to the ∼kpc resolution of these surveys. This first plot is of the star formation rate surface density vs. stellar mass surface density, where recall my estimate of SFR is for a time scale of 100 Myr. Units are \(\mathrm{M_\odot /yr/kpc^2} \) and \(\mathrm{M_\odot /kpc^2} \), logarithmically scaled. These estimates are uncorrected for inclination and are color coded by BPT class using Kauffmann’s classification scheme for [N II] 6584, with two additional classes for spectra with weak or no emission lines.

If we take spectra with star forming line ratios as comprising the SFMS there is a fairly tight relation: the cloud of lines are estimates from a Bayesian simple linear regression with measurement error model fit to the points with star forming BPT classification only (N = 428). The modeled relationship is \(\Sigma_{sfr} = -11.2 (\pm 0.5) + 1.18 (\pm 0.06)~ \Sigma_{M^*}\) (95% marginal confidence limits), with a scatter around the mean relation of ≈ 0.27 dex. The slope here is rather steeper than most estimates (for example, in a large compilation by Speagle et al. (2014) none of the estimates exceeded a slope of 1), but perhaps coincidentally it is very close to an estimate for a small sample of MaNGA starforming galaxies in Lin et al. (2019). I don’t assign any particular significance to this result. The slope of the SFMS is highly sensitive to the fitting method used, the SFR and stellar mass calibrators, and selection effects. Also, the slope and intercept estimates are highly correlated for both Bayesian and frequentist fitting methods.

One notable feature of this plot is the rather clear stratification by BPT class, with regions having AGN/LINER line ratios and weak emission line regions offset downwards by ~1 dex. Interestingly, regions with “composite” line ratios straddle both sides of the main sequence, with some of the largest outliers on the high side. This is mostly due to the presence of Markarian 848 in the sample, which we saw in recent posts has composite line ratios in most of the area of the IFU footprint and high star formation rates near the northern nucleus (with even more hidden by dust).

sigma_sfrXsigma_mstar
Σsfr vs. ΣM*. Cloud of straight lines is an estimate of the star-forming main sequence relation based on spectra with star-forming line ratios. Sample is all analyzed spectra from the set of “transitional” candidates of the previous post.

Another notable relationship that I’ve shown previously for a few individual galaxies is between the star formation rate estimated from the SFH models and Hα luminosity, which is the main SFR calibrator in optical spectra. In the left hand plot below Hα is corrected for the estimated attenuation for the stellar component in the SFH models. The straight line is the SFR-Hα calibration of Moustakas et al. (2006), which can be traced back to early ’90s work by Kennicutt.

Most of the sample does follow a linear relationship between SFR density and Hα luminosity density with an offset from the Kennicutt-Moustakas calibration, but there appears to be a departure from linearity at the low SFR end, in the sense that the 100 Myr averaged SFR exceeds the amount predicted by Hα (which recall traces star formation on 5-10 Myr scales). This might be interpreted as indicating that the sample includes a significant number of regions that have been very recently quenched (that is, within the past 10-100 Myr). There are other possible interpretations though, including biased estimates of Hα luminosity when emission lines are weak.

In the right hand panel below I plot the same relationship, but with Hα corrected for attenuation using the Balmer decrement for spectra with firm detections in the four lines that go into the [N II]/Hα vs. [O III]/Hβ BPT classification, and therefore firm detections in Hβ. The sample now nicely straddles the calibration line over the ∼4 orders of magnitude of SFR density estimates. So the attenuation in the regions where emission lines arise is systematically higher than the estimated attenuation of the stellar light. This is a well known result. What’s encouraging is that it implies my model attenuation estimates actually contain useful information.

sigma_sfrXsigma_logl_ha
(L) Estimated Σsfr vs. Σlog L(Hα) corrected for attenuation using stellar attenuation estimate. (R) same but Hα luminosity corrected using Balmer decrement. Spectra with detected Hβ emission only.

One final relation: some measure of the 4000Å break strength has been used as a calibrator of specific star formation rate since at least Brinchmann et al. (2004). Below is my version using the “narrow” definition of D4000. I haven’t attempted a quantitative comparison with any other work, but clearly there’s a well defined relationship. Maybe worth noting is that “red and dead” ETGs typically have \(\mathrm{D_n(4000)} \gtrsim 1.75\) (see my previous post for example). Very few of the spectra in this sample fall in that region, and most are low S/N spectra in the outskirts of a few of the galaxies.
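For reference, the narrow index is just the ratio of the mean flux density (in \(F_\nu\) units) in 4000-4100Å to that in 3850-3950Å. Here’s a minimal sketch (Python for illustration), assuming a rest frame wavelength grid with fluxes already in \(F_\nu\):

```python
import numpy as np

def d4000_n(lam, f_nu):
    """Narrow 4000A break index: mean F_nu(4000-4100) / mean F_nu(3850-3950)."""
    lam, f_nu = np.asarray(lam), np.asarray(f_nu)
    red = (lam >= 4000.0) & (lam <= 4100.0)
    blue = (lam >= 3850.0) & (lam <= 3950.0)
    return f_nu[red].mean() / f_nu[blue].mean()

lam = np.arange(3800.0, 4200.0, 1.0)
flat = np.ones_like(lam)        # flat F_nu spectrum: no break, index = 1
rising = lam / 4000.0           # spectrum rising to the red: index > 1
```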

ssfrXd4000
Specific star formation rate vs. Dn4000

Two obvious false positives in this sample were a pair of grand design spirals (mangaids 1-23746 and 1-382712) with H II regions sprinkled along the full length of their arms. To see why they were selected and verify that they're in fact false positives, here are BPT maps:

8611-12702_bptmap
Map of BPT classification — mangaid 1-23746 (plateifu 8611-12702)
9491-6101_bptmap
Map of BPT classification — mangaid 1-382712 (plateifu 9491-6101)

These are perfect illustrations of the perils of using single fiber spectra for sample selection when global galaxy properties are of interest. The central regions of both galaxies have “composite” spectra, which might actually indicate that the emission is from a combination of AGN and star forming regions, but outside the nuclear regions star forming line ratios prevail throughout.

These two galaxies contribute about 45% of the binned spectra with star forming line ratios, so the SFMS would be much more sparsely populated without their contribution. Only one other galaxy (mangaid 1-523050) is similarly dominated by SF regions and it has significantly disturbed morphology.

I may return to this sample or individual members in the future. Probably my next posts will be about Bayesian modelling though.

Markarian 848 – Closing topics

I’m going to close out my analysis of Mrk 848 for now with three topics. First, dust. Like most SED fitting codes mine produces an estimate of the internal attenuation, which I parameterize with τV, the optical depth at V assuming a conventional Calzetti attenuation curve. Before getting into a discussion for context here is a map of the posterior mean estimate for the higher S/N target binning of the data. For reference isophotes of the synthesized r band surface brightness taken from the MaNGA data cube are superimposed:

mrk848_tauv_map
Map of posterior mean of τV from single component dust model fits with Calzetti attenuation

This compares reasonably well with my visual impression of the dust distribution. Both nuclei have very large dust optical depths with a gradual decline outside, while the northern tidal tail has relatively little attenuation.

The paper by Yuan et al. that I looked at last time devoted most of its space to different ways of modeling dust attenuation, ultimately concluding that a two component dust model of the sort advocated by Charlot and Fall (2000) was needed to bring results of full spectral fitting using ppxf on the same MaNGA data as I’ve examined into reasonable agreement with broad band UV-IR SED fits.

There’s certainly some evidence in support of this. Here is a plot I’ve shown for some other systems: the estimated optical depth of the Balmer emission line emitting regions, based on the observed vs. theoretical Balmer decrement (I’ve assumed an intrinsic Hα/Hβ ratio of 2.86 and a Calzetti attenuation relation), plotted against the optical depth estimated from the SFH models, which roughly measures the amount of reddening needed to fit the SSP model spectra to the observed continuum. In some respects this is a more favorable system than some I’ve looked at because Hβ emission is at measurable levels throughout. On the other hand there is clear evidence that multiple ionization mechanisms are at work, so the assumption of a single canonical value of Hα/Hβ is likely too simple. This might be a partial cause of the scatter in the modeled relationship, but it’s encouraging that there is a strong positive correlation (for those who care, the correlation coefficient between the mean values is 0.8).
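To make the construction of the Balmer-decrement axis concrete, here is a hedged sketch (my own function names, not the actual model code) of converting an observed Hα/Hβ ratio into an optical depth under the stated assumptions: intrinsic Hα/Hβ = 2.86 and the Calzetti (2000) curve with R_V = 4.05.

```python
# Sketch: tau_V of the Balmer-line emitting regions from the observed Balmer
# decrement, assuming an intrinsic Halpha/Hbeta ratio of 2.86 and the
# Calzetti (2000) attenuation curve.
import math

def calzetti_k(lam_um):
    """Calzetti (2000) attenuation curve k(lambda), lambda in microns."""
    x = 1.0 / lam_um
    if 0.63 <= lam_um <= 2.20:
        return 2.659 * (-1.857 + 1.040 * x) + 4.05
    if 0.12 <= lam_um < 0.63:
        return 2.659 * (-2.156 + 1.509 * x - 0.198 * x**2 + 0.011 * x**3) + 4.05
    raise ValueError("wavelength outside Calzetti curve validity range")

K_HA = calzetti_k(0.6563)   # ~3.33
K_HB = calzetti_k(0.4861)   # ~4.60

def tau_v_balmer(ha_hb, intrinsic=2.86, r_v=4.05):
    """tau_V implied by an observed Halpha/Hbeta flux ratio."""
    ebv = 2.5 / (K_HB - K_HA) * math.log10(ha_hb / intrinsic)
    a_v = r_v * ebv                           # V-band attenuation in magnitudes
    return a_v / (2.5 * math.log10(math.e))   # magnitudes -> optical depth
```

An observed decrement at the intrinsic value gives τ_V = 0 by construction; a decrement of 5 gives τ_V ≈ 1.8, comparable to the values near the nuclei in the map above.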

The solid line in the graph below is 1:1. The semi-transparent cloud of lines are the sampled relationships from a Bayesian errors in variables regression model. The mean (and marginal 1σ uncertainty) is \(\tau_{V, bd} = (0.94\pm 0.11) + (1.21 \pm 0.12) \tau_V\). So the estimated relationship is just a little steeper than 1:1 but with an offset of about 1, which is a little different from the Charlot & Fall model and from what Yuan et al. found, where the youngest stellar component has about 2-3 times the dust attenuation as the older stellar population. I’ve seen a similar not so steep relationship in every system I’ve looked at and don’t know why it differs from what is typically assumed. I may look into it some day.

τV estimated from Balmer decrement vs. τV from model fits. Straight line is 1:1 relation. Cloud of lines are from Bayesian errors in variables regression model.

I did have time to run some 2 dust component SFH models. This is a very simple extension of the single component models: a single optical depth is applied to all SSP spectra, and a second component with the optical depth fixed at 1 larger than the bulk value is applied only to the youngest model spectra, which recall were taken from unevolved SSPs from the updated BC03 library. I’m just going to show the most basic results from the models for now in the form of maps of the SFR density and specific star formation rate. Compared to the same maps displayed at the end of the last post there is very little difference in the spatial variation of these quantities. The main effect of adding more reddened young populations to the model is to replace some of the older light — this is the well known dust-age degeneracy. The average effect was to increase the stellar mass density (by ≈ 0.05 dex overall) while slightly decreasing the 100 Myr average SFR (by ≈ 0.04 dex), leading to an average decrease in specific star formation rate of ≈ 0.09 dex. While there are some spatial variations in all of these quantities no qualitative conclusion changes very much.

mrk848_sigma_sfr_sfr_2dust_maps
Results from fits with 2 component dust models. (L) SFR density. (R) Specific SFR

Contrary to Yuan+ I don’t find a clear need for a 2 component dust model. Without trying to replicate their results I can’t say why exactly we disagree, but I think they erred in aggregating the MaNGA data to the lowest spatial resolution of the broad band photometric data they used, which was 5″ radius. There are clear variations in physical conditions on much smaller scales than this.

Second topic: the most widely accepted SFR indicator in visual wavelengths is Hα luminosity. Here is another plot I’ve displayed previously: a simple scatterplot of Hα luminosity density against the 100Myr averaged star formation rate density from the SFH models. Luminosity density is corrected for attenuation estimated from the Balmer decrement and for comparison the light gray points are the uncorrected values. Points are color coded by BPT class determined in the usual manner. The straight line is the Hα – SFR calibration of Moustakas et al. (2006), which in turn is taken from earlier work by Kennicutt.

Model SFR density vs. Hα luminosity density corrected for extinction estimated from Balmer decrement. Light colored points are uncorrected for extinction. Straight line is Hα-SFR calibration from Moustakas et al. (2006)

Keeping in mind that Hα emission tracks star formation on timescales of ∼10 Myr (to the extent that the ionization is caused by hot young stars; there are evidently multiple ionizing sources in this system, but disentangling their effects seems hard, and note there's no clear stratification by BPT class in this plot), this graph strongly supports the scenario I outlined in the last post. At the highest Hα luminosities the SFR-Hα trend nicely straddles the Kennicutt-Moustakas calibration, consistent with the finding that the central regions of the two galaxies have had ∼constant or slowly rising star formation rates in the recent past. At lower Hα luminosities the 100 Myr average trends consistently above the calibration line, implying a recent fading of star formation.
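For reference, the calibration line is essentially the Kennicutt (1998) conversion SFR = 7.9 × 10⁻⁴² L(Hα), with L in erg/s; the exact Moustakas et al. (2006) zero point may differ slightly, so treat the coefficient below as an assumption. A minimal sketch, with an optional attenuation correction in magnitudes at Hα:

```python
# Sketch of the Halpha-to-SFR conversion behind the calibration line. The
# 7.9e-42 coefficient is the Kennicutt (1998) value; the post's adopted zero
# point may differ at the few-percent level.

def sfr_from_halpha(l_ha_erg_s, a_ha_mag=0.0):
    """SFR in Msun/yr from an Halpha luminosity in erg/s, optionally
    corrected for a_ha_mag magnitudes of attenuation at Halpha (e.g. derived
    from the Balmer decrement)."""
    l_corr = l_ha_erg_s * 10.0 ** (0.4 * a_ha_mag)
    return 7.9e-42 * l_corr
```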

The maps below add some detail, and here the perceptual uniformity of the viridis color palette really helps. If star formation exactly tracked Hα luminosity these two maps would look the same. Instead the northern tidal tail in particular and the small piece of the southern one within the IFU footprint are underluminous in Hα, again implying a recent fading of star formation in the peripheries.

(L) Hα luminosity density, corrected for extinction estimated by Balmer decrement. (R) SFR density (100 Myr average).

Final topic: the fit to the data, and in particular the emission lines. As I’ve mentioned previously I fit the stellar contribution and emission lines simultaneously, generally assuming separate single component gaussian velocity dispersions and a common system velocity offset. This works well for most galaxies, but for active galaxies or systems like this one with complex velocity profiles maybe not so much. In particular the northern nuclear region is known to have high velocity outflows in both ionized and neutral gas, presumably due to supernova driven winds. I’m just going to look at the central fiber spectrum for now. I haven’t examined the fits in detail, but in general they get better outside the immediate region of the center. First, here is the fit to the data using my standard model. In the top panel the gray line, which mostly can’t be seen, is the observed spectrum. Blue are quantiles of the posterior mean fit — this is actually a ribbon, although its width is too thin to be discernible. The bottom panel shows the residuals in standard deviations. Yes, they run as large as ±50σ, with conspicuous problems around all emission lines. There are also a number of usually weak emission lines that I don’t track that are present in this spectrum.

mrk848_fit_central_spec
Fit to central fiber spectrum; model with single gaussian velocity distributions.

I have a solution for cases like this which I call partially parametric. I assume the usual Gauss-Hermite form for the emission lines (as in, for example, ppxf) while the stellar velocity distribution is modeled with a convolution kernel (I think I’ve discussed this previously but I’m too lazy to check right now; if I haven’t I’ll post about it someday). Unfortunately the Stan implementation of this model takes at least an order of magnitude longer to execute than my standard one, which makes its wholesale use prohibitively expensive. It does materially improve the fit to this spectrum although there are still problems with the stronger emission lines. Let’s zoom in on a few crucial regions of the spectrum:

Zoomed in fit to central fiber spectrum using “partially parametric velocity distribution” model. Grey: observed flux. Blue: model.

The two things that are evident here are the clear sign of outflow in the forbidden emission lines, particularly [O III] and [N II], while the Balmer lines are relatively more symmetrical as are the [S II] doublet at 6717, 6730Å. The existence of rather significant residuals is likely because emission is coming from at least two physically distinct regions while the fit to the data is mostly driven by Hα, which as usual is the strongest emission line. The fit captures the emission line cores in the high order Balmer lines rather well and also the absorption lines on the blue side of the 4000Å break except for the region around the [Ne III] line at 3869Å.

I’m mostly interested in star formation histories, and it’s important to see what differences are present. Here is a comparison of three models: my standard one, the two dust component model, and the partially parametric velocity dispersion model:

mrk848_centralsfr3ways
Detailed star formation history models for the northern nucleus using 3 different models.

In fact the differences are small and not clearly outside the range of normal MCMC variability. The two dust model slightly increases the contribution of the youngest stellar component at the expense of slightly older contributors. All three have the presumably artifactual uptick in SFR at 4 Gyr and very similar estimated histories for ages > 1 Gyr.

I still envision a number of future model refinements. The current version of the official data analysis pipeline tracks several more emission lines than I do at present and has updated wavelengths that may be more accurate than the ones from the SDSS spectro pipeline. It might be useful to allow at least two distinct emission line velocity distributions, with for example one for recombination lines and one for forbidden. Unfortunately the computational expense of this sort of generalization at present is prohibitive.

I’m not impressed with the two dust model that I tried, but there may still be useful refinements to the attenuation model to be made. A more flexible form of the Calzetti relation might be useful, for example (there is recent relevant literature on this topic that I’m again too lazy to look up).

My initial impression of this system was that it was a clear false positive that was selected mostly because of a spurious BPT classification. On further reflection with MaNGA data available it’s not so clear. A slight surprise is the strong Balmer absorption virtually throughout the system, with evidence for a recent shutdown of star formation in the tidal tails. A popular scenario for the formation of K+A galaxies through major mergers is that they experience a centrally concentrated starburst after coalescence which, once the dust clears and assuming that some feedback mechanism shuts off star formation, leads to a period of up to a Gyr or so with a classic K+A signature (I previously cited Bekki et al. 2005, who examine this scenario in considerable detail). Capturing a merger in the instant before final coalescence provides important clues about this process.

To the best of my knowledge there have been no attempts at dynamical modeling of this particular system. There is now reasonably good kinematic information for the region covered by the MaNGA IFU, and there is good photometric data from both HST and several imaging surveys. Together these make detailed dynamical modeling technically feasible. It would be interesting if star formation histories could further constrain such models. Being aware of the multiple “degeneracies” between stellar age and other physical properties I’m not highly confident, but it seems provocative that we can apparently identify distinct stages in the evolutionary history of this system.

Markarian 848 – detailed star formation histories

In this post I’m going to take a quick look at detailed, spatially resolved star formation histories for this galaxy pair and briefly compare to some of the recent literature. Before I start though, here is a reprocessed version of the blog’s cover picture with a different crop and some Photoshop curve and level adjustments to brighten the tidal tails a bit. Also I cleaned up some more of the cosmic ray hits.

mrk_848_hst_crop_square
Markarian 848 – full resolution crop with level/curve adjustment. HST ACS/WFC F435W/F814W/F160W false color image

The SFH models I’m going to discuss were based on MaNGA data binned to a more conservative signal to noise target than in the last post. I set a target S/N of 25 — Cappellari’s Voronoi binning algorithm has a hard coded acceptance threshold of 80% of the target S/N, which results in all but a few bins having an actual S/N of at least 20. This produced a binned data set with 63 bins. The map below plots the modeled (posterior mean log) stellar mass density, with clear local peaks in the bins covering the positions of the two nuclei. The bins are numbered in order of increasing distance of the weighted centroids from the IFU center, which corresponds to the position of the northern nucleus. The center of bin 1 is slightly offset from the northern nucleus by about 2/3″. For reference the angular scale at the system redshift (z ≈ 0.040) is 0.8 kpc/″ and the area covered by each fiber is 2 kpc².

Although there’s nothing in the Voronoi binning algorithm that guarantees this it did a nice job of partitioning the data into physically distinct regions, with the two nuclei, the area around the northern nucleus, and the bridge between them all binned to single fiber spectra. The tidal tails are sliced into several pieces while the very low surface brightness regions to the NE and SW get a few bins.

mrk848_stmass_map
Stellar mass density and bin numbers ordered in increasing distance from northern nucleus

I modeled star formation histories with my largest subset of the EMILES library, consisting of SSP model spectra for 54 time bins and 4 metallicity bins. As in previous exercises I fit emission lines and the stellar contribution simultaneously, which is riskier than usual in this system because some regions, especially near the northern nucleus, have complex velocity structures producing emission line profiles that are far from single component gaussians. I’ll look at this in more detail in a future post, but for now let’s just take these SFH models at face value and see where they lead. As I’ve done in several previous posts, what’s plotted below are star formation rates in solar masses/year over cosmic time, counting backwards from “now” (now being when the light left the galaxies, about 500 Myr ago). This time I’ve used log scaling for both axes, and for the first time in these posts I’ve displayed results for every bin, ordered by increasing distance from the IFU center. The first two rows cover the central ~few kpc region surrounding the northern nucleus. The southern nucleus, covered by bin 44, is in row 8, second from left. Both the time and SFR scales are the same for all plots. The black lines are median posterior marginal estimates and the ribbons are 95% posterior confidence intervals.

mrk848_sfh_bin20
Star formation histories for the Voronoi binned regions shown in the previous map. Numbers at the top correspond to the bin numbers in the map.

Browsing through these, all regions show ∼constant or gradually decreasing star formation up to about 1 Gyr ago (you may recall I’ve previously noted there is always an uptick in star formation rate at 4 Gyr in my EMILES based models, and that’s seen in all of these as well; this must be spurious, but I still don’t know the cause). This of course is completely normal for spiral galaxies.

Most regions covered by the IFU began to show accelerated and in some areas bursts of star formation at ∼1 Gyr. In more than half of the bins the maximum star formation rate occurred around 40-60 Myr ago, with a decline or in some areas cessation of star formation more recently. In the central few kpc around the northern nucleus on the other hand star formation accelerated rapidly beginning ~100 Myr ago and continues at high levels. The southern nucleus has a rather different estimated recent star formation history, with no (visible) starburst and instead gradually increasing SFR to a recent maximum. Ongoing star formation at measurable levels is much more localized in the southern central regions and weaker by factors of several than the northern companion.

Here’s a map that I thought would be too noisy to be informative, but turns out to be rather interesting. This shows the spatial distribution of the lookback time to the epoch of maximum star formation rate estimated by the marginal posterior mean. The units displayed are log(age) in Myr; recall that I added unevolved SSP models from the 2013 update of the BC03 models to the BaSTI based SSP library, assigning them an age of 10 Myr, so a value of 1 here basically means ≈ now.

mrk848_lbt_to_maxsfr
Look back time to epoch of maximum star formation rate as estimated by marginal posterior mean. Units are log(age in Myr).
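One simple reading of this statistic, and the one assumed in the sketch below, is: find the age of the SFR maximum in each posterior draw, then average log10(age) over draws. The function and argument names are illustrative, not from the actual post-processing code.

```python
# Hedged sketch of the "lookback time to maximum SFR" map statistic.
# `draws` is a list of per-draw SFR vectors; `ages_myr` gives the age grid.
import math

def log_age_of_max_sfr(draws, ages_myr):
    """Posterior mean of log10(age/Myr) at which the SFR peaks."""
    logs = []
    for sfr in draws:
        i = max(range(len(sfr)), key=lambda j: sfr[j])  # per-draw argmax
        logs.append(math.log10(ages_myr[i]))
    return sum(logs) / len(logs)
```

So a returned value of 1 corresponds to the 10 Myr (i.e. ≈ now) age bin, matching the scale of the map above.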

To summarize, there have been three phases in the star formation history of this system: a long period of normal disk galaxy evolution; next beginning about 1 Gyr ago widespread acceleration of the SFR with localized bursts; and now, within the past 50-100 Myr a rapid increase in star formation that’s centrally concentrated in the two nuclei (but mostly the northern one) while the peripheral regions have had suppressed activity.

I previously reviewed some of the recent literature on numerical studies of galaxy mergers. The high resolution simulations of Hopkins et al. (2013) and the movies available online at http://www.tapir.caltech.edu/~phopkins/Site/Movies_sbw_mgr.html seem especially relevant, particularly their Milky Way and Sbc analogs. They predict a general but clumpy enhancement of star formation at around the time of first pericenter passage; a stronger, centrally concentrated starburst at the time of second pericenter passage; and a shorter period of separation before coalescence. Surprisingly perhaps, the merger timescales for both their MW and Sbc analogs are very similar to my SFH models, with ∼ 1 Gyr between first and second perigalactic passage and another few tens of Myr to final coalescence.

I’m going to wrap up with maps of the (posterior mean estimates) of star formation rate density (as \(\log_{10}(\mathsf{M_\odot/yr/kpc^2})\)) and specific star formation rate, which has units \(\log_{10}(\mathrm{yr}^{-1})\). These are 100 Myr averages and recall are based solely on the SSP template contributions.

mrk848_sfr_ssfr
(L) Star formation rate density (R) Specific star formation rate

Several recent papers have made quantitative estimates of star formation rates in this system, which I’ll briefly review. Detailed comparisons are somewhat difficult because both the timescales probed by different SFR calibrators differ and the spatial extent considered in the studies varies, so I’ll just summarize reported results and compare as best as I can.

Yuan et al. (2018) modeled broad band UV-IR SEDs taking data from GALEX, SDSS, and Spitzer; and also did full spectrum fitting using ppxf on the same MaNGA data I’ve been examining. They divided the data into 5″ (≈ 4 kpc) radius regions covering the tidal tails and each nucleus. The broadband data were fit with parametric SFH models with a pair of exponential bursts (one fixed at 13 Gyr, the other allowed to vary). From the parametric models they estimated the SFR in the northern and southern nuclei as 76 and 11 \(\mathsf{M_\odot/yr}\) (the exact interpretation of this value is unclear to me). For comparison I get values of 33 and 13 \(\mathsf{M_\odot/yr}\) for regions within 4 kpc of the two nuclei by calculating the average (posterior mean) star forming density and multiplying by π × 4² kpc². They also calculated 100 Myr average SFRs from full spectrum fitting with several different dust models and also from the Hα luminosity, deriving estimates that varied by an order of magnitude or more. Qualitatively we reach similar conclusions: the tails had earlier starbursts and are now forming stars at rates below their peak, and the northern nucleus has recent star formation a factor of several higher than the southern.

Cluver et al. (2017) were primarily trying to derive SFR calibrations for WISE W3/W4 filters using samples of normal star forming and LIRG/ULIRGs (including this system) with Spitzer and Herschel observations. Oddly, although they tabulate IR luminosities for the entire sample they don’t tabulate SFR estimates. But plugging into their equations 5 and 7 I get star formation rate estimates of just about 100 \(\mathsf{M_\odot/yr}\). These are global values for the entire system. For comparison I get a summed SFR for my models of ≈ 45 \(\mathsf{M_\odot/yr}\) (after making a 0.2 dex adjustment for fiber overlap).

Tsai and Hwang (2015) also used IR data from Spitzer and conveniently present results for quantities I track using the same units. Their estimates of the (log) stellar mass density, SFR density, and specific star formation rate in the central kpc of (presumably) the northern nucleus were 9.85±0.09, 0.43±0.00, and -9.39±0.09 respectively. For comparison in the fiber that covers the northern nucleus my models return 9.41±0.02, 0.57±0.04, and -8.84±0.04. For the 6 fibers closest to the nucleus the average stellar mass density drops to 9.17 and SFR density to about 0.39. So, our SFR estimates are surprisingly close while my mass density estimate is as much as 0.7 dex lower.

Finally, Vega et al. (2008) performed starburst model fits to broadband IR-radio data. They estimated a burst age of about 60 Myr for this system, with an average SFR over the burst of 225 \(\mathsf{M_\odot/yr}\) and current (last 10 Myr) SFR of 87 \(\mathsf{M_\odot/yr}\) in a star-forming region of radius 0.27 kpc. Their model has an optical depth of 33 at 1 μm, which would make their putative starburst completely invisible at optical wavelengths. Their calculations assumed a Salpeter IMF, which would add ≈ 0.2 dex to stellar masses and star formation rates compared to the Kroupa IMF used in my models.

Overall I find it encouraging that my model SFR estimates are no worse than factors of several lower than what are obtained from IR data — if the Vega+ estimate of the dust optical depth is correct most of the star formation is well hidden. Next time I plan to look at Hα emission and tie up other loose ends. If time permits before I have to drop this for a while I may look at two component dust models.

Revisiting the Baryonic Tully-Fisher relation… – Part 3

This post has been languishing in draft form for well over a month thanks to travel and me losing interest in the subject. I’m going to try to get it out of the way quickly and move on.

Last time I noted the presence of apparent outliers in the relationship between stellar mass and rotation velocity and pointed out that most of them are due to model failures of various sorts rather than “cosmic variance.” That would seem to suggest the need for some sample refinement, and the question then becomes how to trim the sample in a way that’s reproducible.

An obvious comment is that all of the outliers fall below the general trend and (less obviously perhaps) most have very large posterior uncertainties as well. This suggests a simple selection criterion: remove the measurements which have a small ratio of posterior mean to posterior standard deviation of rotation velocity. Using the asymptotic circular velocity v_c in the atan mean function and setting the threshold to 3 standard deviations the sample members that are selected for removal are circled in red below. This is certainly effective at removing outliers but it’s a little too indiscriminate — a number of points that closely follow the trend are selected for removal and in particular 19 out of 52 points with stellar masses less than \(10^{9.5} M_\odot\) are selected. But, let’s look at the results for this trimmed sample.

lgm_logvc_circled_bad_2ways
Posterior distribution of asymptotic velocity `v_c` vs stellar mass. Circled points have posterior mean(v_c)/sd(v_c) < 3.
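In code the trimming rule is trivial; here is a sketch with illustrative field names (not the actual catalog columns) — drop any galaxy whose posterior mean v_c is less than 3 posterior standard deviations from zero:

```python
# Sketch of the outlier-trimming rule described above. Field names
# ("vc_mean", "vc_sd") are illustrative, not from the actual data structures.

def trim_sample(galaxies, threshold=3.0):
    """Keep entries with posterior mean(v_c) / sd(v_c) >= threshold."""
    return [g for g in galaxies if g["vc_mean"] / g["vc_sd"] >= threshold]
```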

Again I model the joint relationship between mass and circular velocity using my Stan implementation of Bovy, Hogg, and Roweis’s “Extreme deconvolution” with the simplification of assuming gaussian errors in both variables. The results are shown below for both circular velocity fiducials. Recall from my previous post on this subject the dotted red ellipse is a 95% joint confidence interval for the intrinsic relationship while the outer blue one is a 95% confidence ellipse for repeated measurements. Compared to the first time I performed this exercise the former ellipse is “fatter,” indicating more “cosmic variance” than was inferred from the earlier model. I attribute this to a better and more flexible model. Notice also the confidence region for repeated measurements is tighter than previously, reflecting tighter error bars for model posteriors.

tf_subset1
Joint distribution of stellar mass and velocity by “Extreme deconvolution.” Inner ellipse: 95% joint confidence region for the intrinsic relationship. Outer ellipse: 95% confidence ellipse for new data. Top: Asymptotic circular velocity v_c. Bottom: Circular velocity at 1.5 r_eff.

Now here is something of a surprise: below are the model results for the full sample compared to the trimmed one. The red and yellow ellipses are the estimated intrinsic relations using the full and trimmed samples, while green and blue are for repeated measurements. The estimated intrinsic relationships are nearly identical despite the many outliers. So, even though this model wasn’t formulated to be “robust” as the term is usually understood in statistics, in practice it is, at least as regards the important inferences in this application.

tf_alldr15
Joint distribution of stellar mass and velocity by “Extreme deconvolution” (complete sample). Inner ellipse: 95% joint confidence region for the intrinsic relationship. Outer ellipse: 95% confidence ellipse for new data. Top: Asymptotic circular velocity v_c. Bottom: Circular velocity at 1.5 r_eff.

Finally the slope, that is the exponent in the stellar mass Tully-Fisher relationship \(M^* \sim V_{rot}^\gamma\) is estimated as the (inverse of) slope of the major axis of the inner ellipses in the above plots. The posterior mean and 95% marginal confidence intervals for the two velocity measures and both samples are:

v_c (subset 1) \(4.81^{+0.28}_{-0.25}\)

v_c (all) \(4.81^{+0.28}_{-0.25}\)

v_r (subset 1) \(4.36^{+0.23}_{-0.20}\)

v_r (all) \(4.33^{+0.23}_{-0.21}\)
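The geometry behind these numbers can be sketched in a few lines: for a symmetric 2×2 covariance in (log M*, log V) the major-axis slope dlogV/dlogM* has a closed form, and the exponent γ in \(M^* \sim V_{rot}^\gamma\) is its inverse. This is my reconstruction of the calculation, not the actual analysis code:

```python
# Sketch: read the Tully-Fisher exponent off the deconvolved covariance
# ellipse. For a symmetric 2x2 covariance [[a, b], [b, c]] of
# (log M*, log V), the principal eigenvector gives the major-axis slope.
import math

def major_axis_slope(cov):
    """Slope of the principal eigenvector of a symmetric 2x2 covariance."""
    (a, b), (_, c) = cov
    if b == 0.0:
        return 0.0 if a >= c else float("inf")  # axis-aligned degenerate cases
    lam = 0.5 * (a + c) + math.sqrt((0.5 * (a - c)) ** 2 + b * b)
    return (lam - a) / b  # eigenvector proportional to (b, lam - a)

def tf_exponent(cov):
    """gamma in M* ~ V^gamma: inverse of the major-axis slope dlogV/dlogM*."""
    return 1.0 / major_axis_slope(cov)
```

As a sanity check, a covariance generated by an exact relation log V = 0.25 log M* recovers γ = 4.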

Does this suggest some tension with the value of 4 determined by McGaugh et al. (2000)? Not necessarily. For one thing this is properly an estimate of the stellar mass – velocity relationship, not the baryonic one. Generally lower stellar mass galaxies will have higher gas fractions than high stellar mass ones, so a proper accounting for that would shift the slope towards lower values. Also, as can be seen here, both the choice of fiducial velocity and the analysis method matter. This has been discussed recently in some detail by Lelli et al. (2019) (these two papers have two authors in common).

Next time, back to star formation history modeling.

Revisiting the Baryonic Tully-Fisher relation… – Part 2

Last time I left off with the remark that while most of the sample of disk galaxies clearly exhibits a tight relationship between circular velocity and stellar mass, there are some apparent outliers as well. While some “cosmic variance” is expected most of the apparent outliers are due to model failures, which have several possible causes:

  1. Violation of the physical assumptions of the model, namely that the stars and gas are rotating (together) in the plane of a thin disk that’s moderately inclined to our line of sight (see my original post on this topic).
  2. Errors in the photometry. I use two photometric quantities (specifically nsa_elpetro_ba and nsa_elpetro_phi) from the MaNGA DRPALL catalog to set priors for the kinematic parameters cos_i and phi (cosine of the disk inclination and position angle of the receding side) and also to initialize the sampler. Since proper and in practice fairly informative priors are required for these parameters, errors here will propagate into the models, sometimes in ways that are fatal. I’ll look in more detail at some examples below.
  3. Bad velocity data.
  4. Not enough data.
  5. Sampler convergence failures with no obvious cause.

The first two bullet points are closely related: most of the failures to satisfy the physical assumptions are directly related to errors in the photometric decompositions. One fairly common failure was galaxies that were too nearly face on to obtain reliable rotation curves. As an example here is the galaxy with the lowest estimated rotation velocity in the sample, mangaid 1-135054 (plateifu 8550-12703):

8550-12703_vf_vrot
Mangaid 1-135054 (plateifu 8550-12703). (L) Measured velocity field and (R) posterior predictive estimate of circular velocity with 95% confidence band.

Besides showing no sign of rotation, the velocity field hints at possible large scale, low velocity outflow in the central region. There are also a few apparent outliers, although these had little effect on model results. Fortunately the model output gives us plenty of clues that the results shouldn’t be trusted. The median circular velocity estimate is unrealistically low with very large posterior uncertainty (above right), while the posterior marginal density for cos_i has a mode near 1 and also very large uncertainty (below).

8550-12703_post_cosi
Mangaid 1-135054 (plateifu 8550-12703). Posterior distribution of cosine of disk inclination.
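Why a cos_i posterior piled up near 1 is a red flag can be seen directly from the deprojection: the line-of-sight velocity scales as sin(i), so the inferred circular velocity blows up as the disk approaches face-on. A quick illustration (the projected amplitude here is made up, not the measured one):

```python
import numpy as np

# Deprojected circular velocity is roughly v_los / sin(i); as cos_i -> 1,
# sin_i -> 0 and small changes in cos_i produce huge changes in v_c.
v_los = 20.0  # km/s; illustrative projected amplitude
deprojected = {c: v_los / np.sqrt(1.0 - c**2) for c in (0.90, 0.95, 0.99)}
for c, v in deprojected.items():
    print(f"cos_i = {c:.2f} -> v_c = {v:.0f} km/s")
```

A 10% shift in cos_i changes the implied circular velocity by a factor of a few, which is why nearly face-on disks yield essentially unconstrained rotation curves.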

Zooming out a bit on SDSS imaging gives a likely explanation for the peculiar velocity field. The elliptical galaxy just to the NW has nearly the same redshift (the velocity difference is ∼75 km/sec) and is almost certainly interacting with our target.

MaNGA target and companion; credit SDSS

A key assumption in using photometric properties as proxies for kinematic quantities is that disk galaxies have intrinsically circular surface brightness profiles. This is never quite the case in practice and sometimes morphological features like strong bars can make this assumption catastrophically wrong. Here was perhaps the most extreme example in DR14:

mangaid 1-185287 (plateifu 8252-12704). SDSS thumbnail with IFU overlay
8252-12704_vf
mangaid 1-185287 (plateifu 8252-12704) Measured velocity field from stacked RSS spectra

The photometric major axis angle was estimated to be 98.4°, that is, just south of east, while the position angle of the maximum recession velocity is close to due south. When I first examined this velocity field I had a hard time reconciling it with rotation in a thin disk. That was before I learned how to do Voronoi binning, though. The image below shows the binned velocity field (with a target S/N of 6.25): the relative velocity does increase along a roughly north-to-south line, indicating that this is indeed a rotating disk galaxy.

8252-12704_vf_binned
mangaid 1-185287 (plateifu 8252-12704) Measured velocity field from binned RSS spectra. Black arrow indicates major axis position angle from photometry. Gray arrow: position angle of receding side from velocity model with prior guess of 180°
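The criterion Voronoi binning (Cappellari &amp; Copin 2003) uses to grow bins is simple; this sketch shows just the S/N combination rule, not the full bin-accretion algorithm:

```python
import numpy as np

# For spaxels with independent noise, the combined S/N of a bin is
# sum(signal) / sqrt(sum(noise**2)); bins are grown until this reaches
# the target (6.25 for the velocity field above).
def binned_sn(signal, noise):
    signal, noise = np.asarray(signal), np.asarray(noise)
    return signal.sum() / np.sqrt((noise**2).sum())

# Illustrative: four faint spaxels, each with S/N = 3.2, combine to
# S/N = 4 * 3.2 / sqrt(4) = 6.4, just past the target.
signal = np.full(4, 3.2)
noise = np.ones(4)
print(binned_sn(signal, noise))
```

In practice I use the published binning code rather than rolling my own; the point is only that a handful of low-S/N spaxels suffices to reach the modest target used here.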

I mentioned in an earlier post that without a proper prior on the kinematic position angle phi these models are inherently multi-modal (in fact they are infinitely modal and would therefore have improper posteriors). The solution to that, of course, is to have a proper prior. But if the prior is seriously in error, the posterior estimates for the components of the velocity will end up scrambled, as can be seen in the top row of the graph below, which shows the posterior distributions of the circular and “expansion” velocities. (Remember that the photometric position angle is determined modulo π while the direction to the maximum recession velocity is measured modulo 2π. We don’t care about a π radian error in the prior, though, because that just flips the signs of the velocity components, which causes no sampling issues and is trivially fixable in the generated quantities block. It’s smaller errors that cause problems.)
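The harmlessness of a π error can be seen from a toy version of the line-of-sight velocity model (a sketch; the actual model also depends on radius, and the parameter names are illustrative):

```python
import numpy as np

# Thin-disk line-of-sight velocity with rotation and a radial
# "expansion" term, as a function of azimuthal angle theta.
def v_los(theta, phi, v_rot, v_exp, sin_i):
    return sin_i * (v_rot * np.cos(theta - phi) + v_exp * np.sin(theta - phi))

theta = np.linspace(0.0, 2.0 * np.pi, 100)
a = v_los(theta, phi=0.5, v_rot=150.0, v_exp=10.0, sin_i=0.7)
b = v_los(theta, phi=0.5 + np.pi, v_rot=-150.0, v_exp=-10.0, sin_i=0.7)

# Shifting phi by pi negates both the cosine and sine factors, so flipping
# the signs of v_rot and v_exp reproduces the model exactly: the likelihood
# is unchanged, and the flip is fixable in generated quantities.
assert np.allclose(a, b)
```

An intermediate error in phi, by contrast, has no compensating reparameterization, so the sampler mixes the rotation and expansion components together.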

The obvious solution to a bad prior is to correct it (using the data you’re trying to model to establish a prior is, technically, cheating; the polite term is “Empirical Bayes”, but this seems a relatively benign use of the data), which is easy enough. The bottom row shows the results of re-running the model with a prior on phi centered on 180° and the same data. Now both the circular and expansion velocity curves are at least plausible. The posterior mean of phi is ≈185°, which is very close to correct, as can be seen in the binned velocity field shown above.

8252-12704_rot_exp_guessphi_guess180
mangaid 1-185287 (plateifu 8252-12704) Top row: model rotation (L) and expansion (R) velocities with prior for major axis angle taken from photometry (phi = 98.4°). Bottom row: same but with prior phi = 180°.

One more example. Bars were the most common cause of misleading photometric decompositions, but not the only one. Some galaxies are just asymmetrical. Here is the one that had the largest offset between the photometric and kinematic position angles:

mangaid 1-201291 (plateifu 8145-6103) – SDSS thumbnail with IFU footprint overlay

And the velocity field (again this agrees well with the Marvin measurements of the data cube):

8145-6103_velocityfield
mangaid 1-201291 (plateifu 8145-6103). Velocity field from stacked RSS spectra with major axis angle from nsa_elpetro_phi

This time the velocity field looks unremarkable, but again, because of the prior, the estimated circular and expansion velocities are scrambled together. And once again, changing the prior to be centered on the approximate actual position angle of the receding side produces reasonable estimates for both:

mangaid 1-201291 (plateifu 8145-6103).
mangaid 1-201291 (plateifu 8145-6103). Top: Posterior predictive distributions of circular and expansion velocity with prior on `phi` from photometry. Bottom: Same with prior centered on -10°

Next up, I’ll take a more holistic look at final sample selection, and maybe get to results.

Revisiting the Baryonic Tully-Fisher relation with DR15 data

As I mentioned in the previous two posts, SDSS Data Release 15 went public back in December, and a query for “normal” disk galaxies as judged by Galaxy Zoo 2 classifiers returned 588 hits. I’ve finally run the GP velocity models on all of the new data and made a second run on around 40 that were contaminated by foreground stars or neighboring galaxies. So far I haven’t found an alternative to selecting these by eye and doing the masking manually, so that’s an error-prone process. (The month-plus gap between postings, by the way, was due to travel; my computer wasn’t grinding away on these models for all that time.) As I mentioned last post, the sampling properties, including execution time, of the GP model with arctangent mean function are usually quite favorable using Stan. The median wall time for these runs was about a minute, with a range from 25 to 1600 seconds. All model runs used 500 warmup iterations and 500 post-warmup, with 4 chains run in parallel. This is more than enough for inference.

Before I discuss the results I’ll show them. As I did for the first pass at this way back in July, I retrieved stellar mass and uncertainty estimates made by the MPA-JHU group from CasJobs; all but a handful of the galaxies also have mass estimates from the Wisconsin group. I may look at those later but don’t anticipate any very significant differences.

There are now at least two plausible choices for a reference circular velocity. The first is the velocity at a fiducial radius; again I choose 1.5 effective radii, since the MaNGA IFUs are meant to cover out to that radius in the primary sample. The other obvious choice is the asymptotic velocity v_c in the arctangent mean function. This seems in principle the better choice, since it estimates the circular velocity in the flat part of the rotation curve, but in some cases it may be a considerable extrapolation from the actual data.

Both sets of results are shown below for all model runs that ran to completion (N = 582). Plotted are the median, 2.5, and 97.5 percentiles (≈ ±2σ) of posterior predictions for the log circular velocity at 1.5 r_eff (top graph) and the same quantiles of the posteriors of the parameter v_c (bottom graph). These are plotted against the median, 16, and 84 percentiles (≈ ±1σ) of the total stellar mass estimates per the MPA-JHU group.

logvr_mstar_dr15
Estimated rotation velocity at 1.5 effective radii vs. stellar mass estimate from MPA-JHU models
logvc_mstar_dr15
Estimated asymptotic rotation velocity against stellar mass from MPA-JHU models. Vertical error bars mark the 2.5 and 97.5 percentiles of the model posteriors of (log) velocity in km/sec. Horizontal error bars mark the 16 and 84 percentiles of the model posteriors of (log) stellar mass.
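For posteriors that are roughly normal, the quantile-to-σ correspondences quoted above are a quick numpy check away:

```python
import numpy as np

# For a normal posterior, the (2.5, 97.5) percentiles bracket about +/- 2
# sigma (more precisely +/- 1.96 sigma) and (16, 84) bracket about +/- 1 sigma.
rng = np.random.default_rng(1)
draws = rng.normal(0.0, 1.0, 1_000_000)
lo, med, hi = np.percentile(draws, [2.5, 50.0, 97.5])
print(round(lo, 2), round(med, 2), round(hi, 2))
```

For skewed posteriors (like cos_i piled up near 1 above) the percentiles remain well defined while the ±σ reading does not, which is one reason to report quantiles in the first place.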

Evidently most of the sample follows a tight linear relationship with either measure of circular velocity, but there are some apparent outliers as well. I’m feeling a bit blocked right now, so I’ll end the post here. Next time I’ll look at some of the causes of model failure, what to do about them, and get to the results.