Here’s an SDSS finder chart image of one of the two grand design spirals that found its way into my “transitional” galaxy sample:
Central galaxy: Mangaid 1-382712 (plateifu 9491-6101), aka CGCG 088-005
and here’s a zoomed in thumbnail with IFU footprint:
plateifu 9491-6101 IFU footprint
This galaxy is in a compact group and slightly tidally distorted by interaction with its neighbors, but is otherwise a fairly normal star forming system. I picked it because I had a recent set of model runs and because it binned to a manageable but not too small number of spectra (112). The fits to the data in the first set of model runs were good and the (likely) AGN doesn’t have broad lines. Here are a few selected results from the set of model runs using the modified Calzetti attenuation relation on the same binned spectra. First, using a tight prior on the slope parameter δ had the desired effect of returning the prior for δ when τ was near zero, while the marginal posterior for τ was essentially unchanged:
Estimated optical depth for modified Calzetti attenuation vs. unmodified. MaNGA plateifu 9491-6101
At larger optical depths the data do constrain both the shape of the attenuation curve and the optical depth. At low optical depth the posterior uncertainty in δ is about the same as the prior’s, while it decreases more or less monotonically for higher values of τ. The (posterior mean) values of δ range from slightly shallower than a Calzetti relation to somewhat steeper. The general trend is toward a steeper relation with lower optical depth in the dustier regions (per the models) of the galaxy.
Posterior marginal standard deviation of parameter δ vs. posterior mean optical depth. MaNGA plateifu 9491-6101
There’s an interesting pattern of correlations here (astronomers like to call these “degeneracies,” and it’s fairly well known that they exist among attenuation, stellar age, stellar mass, and other properties), some of which are summarized in the sequence of plots below. The main result is that a steeper (shallower) attenuation curve requires a smaller (larger) optical depth to create a fixed amount of reddening, so there’s a negative correlation between the slope parameter δ and the change in optical depth between the modified and unmodified curves. A lower optical depth means that a smaller amount of unattenuated light, and therefore a lower stellar mass, is needed to produce a given observed flux, so there’s a negative correlation between the slope parameter and stellar mass density, or equivalently a positive correlation between optical depth and stellar mass. The star formation rate density is correlated in the same sense but slightly more weakly. In this data set both changed by less than about ±0.05 dex.
(TL) Change in optical depth (modified − unmodified Calzetti) vs. slope parameter δ.
(TR) Change in stellar mass density vs. δ.
(BL) Change in stellar mass density vs. change in optical depth.
(BR) Change in SFR density vs. change in optical depth.
Note: all quantities are marginal posterior mean estimates.
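The reddening trade-off is easy to verify numerically. Here’s a toy calculation (it uses a simple power-law stand-in for the actual attenuation curve, and takes 4400Å as the B band effective wavelength; both are my simplifications for illustration) showing that matching a fixed amount of reddening with a steeper curve requires a lower optical depth:

```python
# Toy attenuation curve: k(lambda) = y = 5500/lambda, normalized to 1 at V.
# (A stand-in for the fitted Calzetti curve; close enough to show the effect.)
yB = 5500.0 / 4400.0   # y at the assumed B band effective wavelength

def reddening(tauv, delta):
    """tau(B) - tau(V) for the modified curve k(lambda) * y**delta."""
    return tauv * (yB ** (1.0 + delta) - 1.0)

# Fix the amount of reddening produced by the unmodified curve at tau_V = 1,
# then solve for the tau_V that reproduces it with a steeper curve (delta = +0.3)
target = reddening(1.0, 0.0)
tauv_steep = target / (yB ** (1.0 + 0.3) - 1.0)
```

The steeper curve needs roughly a quarter less optical depth to produce the same τ(B) − τ(V), which is the negative correlation seen in the top left panel.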
Here are a few relationships that I’ve shown previously. First between stellar mass and star formation rate density. The points with error bars (which are 95% marginal credible limits) are from the modified Calzetti run, while the red points are from the original. Regions with small stellar mass density and low star formation rate have small attenuation as well in this system, so the estimates hardly differ at all. Only at the high end are differences about as large as the nominal uncertainties.
SFR density vs. stellar mass density. Red points are point estimates for unmodified Calzetti attenuation. MaNGA plateifu 9491-6101
Finally, here is the relation between Hα luminosity density and star formation rate with the former corrected for attenuation using the Balmer decrement. The straight line is, once again, the calibration of Moustakas et al. Allowing the shape of the attenuation curve to vary has a larger effect on the luminosity correction than it does on the SFR estimates, but both sets of estimates straddle the line with roughly equal scatter.
SFR density vs. Hα luminosity density. Red points are point estimates for unmodified Calzetti attenuation.
MaNGA plateifu 9491-6101
To conclude for now, adding the more flexible attenuation prescription proposed by Salim et al. has some quantitative effects on model posteriors, but so far at least qualitative inferences aren’t significantly affected. I haven’t yet looked in detail at star formation histories or at effects on metallicity or metallicity evolution. I’ve been skeptical that SFH modeling constrains stellar metallicity or (especially) metallicity evolution well, but perhaps it’s time to take another look.
Here’s a problem that isn’t a huge surprise but that I hadn’t quite anticipated. I initially chose, without a lot of thought, a prior \(\delta \sim \mathcal{N}(0, 0.5)\) for the slope parameter delta in the modified Calzetti relation from the last post. This seemed reasonable given Salim et al.’s result showing that most galaxies have slopes \(-0.1 \lesssim \delta \lesssim +1\) (my wavelength parameterization reverses the sign of \(\delta\)). At the end of the last post I made the obvious comment that if the modeled optical depth is near 0 the data can’t constrain the shape of the attenuation curve, and in that case the best that can be hoped for is for the model to return the prior for delta. Unfortunately the actual model behavior was more complicated than that for a spectrum that had a posterior mean tauv near 0:
Pairs plot of parameters tauv and delta, plus summed log likelihood with “loose” prior on delta.
Although there weren’t any indications of convergence issues in this run, experts in the Stan community often warn that “banana” shaped posteriors like the joint distribution of tauv and delta above are difficult for the HMC algorithm in Stan to explore. I also suspect the distribution is multi-modal and at least one mode was missed, since the fit to the flux data as indicated by the summed log-likelihood is slightly worse than for the unmodified Calzetti model.
A value of \(\delta\) this small actually reverses the slope of the attenuation curve, making it larger in the red than in the blue. It also resulted in a stellar mass estimate about 0.17 dex larger than the original model, which is well outside the statistical uncertainty:
Stellar mass estimates for loose and tight priors on delta + unmodified Calzetti attenuation curve
I next tried a tighter prior on delta, with a scale parameter of 0.1, with the following result:
Pairs plot of parameters tauv and delta, plus summed log likelihood with “tight” prior on delta.
Now this is what I hoped to see. The marginal posterior of delta almost exactly returns its prior, properly reflecting the inability of the data to say anything about it. The posterior of tauv looks almost identical to the original model with a very slightly longer tail:
Pairs plot of parameters tauv and summed log likelihood, unmodified Calzetti attenuation curve
So this solves one problem but now I must worry that the prior is too tight in general since Salim’s results predict a considerably larger spread of slopes. As a first tentative test I ran another spectrum from the same spiral galaxy (this is mangaid 1-382712) that had moderately large attenuation in the original model (1.08 ± 0.04) with both “tight” and “loose” priors on delta with the following results:
Joint posterior of parameters tau and delta with “tight” prior
Joint posterior of parameters tau and delta with “loose” prior
The distributions of tauv look nearly the same, while delta shrinks very slightly towards 0 with a tight prior, but with almost the same variance. Some shrinkage towards Calzetti’s original curve might be OK. Anyway, on to a larger sample.
I finally got around to reading a paper by Salim, Boquien, and Lee (2018), who proposed a simple modification to Calzetti’s well known starburst attenuation relation that they claim accounts for most of the diversity of curves in both star forming and quiescent galaxies. For my purposes their equation 3, which summarizes the relation, can be simplified in two ways. First, for mostly historical reasons optical astronomers typically quantify the effect of dust with a color excess, usually E(B-V). If the absolute attenuation is needed, which is certainly the case in SFH modeling, the ratio of absolute to “selective” attenuation, usually written as \(\mathrm{R_V = A_V/E(B-V)}\), is required as well. Parameterizing attenuation by color excess adds an unnecessary complication for my purposes. Salim et al. use a family of curves that differ only in a “slope” parameter \(\delta\) in their notation. Changing \(\delta\) changes \(\mathrm{R_V}\) according to their equation 4. But I have always parametrized dust attenuation by the optical depth at V, \(\tau_V\), so that the relation between intrinsic and observed flux is

\(F_{obs}(\lambda) = F_{int}(\lambda)\, \mathrm{e}^{-\tau_V k(\lambda)}\)
Note that parameterizing by optical depth rather than total attenuation \(\mathrm{A_V}\) is just a matter of taste since they only differ by a factor of 1.086. The wavelength dependent part of the relationship is the same.
The second simplification results from the fact that the UV or 2175Å “bump” in the attenuation curve will never be redshifted into the spectral range of MaNGA data, and in any case the subset of the EMILES library I currently use doesn’t extend that far into the UV. That removes the bump amplitude parameter and the second term in Salim et al.’s equation 3, reducing it to the form

\(\tau_\lambda = \tau_V \, k(\lambda) \, (5500/\lambda)^{\delta}\)

with \(\lambda\) in Å.
The published expression for \(k(\lambda)\) in Calzetti et al. (2000) is given in two segments with a small discontinuity due to rounding at the transition wavelength of 6300Å. This produces a small unphysical discontinuity when applied to spectra, so I just made a polynomial fit to the Calzetti curve over a longer wavelength range than gets used for modeling MaNGA or SDSS data. Also, I make the wavelength parameter \(y = 5500/\lambda\) instead of using the wavelength in microns as in Calzetti. With these adjustments Calzetti’s relation is, to more digits than necessary:
Note that δ has the opposite sign as in Salim et al. Here is what the curve looks like over the observer frame wavelength range of MaNGA. A positive value of δ produces a steeper attenuation curve than Calzetti’s, while a negative value is shallower (grayer in common astronomers’ jargon). Salim et al. found typical values to range between about -0.1 and +1.
Calzetti attenuation relation with modification proposed by Salim, Boquien, and Lee (2018). Dashed lines show shift in curve for their parameter δ = ± 0.3.
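A polynomial fit like the one described above is easy to reproduce from the published two-segment curve. This sketch uses numpy with a degree 4 polynomial in y = 5500/λ; the degree and wavelength range are my choices for illustration, not necessarily the ones used for the models:

```python
import numpy as np

RV = 4.05  # Calzetti et al. (2000)

def k_published(lam_um):
    """Two-segment Calzetti (2000) curve with its small discontinuity at 0.63 um."""
    lam = np.asarray(lam_um, dtype=float)
    red = 2.659 * (-1.857 + 1.040 / lam) + RV
    blue = 2.659 * (-2.156 + 1.509 / lam - 0.198 / lam**2 + 0.011 / lam**3) + RV
    return np.where(lam >= 0.63, red, blue)

lam = np.linspace(0.30, 1.00, 2000)            # microns; wider than the MaNGA range
y = 0.55 / lam                                  # same as y = 5500/lambda in Angstroms
k_norm = k_published(lam) / k_published(0.55)   # normalize to 1 at V

coef = np.polyfit(y, k_norm, 4)                 # polynomial degree is my guess
k_fit = np.polyval(coef, y)
max_err = float(np.max(np.abs(k_fit - k_norm)))
```

The single polynomial smooths over the rounding discontinuity at 6300Å while staying within a few percent of the published curve everywhere in the fitted range.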
For a first pass attempt at modeling some real data I chose a spectrum from near the northern nucleus of Mrk 848 but outside the region of the highest velocity outflows. This spectrum had large optical depth \(\tau_V \approx 1.52\) and the unusual nature of the galaxy gave reason to think its extinction curve might differ significantly from Calzetti’s.
Encouragingly, the Stan model converged without difficulty and with an acceptable run time. Below are some posterior plots of the two attenuation parameters and a comparison to the optical depth parameter in the unmodified Calzetti dust model. I used a fairly informative prior of Normal(0, 0.5) for the parameter delta. The data actually constrain the value of delta, since its posterior marginal density is centered around -0.06 with a standard deviation of just 0.02. In the pairs plot below we can see there’s some correlation between the posteriors of tauv and delta, but not so much as to be concerning (yet).
(TL) marginal posteriors of optical depth parameter for Calzetti (red) and modified (dark gray) relation.
(TR) parameter δ in modified Calzetti relation
(BL) pairs plot of `tauv` and `delta`
(BR) trace plots of `tauv` and `delta`
Overall the modified Calzetti model favors a slightly grayer attenuation curve with lower absolute attenuation:
Total attenuation for original and modified Calzetti relations. Spectrum was randomly selected near the northern nucleus of Mrk 848.
Here’s a quick look at the effect of the model modification on some key quantities. In the plots below the red symbols are for the unmodified Calzetti attenuation model, and gray or blue for the modified one. These show histograms of the marginal posterior density of total stellar mass, 100 Myr averaged star formation rate, and specific star formation rate. Because the modified model has lower total attenuation it needs fewer stars, so the lower stellar mass (by ≈ 0.05 dex) is a fairly predictable consequence. The star formation rate is also lower by a similar amount, making the estimates of specific star formation rate nearly identical.
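The size of the mass shift follows directly from the magnitude arithmetic: at fixed observed flux, a change in effective V-band attenuation of ΔA_V magnitudes shifts the inferred mass by ΔA_V/2.5 dex. A one-line check (the 0.13 mag figure is just an illustrative number of about the right size, not a model output):

```python
# Fixed observed flux: a drop in effective V-band attenuation (in magnitudes)
# maps directly to a drop in log stellar mass. The 0.13 mag change is an
# illustrative number, not taken from the model output.
delta_AV = 0.13
delta_log_mass = delta_AV / 2.5   # magnitudes -> dex
# ~0.05 dex, the size of the shift seen in the histograms
```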
The lower right pane compares model mass growth histories. I don’t have any immediate intuition about how the difference in attenuation models affects the SFH models, but notice that both show recent acceleration in star formation, which was a main theme of my Markarian 848 posts.
Stellar mass, SFR, SSFR, and mass growth histories for original and modified Calzetti attenuation relations.
So, this first run looks ok. Of course the problem with real data is there’s no way to tell if a model modification actually brings it closer to reality — in this case it did improve the fit to the data a little bit (by about 0.2% in log likelihood) but some improvement is expected just from adding a parameter.
My concern right now is that if the dust attenuation is near 0 the data can’t constrain the value of δ. The best that can happen in this situation is for the model to return the prior. Preliminary investigation of a low attenuation spectrum (per the original model) suggests that in fact a tighter prior on delta is needed than what I originally chose.
One more post making use of the measurement error model introduced last time and then I think I’ll move on. I estimate the dust attenuation of the starlight in my SFH models using a Calzetti attenuation relation parametrized by the optical depth at V (τV). If good estimates of Hα and Hβ emission line fluxes are obtained we can also make optical depth estimates of emission line regions. Just to quickly review the math we have:
\(A_\lambda = \mathrm{e}^{-\tau_V k(\lambda)}\)
where \(k(\lambda)\) is the attenuation curve normalized to 1 at V (5500Å) and \(A_\lambda\) is the fractional flux attenuation at wavelength λ. Assuming an intrinsic Balmer decrement of 2.86, which is more or less the canonical value for H II regions, the estimated optical depth at V from the observed fluxes is:

\(\tau_V^{bd} = \ln\left[\frac{F_{H\alpha}/F_{H\beta}}{2.86}\right] \Big/ \left[k(\lambda_{H\beta}) - k(\lambda_{H\alpha})\right]\)
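In code, using the published Calzetti curve as a stand-in for my polynomial fit, the Balmer decrement estimate looks like this (wavelengths 6563Å and 4861Å; fluxes in any common units):

```python
import math

RV = 4.05  # Calzetti et al. (2000)

def k_calzetti(lam_ang):
    """Published Calzetti (2000) curve, normalized to 1 at V (5500 A)."""
    lam = lam_ang / 1e4  # microns
    if lam >= 0.63:
        k = 2.659 * (-1.857 + 1.040 / lam) + RV
    else:
        k = 2.659 * (-2.156 + 1.509 / lam - 0.198 / lam**2 + 0.011 / lam**3) + RV
    k_v = 2.659 * (-1.857 + 1.040 / 0.55) + RV
    return k / k_v

def tauv_bd(f_halpha, f_hbeta, intrinsic=2.86):
    """Optical depth at V implied by the observed Balmer decrement."""
    return math.log((f_halpha / f_hbeta) / intrinsic) / (
        k_calzetti(4861.0) - k_calzetti(6563.0))
```

A decrement at exactly the intrinsic 2.86 gives τV = 0, and larger observed decrements give positive optical depths.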
The SFH models return samples from the posteriors of the emission lines, from which are calculated sample values of \(\tau_V^{bd}\).
Here is a plot of the estimated attenuation from the Balmer decrement vs. the SFH model estimates for all spectra from the 28 galaxy sample in the last two posts that have BPT classifications other than no or weak emission. Error bars are ±1 standard deviation.
τV(bd) vs. τV(stellar) for 962 binned spectra in 28 MaNGA galaxies. Cloud of lines is from the fit described in the text. Solid and dashed lines are 1:1 and 2:1 relations.
It’s well known that attenuation in emission line regions is larger than that of the surrounding starlight, with a typical reddening ratio of ∼2 (among many references see the review by Calzetti (2001) and Charlot and Fall (2000)). One thing that’s clear in this plot that I haven’t seen explicitly mentioned in the literature is that even in the limit of no attenuation of starlight there typically is some in the emission lines. I ran the regression with measurement error model on this data set, and got the estimated relationship \(\tau_V^{bd} = 0.8 (\pm 0.05) + 1.7 ( \pm 0.09) \tau_V^{stellar}\) with a rather large estimated scatter of ≈ 0.45. So the slope is a little shallower than what’s typically assumed. The non-zero intercept seems to be a robust result, although it’s possible the models are systematically underestimating Hβ emission. I have no other reason to suspect that, though.
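For readers who want the flavor of the measurement error correction without the full Bayesian machinery, here’s a minimal simulated sketch using the classical method-of-moments fix (the σ values and sample size are made up; my actual model is the Stan regression described last time):

```python
import random, statistics

random.seed(1)
n = 500
true_slope, true_intercept = 1.7, 0.8     # roughly the fitted relation above
sx, sy = 0.1, 0.1                          # made-up measurement sigmas

x_true = [random.uniform(0.0, 1.5) for _ in range(n)]
x_obs = [x + random.gauss(0.0, sx) for x in x_true]
y_obs = [true_intercept + true_slope * x + random.gauss(0.0, sy) for x in x_true]

# Naive least squares is biased toward zero by the errors in x...
mx, my = statistics.fmean(x_obs), statistics.fmean(y_obs)
cov = sum((a - mx) * (b - my) for a, b in zip(x_obs, y_obs)) / (n - 1)
var = statistics.variance(x_obs)
slope_ols = cov / var
# ...and dividing by the reliability ratio approximately undoes the bias
slope_corr = slope_ols / (1.0 - sx**2 / var)
```

Ordinary least squares underestimates the slope when the x values carry errors (“attenuation bias,” appropriately enough here); the correction approximately recovers the true value.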
The large scatter shouldn’t be too much of a surprise. The shape of the attenuation curve is known to vary between and even within galaxies. Adopting a single canonical value for the Balmer decrement may be an oversimplification too, especially for regions ionized by mechanisms other than hot young stars. My models may be overdue for a more flexible prescription for attenuation.
The statistical assumptions of the measurement error model are a little suspect in this data set as well. The attenuation parameter tauv is constrained to be positive in the models. When it wants to be near 0 the samples from the posterior will pile up near 0 with a tail of larger values, looking more like draws from an exponential or gamma distribution than a gaussian. Here is an example from one galaxy in the sample that happens to have a wide range of mean attenuation estimates:
Histograms of samples from the marginal posterior distributions of the parameter tauv for 4 spectra from plateifu 8080-3702.
I like theoretical quantile-quantile plots better than histograms for this type of visualization:
Normal quantile-quantile plots of samples from the marginal posterior distributions of the parameter tauv for 4 spectra from plateifu 8080-3702.
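The same diagnosis can be made numerically. This sketch draws a half-normal “posterior” (a stand-in for tauv piling up at zero; the scale is arbitrary) and compares the lower- and upper-half slopes of its normal Q-Q plot; a gaussian sample would give nearly equal slopes:

```python
import random
from statistics import NormalDist

random.seed(2)
# Posterior-like draws piled up at zero: half-normal instead of gaussian
draws = sorted(abs(random.gauss(0.0, 0.1)) for _ in range(2000))

# Theoretical normal quantiles at plotting positions (i + 0.5)/n
nd = NormalDist()
theo = [nd.inv_cdf((i + 0.5) / len(draws)) for i in range(len(draws))]

def slope(xs, ys):
    """Least squares slope, used to summarize a segment of the Q-Q plot."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return num / sum((a - mx) ** 2 for a in xs)

# The pile-up at zero makes the Q-Q plot bend upward: the upper-half slope is
# well above the lower-half slope
half = len(draws) // 2
lo, hi = slope(theo[:half], draws[:half]), slope(theo[half:], draws[half:])
```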
I haven’t looked at the distributions of emission line ratios in much detail. They might behave strangely in some cases too. But regardless of the validity of the statistical model applied to this data set it’s apparent that there is a fairly strong correlation, which is encouraging.
I’m going to close out my analysis of Mrk 848 for now with three topics. First, dust. Like most SED fitting codes mine produces an estimate of the internal attenuation, which I parameterize with τV, the optical depth at V assuming a conventional Calzetti attenuation curve. Before getting into a discussion, for context here is a map of the posterior mean estimate for the higher S/N target binning of the data. For reference, isophotes of the synthesized r band surface brightness taken from the MaNGA data cube are superimposed:
Map of posterior mean of τV from single component dust model fits with Calzetti attenuation
This compares reasonably well with my visual impression of the dust distribution. Both nuclei have very large dust optical depths with a gradual decline outside, while the northern tidal tail has relatively little attenuation.
The paper by Yuan et al. that I looked at last time devoted most of its space to different ways of modeling dust attenuation, ultimately concluding that a two component dust model of the sort advocated by Charlot and Fall (2000) was needed to bring results of full spectral fitting using ppxf on the same MaNGA data as I’ve examined into reasonable agreement with broad band UV-IR SED fits.
There’s certainly some evidence in support of this. Here is a plot I’ve shown for some other systems: the estimated optical depth of the Balmer emission line regions, based on the observed vs. theoretical Balmer decrement (I’ve assumed an intrinsic Hα/Hβ ratio of 2.86 and a Calzetti attenuation relation), plotted against the optical depth estimated from the SFH models, which roughly measures the amount of reddening needed to fit the SSP model spectra to the observed continuum. In some respects this is a more favorable system than some I’ve looked at because Hβ emission is at measurable levels throughout. On the other hand there is clear evidence that multiple ionization mechanisms are at work, so the assumption of a single canonical value of Hα/Hβ is likely too simple. This might be a partial cause of the scatter in the modeled relationship, but it’s encouraging that there is a strong positive correlation (for those who care, the correlation coefficient between the mean values is 0.8).
The solid line in the graph below is 1:1. The semi-transparent cloud of lines are the sampled relationships from a Bayesian errors in variables regression model. The mean (and marginal 1σ uncertainty) is \(\tau_{V, bd} = (0.94\pm 0.11) + (1.21 \pm 0.12) \tau_V\). So the estimated relationship is just a little steeper than 1:1 but with an offset of about 1, which is a little different from the Charlot & Fall model and from what Yuan et al. found, where the youngest stellar component has about 2-3 times the dust attenuation as the older stellar population. I’ve seen a similar not so steep relationship in every system I’ve looked at and don’t know why it differs from what is typically assumed. I may look into it some day.
τV estimated from Balmer decrement vs. τV from model fits. Straight line is 1:1 relation. Cloud of lines are from Bayesian errors in variables regression model.
I did have time to run some 2 dust component SFH models. This is a very simple extension of the single component models: a single optical depth is applied to all SSP spectra. A second component with the optical depth fixed at 1 larger than the bulk value is applied only to the youngest model spectra, which recall were taken from unevolved SSPs from the updated BC03 library. I’m just going to show the most basic results from the models for now in the form of maps of the SFR density and specific star formation rate. Compared to the same maps displayed at the end of the last post there is very little difference in spatial variation of these quantities. The main effect of adding more reddened young populations to the model is to replace some of the older light — this is the well known dust-age degeneracy. The average effect was to increase the stellar mass density (by ≈ 0.05 dex overall) while slightly decreasing the 100Myr average SFR (by ≈ 0.04 dex), leading to an average decrease in specific star formation rate of ≈ 0.09 dex. While there are some spatial variations in all of these quantities no qualitative conclusion changes very much.
Results from fits with 2 component dust models.
(L) SFR density.
(R) Specific SFR
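For concreteness, the two-component prescription amounts to nothing more than this (a toy scalar version; in the real models the extra optical depth multiplies the youngest SSP spectra across all wavelengths):

```python
import math

def attenuated_flux(flux, tauv, k_lam, young, extra_tau=1.0):
    """Bulk optical depth tauv for all populations; the youngest SSP spectra
    get an additional fixed optical depth extra_tau."""
    tau = tauv + (extra_tau if young else 0.0)
    return flux * math.exp(-tau * k_lam)

# At V (k = 1) with a bulk tau_V of 0.5, old light is dimmed by e^-0.5 and
# the youngest component by e^-1.5
old = attenuated_flux(1.0, 0.5, 1.0, young=False)
young = attenuated_flux(1.0, 0.5, 1.0, young=True)
```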
Contrary to Yuan+ I don’t find a clear need for a 2 component dust model. Without trying to replicate their results I can’t say why exactly we disagree, but I think they erred in aggregating the MaNGA data to the lowest spatial resolution of the broad band photometric data they used, which was 5″ radius. There are clear variations in physical conditions on much smaller scales than this.
Second topic: the most widely accepted SFR indicator in visual wavelengths is Hα luminosity. Here is another plot I’ve displayed previously: a simple scatterplot of Hα luminosity density against the 100Myr averaged star formation rate density from the SFH models. Luminosity density is corrected for attenuation estimated from the Balmer decrement and for comparison the light gray points are the uncorrected values. Points are color coded by BPT class determined in the usual manner. The straight line is the Hα – SFR calibration of Moustakas et al. (2006), which in turn is taken from earlier work by Kennicutt.
Model SFR density vs. Hα luminosity density corrected for extinction estimated from Balmer decrement. Light colored points are uncorrected for extinction. Straight line is Hα-SFR calibration from Moustakas et al. (2006)
Keeping in mind that Hα emission tracks star formation on timescales of ∼10 Myr (to the extent that ionization is caused by hot young stars; there are evidently multiple ionizing sources in this system, but disentangling their effects seems hard, and note there’s no clear stratification by BPT class in this plot), this graph strongly supports the scenario I outlined in the last post. At the highest Hα luminosities the SFR-Hα trend nicely straddles the Kennicutt-Moustakas calibration, consistent with the finding that the central regions of the two galaxies have had ∼constant or slowly rising star formation rates in the recent past. At lower Hα luminosities the 100 Myr average trends consistently above the calibration line, implying a recent fading of star formation.
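The calibration line itself is the standard Kennicutt relation that Moustakas et al. adopt; in code form (luminosity in erg/s, SFR in solar masses per year):

```python
def sfr_from_halpha(l_halpha):
    """SFR in solar masses per year from L(Halpha) in erg/s,
    using the Kennicutt (1998) coefficient adopted by Moustakas et al."""
    return 7.9e-42 * l_halpha

# e.g. a region with L(Halpha) = 1e40 erg/s is forming ~0.08 Msun/yr
```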
The maps below add some detail, and here the perceptual uniformity of the viridis color palette really helps. If star formation exactly tracked Hα luminosity these two maps would look the same. Instead the northern tidal tail in particular and the small piece of the southern one within the IFU footprint are underluminous in Hα, again implying a recent fading of star formation in the peripheries.
(L) Hα luminosity density, corrected for extinction estimated by Balmer decrement.
(R) SFR density (100 Myr average).
Final topic: the fit to the data, and in particular the emission lines. As I’ve mentioned previously I fit the stellar contribution and emission lines simultaneously, generally assuming separate single component gaussian velocity dispersions and a common system velocity offset. This works well for most galaxies, but for active galaxies or systems like this one with complex velocity profiles maybe not so much. In particular the northern nuclear region is known to have high velocity outflows in both ionized and neutral gas, presumably due to supernova driven winds. I’m just going to look at the central fiber spectrum for now. I haven’t examined the fits in detail, but in general they get better outside the immediate region of the center. First, here is the fit to the data using my standard model. In the top panel the gray line, which mostly can’t be seen, is the observed spectrum. Blue are quantiles of the posterior mean fit — this is actually a ribbon, although its width is too thin to be discernible. The bottom panel shows the residuals in standard deviations. Yes, they run as large as ±50σ, with conspicuous problems around all emission lines. There are also a number of usually weak emission lines that I don’t track that are present in this spectrum.
Fit to central fiber spectrum; model with single gaussian velocity distributions.
I have a solution for cases like this which I call partially parametric. I assume the usual Gauss-Hermite form for the emission lines (as in, for example, ppxf) while the stellar velocity distribution is modeled with a convolution kernel (I think I’ve discussed this previously but I’m too lazy to check right now; if I haven’t I’ll post about it someday). Unfortunately the Stan implementation of this model takes at least an order of magnitude longer to execute than my standard one, which makes its wholesale use prohibitively expensive. It does materially improve the fit to this spectrum, although there are still problems with the stronger emission lines. Let’s zoom in on a few crucial regions of the spectrum:
Zoomed in fit to central fiber spectrum using “partially parametric velocity distribution” model. Grey: observed flux. Blue: model.
Two things are evident here: there are clear signs of outflow in the forbidden emission lines, particularly [O III] and [N II], while the Balmer lines are relatively more symmetrical, as is the [S II] doublet at 6717, 6731Å. The existence of rather significant residuals is likely because emission is coming from at least two physically distinct regions, while the fit to the data is mostly driven by Hα, which as usual is the strongest emission line. The fit captures the emission line cores in the high order Balmer lines rather well, and also the absorption lines on the blue side of the 4000Å break except for the region around the [Ne III] line at 3869Å.
I’m mostly interested in star formation histories, and it’s important to see what differences are present. Here is a comparison of three models: my standard one, the two dust component model, and the partially parametric velocity dispersion model:
Detailed star formation history models for the northern nucleus using 3 different models.
In fact the differences are small and not clearly outside the range of normal MCMC variability. The two dust model slightly increases the contribution of the youngest stellar component at the expense of slightly older contributors. All three have the presumably artifactual uptick in SFR at 4 Gyr and very similar estimated histories for ages >1 Gyr.
I still envision a number of future model refinements. The current version of the official data analysis pipeline tracks several more emission lines than I do at present and has updated wavelengths that may be more accurate than the ones from the SDSS spectro pipeline. It might be useful to allow at least two distinct emission line velocity distributions, with for example one for recombination lines and one for forbidden. Unfortunately the computational expense of this sort of generalization at present is prohibitive.
I’m not impressed with the two dust model that I tried, but there may still be useful refinements to the attenuation model to be made. A more flexible form of the Calzetti relation might be useful, for example (there is recent relevant literature on this topic that I’m again too lazy to look up).
My initial impression of this system was that it was a clear false positive that was selected mostly because of a spurious BPT classification. On further reflection with MaNGA data available it’s not so clear. A slight surprise is the strong Balmer absorption virtually throughout the system, with evidence for a recent shut down of star formation in the tidal tails. A popular scenario for the formation of K+A galaxies through major mergers is that they experience a centrally concentrated starburst after coalescence which, once the dust clears and assuming that some feedback mechanism shuts off star formation, leads to a period of up to a Gyr or so with a classic K+A signature (I previously cited Bekki et al. 2005, who examine this scenario in considerable detail). Capturing a merger in the instant before final coalescence provides important clues about this process.
To the best of my knowledge there have been no attempts at dynamical modeling of this particular system. There is now reasonably good kinematic information for the region covered by the MaNGA IFU, and there is good photometric data from both HST and several imaging surveys. Together these make detailed dynamical modeling technically feasible. It would be interesting if star formation histories could further constrain such models. Being aware of the multiple “degeneracies” between stellar age and other physical properties I’m not highly confident, but it seems provocative that we can apparently identify distinct stages in the evolutionary history of this system.
In this post I’m going to take a quick look at detailed, spatially resolved star formation histories for this galaxy pair and briefly compare to some of the recent literature. Before I start though, here is a reprocessed version of the blog’s cover picture with a different crop and some Photoshop curve and level adjustments to brighten the tidal tails a bit. Also I cleaned up some more of the cosmic ray hits.
Markarian 848 – full resolution crop with level curve adjustment
HST ACS/WFC F435W/F814W/F160W false color image
The SFH models I’m going to discuss were based on MaNGA data binned to a more conservative signal to noise target than in the last post. I set a target S/N of 25 — Cappellari’s Voronoi binning algorithm has a hard coded acceptance threshold of 80% of the target S/N, which results in all but a few bins having an actual S/N of at least 20. This produced a binned data set with 63 bins. The map below plots the modeled (posterior mean log) stellar mass density, with clear local peaks in the bins covering the positions of the two nuclei. The bins are numbered in order of increasing distance of the weighted centroids from the IFU center, which corresponds with the position of the northern nucleus. The center of bin 1 is slightly offset from the northern nucleus by about 2/3″. For reference the angular scale at the system redshift (z ≈ 0.040) is 0.8 kpc/″ and the area covered by each fiber is 2 kpc².
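The quoted angular scale is easy to check with a little cosmology (assuming flat ΛCDM with H0 = 70 and Ωm = 0.3; those parameter values are my assumptions, not stated anywhere in these posts):

```python
import math

# Assumed cosmology: flat LCDM, H0 = 70 km/s/Mpc, Omega_m = 0.3
H0, OM = 70.0, 0.3
C_KMS = 299792.458

def angular_scale(z, steps=1000):
    """kpc per arcsecond at redshift z (simple trapezoidal comoving integral)."""
    e = lambda zz: math.sqrt(OM * (1.0 + zz) ** 3 + (1.0 - OM))
    h = z / steps
    integral = sum(0.5 * h * (1.0 / e(i * h) + 1.0 / e((i + 1) * h))
                   for i in range(steps))
    d_a = C_KMS / H0 * integral / (1.0 + z)     # angular diameter distance, Mpc
    return d_a * 1000.0 * math.pi / (180.0 * 3600.0)

scale = angular_scale(0.040)                    # ~0.8 kpc/arcsec
fiber_area = math.pi * (1.0 * scale) ** 2       # 2" diameter fiber -> ~2 kpc^2
```

This reproduces both the 0.8 kpc/″ scale and, for the 2″ diameter MaNGA fibers, the ≈2 kpc² fiber footprint quoted above.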
Although there’s nothing in the Voronoi binning algorithm that guarantees this it did a nice job of partitioning the data into physically distinct regions, with the two nuclei, the area around the northern nucleus, and the bridge between them all binned to single fiber spectra. The tidal tails are sliced into several pieces while the very low surface brightness regions to the NE and SW get a few bins.
Stellar mass density and bin numbers ordered in increasing distance from northern nucleus
I modeled star formation histories with my largest subset of the EMILES library consisting of SSP model spectra for 54 time bins and 4 metallicity bins. As in previous exercises I fit emission lines and the stellar contribution simultaneously, which is riskier than usual in this system because some regions, especially near the northern nucleus, have complex velocity structures producing emission line profiles that are far from single component gaussians. I’ll look at this in more detail in a future post, but for now let’s just take these SFH models at face value and see where they lead. As I’ve done in several previous posts what’s plotted below are star formation rates in solar masses/year over cosmic time, counting backwards from “now” (now being when the light left the galaxies about 500Myr ago). This time I’ve used log scaling for both axes and for the first time in these posts I’ve displayed results for every bin ordered by increasing distance from the IFU center. The first two rows cover the central ~few kpc region surrounding the northern nucleus. The southern nucleus covered by bin 44 is in row 8, second from left. Both the time and SFR scales are the same for all plots. The black lines are median posterior marginal estimates and the ribbons are 95% posterior confidence intervals.
Star formation histories for the Voronoi binned regions shown in the previous map. Numbers at the top correspond to the bin numbers in the map.
Browsing through these, all regions show ∼constant or gradually decreasing star formation up to about 1 Gyr ago (you may recall I’ve previously noted there is always an uptick in star formation rate at 4 Gyr in my EMILES based models, and that’s seen in all of these as well; this must be spurious, but I still don’t know the cause). This of course is completely normal for spiral galaxies.
Most regions covered by the IFU began to show accelerated star formation, and in some areas bursts, at ∼1 Gyr. In more than half of the bins the maximum star formation rate occurred around 40-60 Myr ago, with a decline or in some areas cessation of star formation more recently. In the central few kpc around the northern nucleus, on the other hand, star formation accelerated rapidly beginning ~100 Myr ago and continues at high levels. The southern nucleus has a rather different estimated recent star formation history, with no (visible) starburst and instead a gradually increasing SFR to a recent maximum. Ongoing star formation at measurable levels is much more localized in the southern central regions and weaker by factors of several than in the northern companion.
Here’s a map that I thought would be too noisy to be informative, but turns out to be rather interesting. This shows the spatial distribution of the lookback time to the epoch of maximum star formation rate estimated by the marginal posterior mean. The units displayed are log(age) in Myr; recall that I added unevolved SSP models from the 2013 update of the BC03 models to the BaSTI based SSP library, assigning them an age of 10Myr, so a value of 1 here basically means ≈ now.
Look back time to epoch of maximum star formation rate as estimated by marginal posterior mean. Units are log(age in Myr).
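The map quantity is simple to compute from the posterior draws. Here is a toy sketch with made-up variable names and numbers (the actual model output isn't reproduced here): for each posterior draw, take the age bin where the SFR peaks, then average the log lookback times:

```r
# Toy sketch: lookback time to peak SFR, estimated by the marginal posterior mean.
# sfr_draws: one row per posterior draw, one column per time bin (made-up values)
# ages: lookback times of the SSP time bins in Myr (made-up grid)
ages <- c(10, 100, 1000, 10000)
sfr_draws <- rbind(c(1, 2, 5, 1),
                   c(1, 3, 4, 1),
                   c(2, 2, 6, 1))
t_peak <- apply(sfr_draws, 1, function(s) ages[which.max(s)])
mean(log10(t_peak))   # posterior mean log lookback time to peak SFR
```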
To summarize, there have been three phases in the star formation history of this system: a long period of normal disk galaxy evolution; next beginning about 1 Gyr ago widespread acceleration of the SFR with localized bursts; and now, within the past 50-100 Myr a rapid increase in star formation that’s centrally concentrated in the two nuclei (but mostly the northern one) while the peripheral regions have had suppressed activity.
I previously reviewed some of the recent literature on numerical studies of galaxy mergers. The high resolution simulations of Hopkins et al. (2013) and the movies available online at http://www.tapir.caltech.edu/~phopkins/Site/Movies_sbw_mgr.html seem especially relevant, particularly their Milky Way and Sbc analogs. They predict a general but clumpy enhancement of star formation around the time of first pericenter passage and a stronger, centrally concentrated starburst at the time of second pericenter passage, with a shorter period of separation before coalescence. Surprisingly perhaps, the merger timescales for both their MW and Sbc analogs are very similar to my SFH models, with ∼ 1 Gyr between first and second perigalactic passage and another few tens of Myr to final coalescence.
I’m going to wrap up with maps of the posterior mean estimates of star formation rate density (as \(\log_{10}(\mathsf{M_\odot/yr/kpc^2})\)) and specific star formation rate, which has units of \(\log_{10}(\mathsf{yr^{-1}})\). These are 100 Myr averages and, recall, are based solely on the SSP template contributions.
(L) Star formation rate density
(R) Specific star formation rate
Several recent papers have made quantitative estimates of star formation rates in this system, which I’ll briefly review. Detailed comparisons are somewhat difficult because both the timescales probed by different SFR calibrators differ and the spatial extent considered in the studies varies, so I’ll just summarize reported results and compare as best as I can.
Yuan et al. (2018) modeled broad band UV-IR SEDs taking data from GALEX, SDSS, and Spitzer; and also did full spectrum fitting using ppxf on the same MaNGA data I’ve been examining. They divided the data into 5″ (≈ 4 kpc) radius regions covering the tidal tails and each nucleus. The broadband data were fit with parametric SFH models with a pair of exponential bursts (one fixed at 13 Gyr, the other allowed to vary). From the parametric models they estimated the SFR in the northern and southern nuclei as 76 and 11 \(\mathsf{M_\odot/yr}\) (the exact interpretation of these values is unclear to me). For comparison I get values of 33 and 13 \(\mathsf{M_\odot/yr}\) for regions within 4 kpc of the two nuclei by calculating the average (posterior mean) star forming density and multiplying by π × 4². They also calculated 100 Myr average SFRs from full spectrum fitting with several different dust models and also from the Hα luminosity, deriving estimates that varied by an order of magnitude or more. Qualitatively we reach similar conclusions: the tails had earlier starbursts and are now forming stars at lower rates than their peak, and the northern nucleus has recent star formation a factor of several higher than the southern.
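The aperture calculation is a one-liner; here it is spelled out with a made-up placeholder value for the average density (the actual posterior mean isn't reproduced here):

```r
# Aperture SFR estimate: average posterior-mean SFR surface density within
# 4 kpc of a nucleus, times the aperture area. sigma_sfr is a placeholder.
sigma_sfr <- 0.65             # hypothetical average density, Msun/yr/kpc^2
area <- pi * 4^2              # aperture area, kpc^2 (about 50.3)
sfr <- sigma_sfr * area       # aperture SFR, Msun/yr
```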
Cluver et al. (2017) were primarily trying to derive SFR calibrations for WISE W3/W4 filters using samples of normal star forming and LIRG/ULIRGs (including this system) with Spitzer and Herschel observations. Oddly, although they tabulate IR luminosities for the entire sample they don’t tabulate SFR estimates. But plugging into their equations 5 and 7 I get star formation rate estimates of just about 100 \(\mathsf{M_\odot/yr}\). These are global values for the entire system. For comparison I get a summed SFR for my models of ≈ 45 \(\mathsf{M_\odot/yr}\) (after making a 0.2 dex adjustment for fiber overlap).
Tsai and Hwang (2015) also used IR data from Spitzer and conveniently present results for quantities I track using the same units. Their estimates of the (log) stellar mass density, SFR density, and specific star formation rate in the central kpc of (presumably) the northern nucleus were 9.85±0.09, 0.43±0.00, and -9.39±0.09 respectively. For comparison in the fiber that covers the northern nucleus my models return 9.41±0.02, 0.57±0.04, and -8.84±0.04. For the 6 fibers closest to the nucleus the average stellar mass density drops to 9.17 and SFR density to about 0.39. So, our SFR estimates are surprisingly close while my mass density estimate is as much as 0.7 dex lower.
Finally, Vega et al. (2008) performed starburst model fits to broadband IR-radio data. They estimated a burst age of about 60Myr for this system, with average SFR over the burst of 225 \(\mathsf{M_\odot/yr}\) and current (last 10 Myr) SFR 87 \(\mathsf{M_\odot/yr}\) in a starforming region of radius 0.27kpc. Their model has optical depth of 33 at 1 μm, which would make their putative starburst completely invisible at optical wavelengths. Their calculations assumed a Salpeter IMF, which would add ≈0.2 dex to stellar masses and star formation rates compared to the Kroupa IMF used in my models.
Overall I find it encouraging that my model SFR estimates are no worse than factors of several lower than those obtained from IR data — if the Vega+ estimate of the dust optical depth is correct most of the star formation is well hidden. Next time I plan to look at Hα emission and tie up other loose ends. If time permits before I have to drop this for a while I may look at two component dust models.
This galaxy (also known as VV705, I Zw 107, and IRAS F15163+4255, among others) has been my feature image since I started this blog. Why is that, besides that it’s kind of cool looking? As I’ve mentioned before I took a shot a few years ago at selecting a sample of “transitional” galaxies from the SDSS spectroscopic sample, that is ones that may be in the process of rapidly shutting down star formation (see for example Alatalo et al. 2017 for recent usage of this terminology). I based the selection on a combination of strong Hδ absorption and either weak emission lines or line ratios other than pure starforming, using measurements from the MPA-JHU pipeline. This galaxy pair made the sample based on the spectrum centered on the northern nucleus (there is also a spectrum of the southern nucleus from the BOSS spectrograph, but no MPA pipeline measurements). Well now, these galaxies are certainly transitioning to something, but they’re probably not shutting down star formation just yet. Simulations of gas rich mergers generally predict a starburst that peaks around the time of final coalescence. There is also significant current star formation, as high as 100-200 \(M_\odot/yr\) per various literature estimates, although it is mostly hidden. So on the face of it at least this appears to be a false positive. This was also an early MaNGA target, and one of a small number where both nuclei of an ongoing merger are covered by a single IFU:
Markarian 848. SDSS image thumbnail with MaNGA IFU footprint.
Today I’m going to look at a few results of my analysis of the IFU data that aren’t too model dependent to get some insight into why this system was selected. As usual I’m looking at the stacked RSS data rather than the data cubes, and for this exercise I Voronoi binned the spectra to a very low target SNR of 6.25. This leaves most of the fibers unbinned in the vicinity of the two nuclei. First, here is a map of BPT classification based on the [O III]/Hβ vs. [N II]/Hα diagnostic as well as scatterplots of several BPT diagnostics. Unfortunately the software I use for visualizing binned maps extends the bins out to the edge of the bounding rectangle rather than the hexagonally shaped edge of the data, which is outlined in red. Also note that different color coding is used for the scatter plots than the map. The contour lines in the map are arbitrarily spaced isophotes of the synthesized R band image supplied with the data cube.
Map of BPT class determined from the [O III]/Hβ vs. [N II]/Hα diagnostic diagram, and BPT diagnostics for [N II], [S II] and [O I]. Curves are boundary lines from Kewley et al. (2006).
What’s immediately obvious is that most of the area covered by the IFU, including both nuclei, falls in the so-called “composite” region of the [N II]/Hα BPT diagram. This gets me back to something I’ve complained about previously. There was never a clear physical justification for the composite designation (which recall was first proposed in Kauffmann et al. 2003), and the upper demarcation line between “pure” AGN and composite systems as shown in the graph at upper right was especially questionable. It’s now known, if it wasn’t already at the turn of the century (which I doubt), that a number of ionization mechanisms can produce line ratios that fall generally in the composite/LINER regions of BPT diagnostics. Shocks in particular are important in ongoing mergers. High velocity outflows of both ionized and neutral gas have been observed in the northern nucleus by Rupke and Veilleux (2013), who attributed them to supernova driven winds.
The evidence for AGN in either nucleus is somewhat ambiguous. Fu et al. (2018) called this system a binary AGN, but that was based on the “composite” BPT line ratios from the same MaNGA data as we are examining (their map, by the way, is nearly identical to mine; see also Yuan et al. 2018). By contrast Vega et al. 2008 were able to fit the entire SED from NIR to radio frequencies with a pure starburst model and no AGN contribution at all, while more recently Dietrich et al. 2018 estimated the AGN fraction to be around 0.25 from NIR to FIR SED fitting. A similar conclusion that both nuclei contain both a starburst and AGN component was reached by Vardoulaki et al. 2015 based on radio and NIR data. One thing I haven’t seen commented on in the MaNGA data that possibly supports the idea that the southern nucleus harbors an AGN is that the regions with unambiguous AGN-like optical emission line ratios are fairly symmetrically located on either side of the southern nucleus and separated from it by ∼1-2 kpc. This could perhaps indicate the AGN is obscured from our view but visible from other angles.
There are also several areas with starforming emission line ratios just to the north and east of the northern nucleus and scattered along the northern tidal tail (the southern tail is largely outside the IFU footprint). In the cutout below taken from a false color composite of the F435W+F814W ACS images several bright star clusters can be seen just outside the more heavily obscured nuclear region, and these are likely sources of ionizing photons.
Markarian 848 nuclei. Cutout from HST ACS F814W+F435W images.
Finally turning to the other component of the selection criteria, here is a map of the (pseudo) Lick HδA index and a plot of the widely used HδA vs Dn(4000) diagnostic. It’s a little hard to see a clear pattern in the map because this is a rather noisy index, but strong Balmer absorption is seen pretty much throughout, with the highest values outside the two nuclei and especially along the northern tidal tail.
Location in the Hδ – D4000 plane doesn’t uniquely constrain the star formation history, but the contour plot taken from a largish sample of SDSS spectra from the NGP is clearly bimodal, with mostly starforming galaxies at upper left, passively evolving ones at lower right, and a long “green valley” in between. Simple models of post-starburst galaxies will loop upwards and to the right in this plane as the starburst ages before fading towards the green valley. This is exactly where most of the points in this diagram lie, which certainly suggests an interesting recent star formation history.
(L) Map of HδA index.
(R) HδA vs Dn(4000) index. Contours are for a sample of SDSS spectra from the north galactic pole.
I’m going to end with a bit of speculation. In simulations of gas rich major mergers the progenitors generally complete two or three orbits before final coalescence, with some enhancement of star formation during the perigalactic passages and perhaps some ebbing in between. This process plays out over hundreds of Myr to a few Gyr. What I think we are seeing now is the second or third encounter of this pair, with the previous encounters having left an imprint in the star formation history.
I’ve done SFH modeling for this binning of the data, and also for data binned to higher SNR and modeled with higher time resolution models. Next post I’ll look at these in more detail.
This post will be short. Eventually I’d like to add a reproducible real data example or two to the documentation of the software I just officially published. Even though I’m mostly working with MaNGA data these days it may not be practical to include an example since the smallest RSS file is ∼20MB and it can take days to fully process a single data set. The single fiber SDSS spectra on the other hand are only ~170-210KB in “lite” form. While I’m hunting around for suitable examples I’ll post a few results here. For today’s post, here are a couple of randomly selected early type galaxies from my north galactic pole sample:
SDSS J121810.72+280230.4
SDSS J122140.20+222108.3
These don’t seem to be in any historical catalogs, so I’ll just call them by their SDSS designations of J121810.72+280230.4 and J122140.20+222108.3, or J1218+2802 and J1221+2221 for shorthand. The first appears to me to be a garden variety elliptical; the second might be an S0 — neither has a morphological classification listed in NED (a majority of GZ classifiers in both GZ1 and GZ2 called the second an elliptical). Galaxy 2 has a redshift of z=0.023, which puts it in or near the Coma cluster. Galaxy 1 is well in the background at z=0.081.
I modeled the star formation histories with my standard tools and the “full” EMILES SSP library (that is, all 54 time bins in 4 metallicity bins). The Stan models both completed without warnings (one required some adjustment of control parameters). Here are some basic results. First, star formation histories, displayed as star formation rate against lookback time. I find it can be useful to use linear or logarithmic scales on either or both axes. Here I show SFR scaled both ways:
Star formation histories for 2 passively evolving galaxies
So-called non-parametric models are hoped-for gold standards for SFH modeling, and here we see part of the reason: these two galaxies have rather different estimated mass assembly histories despite similar looking spectra (see below or follow the links), and neither estimate is particularly close to commonly chosen functional forms.
There are some oddities to beware of however. For example literally 100% of the models I’ve run with any variation of the EMILES library show an uptick in mean star formation rate at 4 Gyr, independent of the absolute estimated SFR. This obviously can’t be real, but I haven’t yet found an error or other specific reason for it. There may be other persistent zigs and zags in SFR at specific times, and these can vary with the SSP library. Overall I assign no particular significance to small variations in mean estimated star formation rates, especially at early times. I’m somewhat more inclined to take seriously late time variations (say, within the most recent Gyr), which may be wishful thinking.
The data aren’t inconsistent with a small amount of ongoing star formation (estimates of specific star formation rate averaged over 100 Myr are shown — these have units of \(\mathsf{yr^{-1}}\)). SSFRs in the range of \(\mathsf{\sim 10^{-11}\,yr^{-1}}\) are about an order of magnitude lower than for an ordinary starforming galaxy in the local universe.
Posterior distribution of specific star formation rate (100 Myr timescale)
Fits to the data are comparable, and satisfactory over most of the wavelength range of the spectra. A recent paper on arxiv (article id 1811.09799) mentioned some fitting issues with EMILES SSP libraries in the red. This can be seen here in the range ~6750-7500Å (rest frame). I don’t have an explanation either. It might be significant that the MILES stellar library spectra extend only to ~7300Å, with other data sources grafted on to the red extension, so there could be a calibration problem. On the other hand the prominent dip in this range is at least partly due to TiO absorption, so this could point to metal abundance variations not captured by EMILES.
Finally, I find it useful to model attenuation, even in ETGs that are often considered dust free. I just use a Calzetti attenuation curve in these models even though it was based on starburst galaxy SEDs and therefore may not be appropriate. This is a topic for possible future model refinement.
It took longer than I had hoped, but I finally have the R and Stan software that I use for star formation history modeling under version control and uploaded to github. There are 3 R packages, two of which are newly available:
spmutils: The actual code for maximum likelihood and Bayesian modeling of star formation histories. There are also tools for reading FITS files from SDSS single fiber spectra and from MaNGA, visualization, and analysis.
cosmo: A simple, lightweight cosmological distance calculator.
sfdmap: A medium resolution version of the all sky dust map of Schlegel, Finkbeiner and Davis (SFD) and a pair of functions for rapid access.
So far only cosmo is documented to minimal R standards. It’s also the most generally useful, at least if you need to calculate cosmological distances. sfdmap has skeleton help files; spmutils not even that. I plan to correct that, but probably not in the next several weeks.
The R packages are pure R code and can be installed directly from github using devtools:
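A minimal install sketch follows. Note that "username" is a placeholder, as the github account hosting the repositories isn't stated here; the package names are taken from the list above:

```r
# Hypothetical install commands; replace "username" with the actual
# github account hosting these repositories.
install.packages("devtools")                 # if not already installed
devtools::install_github("username/cosmo")
devtools::install_github("username/sfdmap")
devtools::install_github("username/spmutils")
```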
The Stan files for SFH modeling that I’m currently working with are in spmutils/stan. I keep the Stan files in the directory ~/spmcode on my local machines, and that’s the default location in the functions that call them.
This code mushroomed over a period of several years without much in the way of design, so it’s not much use without some documentation of typical workflow. I will get around to that sometime soon (I hope).
In the last post I showed that the posterior distribution of the model parameters as estimated by Stan looks rather different than the distribution of maximum likelihood estimates under repeated realizations of the noisy “observation” vector, but I didn’t really discuss why they look different. For starters it’s not because the Stan model failed to converge, nor is there any evidence for missed multimodality in the posterior. Although Stan threw some warnings for this model run, they involved fewer than 1% of the post-warmup iterations and there was no sign of pathological behavior (by the way I recommend the package shinystan for a holistic look at Stan model outputs). Multimodality can happen — often it manifests as one or more chains converging to different distributions than the others, but sometimes Stan can fail to find extra modes. I haven’t seen any evidence for this in repeated runs of this and similar models, nor for that matter in real data SFH modeling.

I mentioned that I copied the prior over from my real data models, and that it's probably not quite what I wanted to use, which gets us into the topic of investigating the effect of the prior in shaping the posterior. That’s not what caused the difference either, but I’ll look at it anyway.
The real reason I think is one that I still find paradoxical or at least sorta counterintuitive. The main task of computational Bayesian inference is to explore the “typical set” (a concept borrowed from information theory). There’s a nice case study by one of Stan’s principal developers showing through geometric arguments and some simple simulations that the typical set can be, and usually is, far (in the sense of Euclidean distance) from the most likely value and the distance increases with increasing dimensionality of the parameter space. We can show directly that is happening here.
Let’s first modify the model to adopt an improper uniform prior on the positive orthant, which just requires removing a line from the model block of the Stan code (if no prior is specified it is implicitly made uniform, which in this case is improper):
parameters {
vector<lower=0.>[M] beta;
}
model {
y ~ normal(X * beta, sdev);
}
Modern Bayesians usually recommend against the use of improper priors, but since the likelihood in this model is Gaussian the posterior is proper. The virtue of this model is that the maximum a posteriori (MAP) solution is also the maximum likelihood solution. Stan can do general nonlinear optimization as well as sampling, using (by the way) conventional gradient based algorithms rather than Monte Carlo optimization.
First, I run the sampler on this model, and extract the posterior:
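The original code chunk isn't shown here, so this is a sketch of what the sampling call might look like with rstan. The file name and the `stan_dat` list (holding `X`, `y`, `sdev`, and the dimensions) are assumptions:

```r
# Sketch of the sampling step (file name and data list are assumptions)
library(rstan)
fit <- stan(file = "~/spmcode/nnfit_uniform.stan", data = stan_dat,
            chains = 4, iter = 2000)
post <- extract(fit)    # posterior draws, including post$beta
```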
I do this to get a starting guess for the optimizer and because we’ll do some comparisons later. For some reason rstan’s optimization function requires as input a stanmodel object rather than a file name, so we call stan_model() to get one. Then I call the optimizer, adjusting a few parameters to try to get to a better solution than with defaults:
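Again the actual chunk is elided, so here is a hedged sketch of the optimizer call. The tolerance and iteration values are illustrative, not the ones actually used:

```r
# Sketch of the optimization step (file/data names and tuning values are
# assumptions). stan_model() compiles the model; optimizing() finds the MAP,
# which equals the ML solution under the improper uniform prior.
sm <- stan_model(file = "~/spmcode/nnfit_uniform.stan")
opt <- optimizing(sm, data = stan_dat,
                  init = list(beta = colMeans(post$beta)),  # start from posterior mean
                  iter = 1e5, tol_rel_grad = 1e3,           # tightened vs. defaults
                  as_vector = FALSE)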
Now let’s use these to initialize the chains for another Stan model run (I also adjusted some of the control parameters to get rid of warnings from the first run):
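A sketch of how such a run might be set up (the control values are placeholders for the adjustments mentioned, not the actual ones):

```r
# Initialize all chains at the approximate ML solution found by the optimizer
ml_init <- function() list(beta = opt$par$beta)
fit2 <- stan(file = "~/spmcode/nnfit_uniform.stan", data = stan_dat,
             chains = 4, iter = 2000, init = ml_init,
             control = list(adapt_delta = 0.95, max_treedepth = 12))
```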
Now examine trace plots for the first few warmup draws for a few parameters. The top row traces the first 100 warmup iterations. The right 4 panels are initialized with the approximate maximum likelihood solution; the left are from the first randomly initialized run. The ML initialized chains quickly move away from that point and reach stationary, well mixed distributions in just a few iterations. The randomly initialized chains take slightly longer to reach the same stationary distributions.
Stan runs with improper uniform prior – sample traceplots (TL) warmup iterations with random inits (TR) warmup iterations with nnls solution inits (BL) post-warmup with random inits (BR) post-warmup with nnls solution inits
And the resulting interval plot for the model coefficients isn’t noticeably different from the first run:
Coefficient posteriors with improper uniform prior
Nowadays Bayesians generally advise using proper priors that reflect available domain knowledge. I had done that originally but without thinking that the prior wasn’t quite right given the data manipulations that I had performed. A better prior that incorporates “domain” knowledge without being too specific might be something like a truncated normal with the same scale parameter for each coefficient, which in this case can reasonably be set to 1. That’s accomplished by adding back just one line to the model block:
model {
beta ~ normal(0, 1.);
y ~ normal(X * beta, sdev);
}
which leads to the following posterior interval plot compared to the original:
(L) N(0, 1) prior (R) original variable width prior
See the difference? Close inspection will show some minor ones, but it’s not immediately clear if they’re larger than expected from sampling variations.
Can we induce sparsity?
Here are a couple ideas I thought might work but didn’t. One way to induce sparsity in regression models is called the lasso, or more technically L-1 penalized regression. This is just least squares with an added penalty proportional to the absolute values of the model parameters. A more or less equivalent idea in Bayesian statistics is to add a double exponential or Laplacian prior. Since the parameters here are non-negative we can use an exponential prior instead.
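The exponential-prior variant isn't listed above; following the pattern of the other model blocks it would look something like this (the rate of 1 is an assumed hyperparameter):

```stan
model {
  beta ~ exponential(1.);        // sparsity-encouraging prior on non-negative beta
  y ~ normal(X * beta, sdev);
}
```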
Yet another possible prior inspired by “domain knowledge” is to notice that the coefficient values I picked sum to 1. A sum to 1 constraint can be programmed in Stan by declaring the parameter vector to be a simplex. Leaving out an explicit prior will give the parameters implicitly uniform priors.
parameters {
simplex[M] beta;
}
model {
y ~ normal(X * beta, sdev);
}
Interestingly enough for possible future reference this model ran much faster than the others and threw no warnings, but it didn’t produce a sparse solution:
More posterior interval plots (L) exponential prior (R) unit simplex parameter vector
Back to the drawing board and with a bit of web research it turns out there is a fairly standard sparsity inducing prior known as the “horseshoe” described in yet another Stan case study by another principal developer, Michael Betancourt. The parameter and model block for this looks like
parameters {
vector<lower=0.>[M] beta;
real<lower=0.> tau;
vector<lower=0.>[M] lambda;
}
model {
tau ~ cauchy(0, 0.05);
lambda ~ cauchy(0, 1.);
beta ~ normal(0, tau*lambda);
y ~ normal(X * beta, sdev);
}
This is the “classical” horseshoe rather than the preferred prior that Betancourt refers to as the “Finnish horseshoe.” This took some fiddling with hyperparameter values to produce something sensible that appeared to converge, but it did indeed produce a sparse posterior:
Posterior intervals – Horseshoe prior
This run did find two posterior modes, and this illustrates the typical Stan behavior I mentioned at the beginning of this post. Two chains found a mode where beta[49] is large and beta[51] near 0, and two found one with the values reversed. There’s no jumping between modes within a chain:
A bimodal posterior
Finally, here are the distributions of log-likelihood for the 6 priors considered in the last two posts:
Posterior distributions of log-likelihood for 6 priors
Should we care about sparsity? In statistics and machine learning sparsity is usually considered a virtue because one often has a great many potential predictors, only a few of which are thought to have any real predictive value. In star formation history modeling on the other hand astronomers usually think of star formation as varying relatively smoothly over time, at least when averaged over galactic scales. Star formation occurring in a few discrete bursts widely separated over cosmic history seems unphysical, which makes interpreting (or for that matter believing) the results challenging.
One takeaway from this exercise is that in a data rich environment the details of the prior don’t matter too much provided it’s reasonable, which in the context of star formation history modeling specifically excludes one that induces sparsity — despite the fact that in this contrived example the sparsity inducing prior recovers the inputs accurately while generic priors do not. Of course maximum likelihood estimation does even better at recovering the inputs with far less computational expense.