tag:blogger.com,1999:blog-10745025151861280832024-03-04T23:33:18.051-05:00Hogg's IdeasIdeas I am unlikely to implement; presented here, free for the taking.Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.comBlogger41125tag:blogger.com,1999:blog-1074502515186128083.post-16791290640175419482019-06-08T07:27:00.000-04:002019-06-08T07:27:09.124-04:00detect the stars during the day<p>
The daytime sky (the blue sky) is bright but transparent. That means that starlight still reaches us during the day; the stars are just swamped by the bright sky. Very good visual astronomers I know have once or twice pointed out Venus to me during the day. I'm not so good as that!</p>
<p>
I think it would be an interesting and valuable project for astronomy education, and maybe also for image processing, to do the following: Take a large number of digital-camera images of the blue sky during some short period of one sunny day, then post-process them to detect the stars. That is, make an image of what you would have seen if the Sun had gone dark.</p>
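To see why this should work at all, here is a minimal numpy sketch (every number in it is invented, and the frames are assumed perfectly aligned): a star whose per-frame signal is far below the single-frame sky noise comes out at high significance in the mean of many frames, because the noise in the stack shrinks like the square root of the number of frames.

```python
import numpy as np

rng = np.random.default_rng(42)

n_frames = 10_000      # number of daytime exposures (invented)
sky_level = 1000.0     # sky counts per pixel per frame (invented)
star_flux = 5.0        # star counts per frame: ~0.16 sigma per frame, undetectable
star_pixel = (8, 8)

# Perfectly aligned 16x16 frames: Poisson sky everywhere, one faint star.
scene = np.full((16, 16), sky_level)
scene[star_pixel] += star_flux

# Co-add: the per-pixel noise in the mean is sqrt(sky_level / n_frames).
stack = rng.poisson(scene, size=(n_frames, 16, 16)).mean(axis=0)

# Subtract a robust sky estimate and look for the star.
residual = stack - np.median(stack)
print(np.unravel_index(np.argmax(residual), residual.shape))  # the star's pixel
print(residual[star_pixel])                                   # ~ star_flux
```

In the real project the alignment (hence the Moon idea below) and the slowly varying sky would be the hard parts; this sketch assumes both away.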
<p>
There was a time in my life when I thought this project was easy. I have made some superficial attempts, and I no longer think that. One idea I have (but haven't pursued) is to do this on a day when the Moon is visible, and use the Moon as a stacking reference. It isn't a perfect reference (it's a wandering star) but it's probably good enough for a first guess at the alignment. Because the difficulty of this project depends on the ratio of intensity (sky surface brightness) to flux (star brightness), which have different units (!), I think this project must be easier to do with a telephoto lens.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com1tag:blogger.com,1999:blog-1074502515186128083.post-45856223550201958412018-02-10T11:42:00.001-05:002018-02-10T11:42:32.836-05:00apply FM demodulation to asteroseismic modes<p>
Simon J Murphy (Sydney) has made some <a href="https://arxiv.org/abs/1712.00022">great discoveries of binary companions</a> to delta-Scuti stars by looking at asteroseismic mode phase shifts: As the star orbits the system barycenter, the time-of-flight from the star to Earth is modulated by the binary orbit. That appears as time delays in the asteroseismic oscillation mode phase. He can analyze each mode individually and see the same orbital distortion in every mode.</p>
<p>
Now think of the asteroseismic mode as a carrier frequency. Then the orbit is modulating the frequency of the mode. Murphy phrases things in terms of time delays (light-travel time) but the effect can also be phrased in terms of redshift and blueshift of the mode. The light-travel-time delay is just an integral of the Doppler shift. That Doppler shift modulates the mode frequency.</p>
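Here is a toy demodulator along these lines (no real <i>Kepler</i> data; the cadence, mode frequency, orbital period, and modulation amplitude are all invented): frequency-modulate a carrier with a slow sinusoidal orbit, then read the instantaneous frequency off the unwrapped phase of the analytic signal.

```python
import numpy as np
from scipy.signal import hilbert

dt = 0.01                  # cadence in days (invented)
t = np.arange(0.0, 400.0, dt)
f0 = 10.0                  # mode ("carrier") frequency, cycles per day (invented)
f_orb = 0.01               # orbital frequency: a 100-day orbit (invented)
df = 0.05                  # peak Doppler frequency shift, cycles per day (invented)

# The mode phase is 2 pi times the integral of f0 + df * cos(2 pi f_orb t).
phase = 2 * np.pi * (f0 * t + df / (2 * np.pi * f_orb) * np.sin(2 * np.pi * f_orb * t))
flux = np.cos(phase)

# FM demodulation: instantaneous frequency = d(unwrapped analytic phase)/dt.
inst_phase = np.unwrap(np.angle(hilbert(flux)))
inst_freq = np.gradient(inst_phase, dt) / (2 * np.pi)

# Trim Hilbert-transform edge effects, then read off carrier and modulation.
core = inst_freq[2000:-2000]
recovered_f0 = core.mean()
recovered_df = np.sqrt(2.0) * core.std()   # amplitude of a sinusoidal modulation
print(recovered_f0, recovered_df)          # ~10.0 and ~0.05
```

With real data you would presumably fit the mode frequency first and track phase residuals per mode, as Murphy does; the demodulation step itself is the same.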
<p>
So <i>these stars are like FM radios</i> with carrier frequencies at the asteroseismic mode frequencies, and the signal (the song they are playing) encoded as a continuous frequency shift. So let's apply the various methods for <a href="http://www.radio-electronics.com/info/rf-technology-design/fm-reception/fm-demodulation-detection-overview.php">FM-radio demodulation</a> to these signals and see the companions! This project is super straightforward: Get <i>Kepler</i> data; code up demodulator; run; plot.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com1tag:blogger.com,1999:blog-1074502515186128083.post-75558312347175939432017-07-09T08:24:00.000-04:002017-07-09T08:24:09.035-04:00Do co-eval stars have aligned spin vectors?<p>
My people tell me that at the <i>Kepler / K2</i> Science Conference this year, there was discussion of the possibility that stars that are born together have aligned spin vectors (does anyone have a reference for this?). The result is controversial because the spin vector is hard to measure! Indeed, there are only hints, and only of the projection of the spin vector onto the line of sight. But if there is any conceivable signal, I have two suggestions: The first is to replace the physics-driven model with a data-driven model: Are the asteroseismic signatures of co-eval stars more similar than those of randomly paired stars, and if so, do those similarities map onto anything interpretable? The second is to look at the <a href="https://arxiv.org/abs/1612.02440">comoving pairs of Semyeong Oh</a> and see if we see such similarities across any reasonable set of pairs. This leads me to ask: How many of the Oh pairs are there inside the <i>Kepler</i> field?</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com0tag:blogger.com,1999:blog-1074502515186128083.post-18908600673987459402016-08-02T05:40:00.000-04:002016-08-02T05:40:04.240-04:00infer the expansion of the Universe without distances<p>
As I note <a href="http://hoggresearch.blogspot.de/2016/07/the-universe-has-been-expanding-since.html">in this blog post</a>, it is possible to infer that the Universe is expanding, even if you have only velocities and no distances. The idea is that you would marginalize out all distances and the Hubble Constant, and do a likelihood test (or equivalent, like, say, cross-validation). The two hypotheses would be a non-expanding gas of galaxies with finite velocity dispersion and a well-defined mean rest frame, through which we are moving (at unknown speed), <i>vs</i> an expanding gas, again with finite velocity dispersion, through which we are also moving. I think the test is very easy to set up, the problem is very easy to solve, and the result would show that it only takes a handful of velocities (but very good sky coverage!) to demonstrate expansion.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com0tag:blogger.com,1999:blog-1074502515186128083.post-40282623979630881142016-01-04T15:05:00.001-05:002016-01-04T15:05:07.762-05:00use exoplanetary transits to infer the Kepler PSF<p>
Like many astronomical missions and data sets, the NASA <i>Kepler</i> satellite imaging is crowded; there is no part of the imaging that contains reliably isolated stars. For this reason, it is hard to infer (from the data) the point-spread function of the instrument simply; any PSF inference requires modeling each image as a crowded scene of many overlapping stars (of unknown brightnesses and positions). However, when a star is subject to a planetary or stellar transit, the <i>change</i> in the scene during the transit should be modeled well as a PSF-shaped deficit. So we should be able to infer the PSF from these deficits. The transits are rare, so they rarely overlap. They are also very faint (low-amplitude) events, but <i>(a)</i> the eclipsing-binary stellar transits are not so faint (and very common), and <i>(b)</i> <i>Kepler</i> has good signal-to-noise even on planetary transits.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com1tag:blogger.com,1999:blog-1074502515186128083.post-91958638038418051512015-01-10T02:31:00.000-05:002015-01-10T02:33:18.723-05:00video magnification on eta Carinae<p>We should apply the <a href="http://people.csail.mit.edu/mrub/vidmag/">video magnification</a> methods of Bill Freeman (MIT) to the spectacular <a href="http://etacar.umn.edu/">HST images</a> of the expanding star eta Carinae. The motion in the HST images is a bit subtle, but it sure wouldn't be if we fired up the Freeman tricks. One of the cleverest of his tricks is to take the Fourier Transform of each image and then magnify the phase differences. This ensures that the thing being magnified is global and smooth, in some sense. 
The one respect in which Freeman's methods would have to be generalized is that the data are not from a single video source: The different images were taken through different filters on different cameras and with different exposure times.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com0tag:blogger.com,1999:blog-1074502515186128083.post-36080749065816129602014-07-10T01:11:00.000-04:002014-07-10T01:11:39.225-04:00predict spectra using photometry<p>The <i>SDSS</i> spectra can be thought of as "labels" for objects detected in the imaging, each of which has <i>ugriz</i> photometry and some shape and position parameters. Can we train a model with this enormous amount of data to predict the spectra using the photometry? One thing that says "yes" is that photometric redshifts (for galaxies and quasars), photometric distances (for stars), and photometric temperatures and metallicities (for stars) all work well. One thing that says "no" is that there is far more information (in a technical sense) in the spectra than in the photometry. All this said, it is an absolutely great "Data Science" demonstration project, and it might create some new ideas for <i>LSST</i>-era astrophysics projects. In principle, it will also get us predictions about the spectral types and redshifts of many objects that lack spectra!</p>
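As a toy version of that experiment (entirely fake data, not SDSS: the two basis spectral shapes, the band edges, and the noise level are all invented), one can ridge-regress many-pixel spectra against five-band fluxes and check the predictions on held-out objects:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_pix, n_band = 5000, 100, 200, 5

# Fake spectra: every object is a random mix of two basis spectral shapes.
wave = np.linspace(0.0, 1.0, n_pix)
basis = np.vstack([np.exp(-0.5 * ((wave - 0.3) / 0.1) ** 2),
                   np.exp(-0.5 * ((wave - 0.7) / 0.15) ** 2)])
w = rng.uniform(0.5, 2.0, size=(n_train + n_test, 2))
spectra = w @ basis

# Fake photometry: five top-hat bands integrated over each spectrum, plus noise.
edges = np.linspace(0.0, 1.0, n_band + 1)
bands = np.array([(wave >= lo) & (wave < hi)
                  for lo, hi in zip(edges[:-1], edges[1:])], dtype=float)
phot = spectra @ bands.T + 0.01 * rng.normal(size=(n_train + n_test, n_band))

# Ridge regression from the 5 fluxes to the 200-pixel spectrum.
X, Y = phot[:n_train], spectra[:n_train]
ridge = 1e-3
W = np.linalg.solve(X.T @ X + ridge * np.eye(n_band), X.T @ Y)

pred = phot[n_train:] @ W
rms = np.sqrt(np.mean((pred - spectra[n_train:]) ** 2))
print(rms, spectra.std())   # prediction error far below the spectrum-to-spectrum scatter
```

Here the regression wins only because the fake spectra live in a two-dimensional family; the interesting question for real SDSS spectra is how much of their variety is similarly predictable from <i>ugriz</i>.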
Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com2tag:blogger.com,1999:blog-1074502515186128083.post-8193041306267373392014-06-08T13:42:00.001-04:002014-06-08T13:43:53.993-04:00forward scattering by dust<p>Is the Universe transparent? This issue has been interesting me for many years; the answer is "yes" of course, but just <i>how</i> transparent? There are many ways to look at transparency, but a <a href="http://hoggresearch.blogspot.com/2014/06/time-series-x-ray-scattering.html">talk by Corrales last week</a> got me thinking about it again: Take some point sources, regress their images against brightness, color, and line-of-sight dust amplitude. Do you see a scattering halo that is correlated with dust? In the optical and UV, I think you won't, based on unpublished work I have done previously. However, there might be a <i>tiny</i> signal. Also, there is more likely to be an effect in the x-ray, which (with <i>Chandra</i>) is accessible. And there is abundant archival data for this project in every waveband from infrared to X-ray.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com0tag:blogger.com,1999:blog-1074502515186128083.post-53514148924349664922013-11-16T12:50:00.000-05:002013-11-17T15:04:00.004-05:00extract low-resolution spectra from diffraction spikes<p>In imaging from a telescope with a secondary on a spider (for example, in <i>HST</i> imaging), bright stars show diffraction spikes. More generally, the outer parts of the point-spread function are related to the Fourier Transform of the small-scale features in the entrance aperture. The scale at which this Fourier Transform imprints on the focal plane is linearly related to wavelength (just as the angular size of the diffraction-limited PSF goes as wavelength over aperture).</p><p>This means that the diffraction spikes coming from stars <i>contain low-resolution spectra</i> of those stars! 
That is, you ought to be able to extract spectral information from the spikes. It won't be high quality, but it should permit measurements of colors or temperatures or SED slopes even with single-band imaging, and aid in star–quasar classification. Indeed, in HST press-release images, you can see that the diffraction spikes are little "rainbows" (see below).</p><p>The project is to take wide-band imaging from HST, in fields where stars have been measured either in multiple bands or else spectroscopically, and show that some of the scientific results could have been obtained from the single wide band directly, using the diffraction features.</p>
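The extraction step might look like the following sketch. It leans on one idealizing assumption that would need checking against the real optics: that a spike's monochromatic radial profile is a single fixed pattern whose scale is proportional to wavelength, so the observed broad-band profile is a linear blend of stretched copies and a coarse SED follows from linear least squares. The sinc-squared pattern, the wavelength grid, and the noise level are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(u):
    # Invented monochromatic spike pattern: a sinc^2, as for a thin straight vane.
    return np.sinc(u) ** 2

r = np.linspace(1.0, 30.0, 300)             # radius along the spike (arbitrary units)
lams = np.array([0.40, 0.55, 0.70, 0.85])   # four coarse wavelength bins (invented)

# Design matrix: column j is the pattern stretched in proportion to wavelength j.
A = np.stack([g(r / (10.0 * lam)) for lam in lams], axis=1)

sed_true = np.linspace(1.0, 2.0, lams.size)   # a "red" star: SED rising to the red
profile = A @ sed_true + 1e-5 * rng.normal(size=r.size)   # observed single-band spike

# Recover the coarse SED from the spike profile alone, by least squares.
sed_fit, *_ = np.linalg.lstsq(A, profile, rcond=None)
print(np.round(sed_fit, 2))   # should be close to sed_true
```

In practice the pattern g would itself have to be calibrated (from bright stars), and the conditioning of A sets how many wavelength bins are recoverable.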
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdbnDPIJUjuL2WPVU2yQKzJX1UbGxILPpcb4yeDV2Iverh7pGD7YTlN4tMz-4DC_aSI0kr4f7JX8qlKBN6P6Uv9v78Xt1CHpWtq5FlexOC8Nij3IMjbavqrA88isiKO4poNHUQEcRi8xo/s1600/Screen+Shot+2013-11-16+at+12.53.27+PM.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdbnDPIJUjuL2WPVU2yQKzJX1UbGxILPpcb4yeDV2Iverh7pGD7YTlN4tMz-4DC_aSI0kr4f7JX8qlKBN6P6Uv9v78Xt1CHpWtq5FlexOC8Nij3IMjbavqrA88isiKO4poNHUQEcRi8xo/s320/Screen+Shot+2013-11-16+at+12.53.27+PM.png" /></a>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com1tag:blogger.com,1999:blog-1074502515186128083.post-88640715895348789742013-10-13T14:50:00.001-04:002013-11-16T13:31:06.147-05:00measure the speed of light with Kepler<p>Okay this idea is <i>dumb</i> but I would love to see it done: As <i>Kepler</i> goes around the Sun (no, not the Earth, the Sun), it is sometimes flying <i>towards</i> its field and sometimes <i>away</i>. This leads to classical <a href="http://en.wikipedia.org/wiki/Aberration_of_light">stellar aberration</a> (discovered by Bradley in the 1700s; Bradley was a genius, IMHO), in which the field-of-view (or plate scale) changes with the projection of the velocity vector onto the field-center pointing vector. A measurement of this would only take a day or two of hard work, and would provide a measure of the speed of light in units of the velocity of <i>Kepler</i> in its orbit.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com4tag:blogger.com,1999:blog-1074502515186128083.post-10199412204172447502013-06-22T13:52:00.000-04:002013-06-22T13:52:19.041-04:00Kepler as an insanely expensive thermometer!<p>The <i>Kepler</i> spacecraft is taking incredibly precise photometric data on tens of thousands of stars for the purpose of detecting exoplanets.
For many reasons, the lightcurves it returns are sensitive to the temperature of the spacecraft: The focus and astrometric map (camera calibration) of the camera change with temperature, and the detector noise properties might be evolving too. This wouldn't be a problem (it's a space mission) but the spacecraft changes its sun angle abruptly to perform high-gain data downlink about once per month, and the temperature recovery profile depends on the orientation of the spacecraft post-downlink. As a result, there are sub-percent-level traces of the temperature history imprinted on every lightcurve. Each lightcurve responds to temperature differently, but each is sensitive.</p><p>Of course the spacecraft keeps housekeeping data with temperature information, but they haven't been extremely useful for calibration purposes. Why not? The onboard temperature sensors are low in signal-to-noise or dynamic range, whereas the lightcurves are good (sometimes) at the part-in-hundred-thousand level. That is, there is <i>far more temperature information</i> in the lightcurves than in the direct temperature data! Here's the project:</p><p>Treat the housekeeping data about temperature as providing <q>noisy labels</q> on the lightcurve data. Find the properties of each lightcurve that best predict those labels. Combine information from many lightcurves to produce an extremely precise, high signal-to-noise temperature history for the spacecraft. Bonus points for constraining not just the temperature history but a thermal model too.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com1tag:blogger.com,1999:blog-1074502515186128083.post-57298382043950284152012-11-25T16:20:00.003-05:002012-11-25T16:20:39.096-05:00limits to ground-based photometry?<p>A conversation with Nick Suntzeff (TAMU) in Lawrence, KS, brought up the great idea (Nick's, not mine) to figure out <i>why ground-based photometry of stars never gets better than a few milli-mags</i> in precision. 
Seriously people, <i>Kepler</i> is at the part-per-million or better level. Why can't we do the same from the ground? Why not at least part-per-hundred-thousand? Is it something about the scintillation, the transparency, the point-spread function, the detector temperature, scattered light, sky emission, sky lines, what? Not sure how to proceed, but the project could make the next generation of projects <i>orders of magnitude</i> less expensive. I guess I would start by taking images of a star field with many different (very different) exposure times and at different twilight levels (Suntzeff's idea again). Could it be that all we need is better software?</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com1tag:blogger.com,1999:blog-1074502515186128083.post-52566589598024017792012-10-24T21:28:00.000-04:002012-10-24T21:31:05.673-04:00find or rule out a periodic universe (via structure)<p>Questions from Kilian Walsh (NYU) today reminded me of an old, abandoned idea: Look for evidence of a periodic universe (topological non-triviality) in the large-scale structure of galaxies. Papers by Starkman (CWRU) and collaborators (one of several examples <a href="http://arxiv.org/abs/astro-ph/0310233">is here</a>) claim to rule out most interesting topologies using the CMB alone. I don't doubt these papers but <i>(a)</i> they effectively make very strong predictions for the large-scale structure and <i>(b)</i> if CMB (or topology) theory is messed up, maybe the constraints are over-interpreted.</p><p>The idea would be to take pairs of finite patches of the observed large-scale structure and look to see if there are shifts, rotations, and linear amplifications (to account for growth and bias evolution) that make their long-wavelength (low-pass filtered) density fields match. Density field tracers include the LRGs, the Lyman-alpha forest, and quasars. 
You need to use (relatively) high-redshift tracers if you want to test conceivably relevant topologies.</p><p>Presumably all results would be negative; that's fine. But one nice side effect would be to find structures (for example clusters of galaxies) residing in very similar environments, and by <q>similar</q> I mean in terms of full three-dimensional structure, not just mean density on some scale. That could be useful for testing non-linear growth of structure.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com0tag:blogger.com,1999:blog-1074502515186128083.post-10036566682814038542012-10-21T21:21:00.003-04:002012-10-21T21:22:11.553-04:00find LRG-LRG double redshifts<p>Vivi Tsalmantza and I have found many double redshifts in the <i>SDSS</i> spectroscopy (a few examples are <a href="http://arxiv.org/abs/1201.3370">published here</a> but we have many others) by modeling quasars and galaxies with a data-driven model and then fitting new data with a mixture of two things at different redshifts. We have found that finding such things is straightforward. We have also found that among all galaxies, luminous red galaxies are the easiest to model (that's no breakthrough; it has been known for a <a href="http://arxiv.org/abs/astro-ph/0212087">long time</a>).</p><p>Put these two ideas together and what have you got? An incredibly simple way to find double-redshifts of massive galaxies in spectroscopy. And the objects you find would be interesting: Rarely have double redshifts been found without emission lines (LRG spectra are almost purely stellar with no nebular lines), and because the LRGs sometimes host radio sources you might even get a Hubble-constant-measuring <q>golden lens</q>. For someone who knows what a spectrum is, this project is one week of coding and three weeks of CPU crushing. For someone who doesn't, it is a great learning project. If you get started, email me, because I would love to facilitate this one! 
I will happily provide consultation and CPU time.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com1tag:blogger.com,1999:blog-1074502515186128083.post-70132145941318568122012-10-10T23:48:00.000-04:002012-10-10T23:48:27.589-04:00find or rule out ram pressure stripping in galaxy clusters<p>We know a lot about the <i>scalar</i> properties of galaxies as a function of clustocentric distance: Galaxies near cluster centers tend to be redder and older and more massive and more dense than galaxies far from cluster centers. We also know a lot about the <i>tensor</i> properties of galaxies as a function of clustocentric distance: Background galaxies tend to be tangentially sheared and galaxies in or near the cluster have some fairly well-studied but extremely weak alignment effects. What about <i>vector</i> properties?</p><p>Way back in the day, star NYU undergrad Alex Quintero (now at Scripps doing oceanography, I think) and I looked at the morphologies of galaxies as a function of clustocentric position, with the hopes of finding offsets between blue and red light (say) in the direction of the cluster center. These are generically predicted if ram-pressure stripping or any other pressure effects are acting in the cluster or infall-region environments. We developed some <i>incredibly</i> sensitive tests, found nothing, and failed to publish (yes I know, I know).</p><p>This is worth finishing and publishing, and I would be happy to share all our secrets. It would also be worth doing some theory or simulations or interrogating some existing simulations to see more precisely what is expected. I think you can probably rule out ram-pressure stripping as a generic influence on cluster members, although maybe the simulations would say you don't expect a thing. By the way, offsets between 21-cm and optical are even <i>more</i> interesting, because they are seen in some cases, and are more directly relevant to the question. 
However, it is a bit harder to assemble the unbiased data you need to perform a sensitive experiment.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com0tag:blogger.com,1999:blog-1074502515186128083.post-5689674343819788832012-10-09T21:50:00.003-04:002012-10-09T21:53:15.886-04:00cosmology with finite-range gravity<p>Although the Nobel Prize last year went for the accelerated expansion of the Universe, in fact <i>acceleration</i> is not a many-sigma result. What <i>is</i> a many-sigma result is that the expansion is <i>not decelerating</i> by as much as it should be given the mass density. This raises the question: Could gravity be weaker than expected on cosmological scales? Models with, say, an exponential cutoff of the gravitational force law at long distances are theoretically ugly (they are like massive graviton theories and usually associated with various pathologies) but as <i>empirical objects</i> they are nice: A model with an exponentially suppressed force law at large distance is predictive and simple.</p><p>The idea is to compute the detailed expansion history and linear growth factor (for structure formation) for a homogeneous and isotropic universe and compare to existing data. By how much is this ruled out relative to a cosmological-constant model? The answer may be <q>a lot</q> but if it is only by a few sigma, then I think it would be an interesting straw-man. For one, it has the same number of free parameters (one length scale instead of one cosmological constant). For two, it would sharpen up the empirical basis for acceleration. For three, it would exercise an idea I would like to promote: Let's choose models on the joint basis of theoretical reasonableness and computability, not theoretical reasonableness <i>alone</i>! 
If we had spent the history of physics with theoretical niceness as our top priority, we would never have got the Bohr atom or quantum mechanics!</p><p>One amusing note is that if gravity <i>does</i> cut off at large scales, then in the very distant future, the Universe will evolve into an inhomogeneous fractal. Fractal-like inhomogeneity is something <a href="http://arxiv.org/abs/astro-ph/0411197">I have argued against</a> for the present-day Universe.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com2tag:blogger.com,1999:blog-1074502515186128083.post-4565906006292401072012-10-06T14:17:00.001-04:002012-10-06T14:17:35.972-04:00cosmological simulation as deconvolution<p>After a talk by Matias Zaldarriaga (IAS) about making simulations faster, I had the following possibly stupid idea: It is possible to speed up simulations of cosmological structure formation by simulating not the full growth of structure, but just the <i>departures</i> away from a linear or quadratic approximation to that growth. As structure grows, smooth initial conditions condense into very high-resolution and informative structure. First observation: That growth looks like some kind of deconvolution. Second: The better you can approximate it with fast tools, the faster you can simulate (in principle) the departures or errors in the approximation. So let's fire up some machine learning!</p><p>The idea is to take the initial conditions, the result of linear perturbation theory, the result of second-order perturbation theory, and a full-up simulation, and try to infer each thing from the other (with some flexible model, like a huge, sparse linear model, or some mixture of linear models or somesuch). Train up and see if we can beat other kinds of approximations in speed or accuracy. Then see if we can use it as a basis for speeding full-precision simulations. 
<i>Warning:</i> If you don't do this carefully, you might end up <i>learning something about gravitational collapse in the Universe!</i> My advice, if you want to get started, is to ask Zaldarriaga for the inputs and outputs he used, because he is sitting on the ideal training sets for this, and may be willing to share.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com0tag:blogger.com,1999:blog-1074502515186128083.post-7689364298149411282012-10-03T11:50:00.000-04:002012-10-03T11:50:15.495-04:00compare EM to other optimization algorithms<p>For many problems, the computer scientists tell us to use expectation maximization. For example, in fitting a distribution with a mixture of Gaussians, EM is the bee's knees, apparently. This surprises me, because the EM optimization is so slow and predictable; I am guessing that a more aggressive optimization might beat it. Of course a more aggressive optimization might not be protected by the same guarantees as EM (which is super stable, even in high dimensions). It would be a service to humanity to investigate this and report places where EM can be beat. Of course this may all have been done; I would ask my local experts before embarking.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com3tag:blogger.com,1999:blog-1074502515186128083.post-67089949386226654332012-09-19T20:21:00.002-04:002012-09-19T20:21:36.051-04:00Can you fix charge-transfer inefficiency without a theory-driven model?<p>The <i>Gaia</i> mission needs to centroid stars with accuracies at the 10<sup>-3</sup>-pixel level. At the same time, the detector will be affected by charge-transfer inefficiency degradation as the instrument is battered by cosmic radiation; this causes significant magnitude-dependent centroid shifts. The team has been showing that with reasonable models of charge-transfer inefficiency, they can reach their scientific goals. 
One question I am interested in—a boring but very important question—is whether it is possible to figure out and fix the CTI issues <i>without</i> a good model up-front. (I am anticipating that the model won't be accurate, although the team <i>is</i> analyzing lab CCDs subject to sensible, realistic damage.) The shape and magnitude of the effects on the point-spread function and positional offsets will be a function of stellar magnitude (brightness) and position on the chip. They might also have something to do with what stars have crossed the chip in advance of the current star. The idea is to build a non-trivial fake data stream and then analyze it without knowing what was put in: Can you recover and model all the effects at sufficient precision after learning the time-evolving non-trivial model on the science data themselves? The answer—which I expect to be <q>yes</q>—has implications for <i>Gaia</i> and every precision experiment to follow.</p><p>In order to work on such subjects I built a one-dimensional (yes the sky is a circle, not a 2-sphere) <a href="https://github.com/davidwhogg/Gaia"><i>Gaia</i> simulator</a>. It currently doesn't do what is needed, so fork it and start coding! Or build your own. Or get serious and make a full mission simulator. But my point is not <q>Will <i>Gaia</i> work?</q> it is <q>Can we make <i>Gaia</i> analysis less dependent on mechanistic CCD models?</q> In the process we might make it more precise overall. 
Enhanced goal: Analyze all of <i>Gaia</i>'s mission choices with the model.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com1tag:blogger.com,1999:blog-1074502515186128083.post-4497747844997209712012-09-16T22:39:00.002-04:002012-09-16T22:39:48.308-04:00scientific reproducibility police<p>At coffee this morning, Christopher Stumm (Etsy), Dan Foreman-Mackey (NYU), and I worked up the following idea of Stumm's: Every week, on a blog or (I prefer) in a short arXiv-only white paper, one refereed paper is taken from the scientific literature and its results are reproduced, as well as possible, given the content of the paper and the available data. I expect almost every paper to fail (that is, not be reproducible), of course, because almost every paper contains proprietary code or data or else is too vague to specify what was done. The astronomical literature is particularly interesting for this because many papers are based on public data; for those it comes down only to code and procedures; indeed I remember Bob Hanisch (STScI) giving a talk at <i>ADASS</i> showing that it is very hard to reproduce the results of typical papers based on <i>HST</i> data, despite the fact that all the data and almost all the code people use on them are public.</p><p>Stumm, Foreman-Mackey, and I discussed economic models and incentive models to make this happen. I think whoever did this would succeed scientifically, if he or she did it well, both because it would have huge impact and because it would create many new insights. But on the other hand it would take significant guts and a hell of a lot of time. If you want to do it, sign me up as one of your reproducibility agents! I think anyone involved would learn a huge amount about the science (more than they learn about reproducibility). In the end, it is the community that would benefit most, though. 
Radical!</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com3tag:blogger.com,1999:blog-1074502515186128083.post-78323579229154721692012-09-15T10:39:00.000-04:002012-09-15T10:39:03.748-04:00standards for point-spread-function meta data<p>When we share astronomical images, we expect the images to have standards-compliant descriptions of their astrometric calibration—the mapping between image position and sky position—in their headers. Naturally, it is just as important to have descriptions of the point-spread-function, for almost any astronomical activity (like photometry, source matching, or color measurement). And yet we have no standards. (Even the WCS standard for astrometry is seriously out of date.) Develop a PSF standard!</p><p>Requirements include: It should be very flexible. It should permit variations of the PSF with position in the image. It should have a specified relationship between the stellar position and the position of the mean, median, or mode of the PSF itself. That latter point relates to the fact that astrometric distortions can be sucked up into PSF variations if you permit the mode of the PSF to drift relative to the star position. I like that freedom, but whether you permit it or not it should be explicit.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com0tag:blogger.com,1999:blog-1074502515186128083.post-56616450006306371522012-09-12T15:18:00.003-04:002012-09-12T15:18:34.975-04:00impute missing data in spectra<p>Let me say at the outset that I <i>don't</i> think that imputing missing data is a good idea in general. However, missing-data imputation is a form of cross-validation that provides a very good test of models or methods. My suggestion would be to take a large number of spectra (say stars or galaxies in SDSS), censor patches (multi-pixel segments) of them randomly, saving the censored patches. 
Build data-driven models using the uncensored data by means of PCA, <a href="http://arxiv.org/abs/1201.3370">HMF</a>, mixture-of-Gaussians EM, and <a href="http://code.google.com/p/extreme-deconvolution/">XD</a>, at different levels of complexity (different numbers of components). Compare them on their ability to reconstruct the censored data. Then use the best of the methods as your spectral models for, say, redshift identification! Now that I type that I realize the best target data are the LRGs in <i>SDSS-III BOSS</i>, where the (low) redshift failure rate could be pushed lower with a better model. Advanced goal: Go hierarchical and infer/understand priors too.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com0tag:blogger.com,1999:blog-1074502515186128083.post-3538479514099056562012-09-11T20:27:00.001-04:002012-09-11T20:27:08.920-04:00galaxy photometric redshifts with XD<p>Data-driven models tend to be very naive about noise. Jo Bovy (IAS) built a great data-driven model of the quasar population that makes use of our highly vetted photometric noise model, to produce the <a href="http://arxiv.org/abs/1105.3975">best-performing photometric redshift system</a> for quasars (that I know of). This has been a great success of Bovy's <a href="http://code.google.com/p/extreme-deconvolution/">extreme deconvolution (XD)</a> hierarchical distribution modeling code. Let's do this again but for galaxies!</p><p>We know more about galaxies than we do about quasars—so maybe a data-driven model doesn't make much sense—but we also know that data-driven models (even ones that don't take account of the noise) perform comparably well to theory-driven models, when it comes to galaxy photometric redshift prediction. So a data-driven model that takes account of the noise might kick ass. This was strongly recommended to me by Emmanuel Bertin (IAP). 
In other news, Bernhard Schölkopf (MPI-IS) opined to me that it might be the <i>causal</i> nature of the XD model that makes it so effective. I guess that's a <i>non-sequitur</i>.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com0tag:blogger.com,1999:blog-1074502515186128083.post-35706955857557969092012-09-10T09:27:00.001-04:002012-09-10T09:28:00.356-04:00de-blur long exposures that show the rotation of the sky<p>Here at <a href="http://astrometry.net/"><i>Astrometry.net</i></a> headquarters we get a lot of images of the night sky where the exposure is long and the stars have trailed into partial circular arcs. If we could <q>de-blur</q> these into images of the sky, this would be great: Every one of these trailed images would provide a photometric measurement of every star. Advanced goal: Every one of these trailed images would provide a photometric <i>light curve</i> of every star. That would be sweet! Not sure if this is really <i>research</i>, but it would be cool.</p><p>The problem is easy, because every star traverses the same angle in a circle with the same center. Easy! But the problem is hard because the images are generally taken with cameras that have substantial field distortions (distortions in the focal plane away from a pure tangent-plane projection of the sky). Still, it seems totally do-able!</p><p>Pedants beware: Of course I know that it is the <i>Earth</i> rotating and not the <i>sky</i> rotating! 
But yes, I have made that pedantic point on occasion too.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com0tag:blogger.com,1999:blog-1074502515186128083.post-46806889009473639672012-09-07T06:00:00.000-04:002012-09-07T06:00:03.639-04:00design strategy for vector and tensor calibration<p>In <a href="http://arxiv.org/abs/1203.6255">Holmes <i>et al</i> 2012</a> (new version coming soon) we showed practical methods for designing an imaging survey for high-quality photometric calibration: You don't need a separate calibration program (separate from the main science program) if you design it our way. This is like a <q>scalar</q> calibration: We are asking <q>What is the sensitivity at every location in the focal plane?</q> We could have asked <q>What is the astrometric distortion away from a tangent-plane at every location in the focal plane?</q>, which is a <i>vector</i> calibration question, or we could have asked <q>What is the point-spread function at every location in the focal plane?</q>, which is a <i>tensor</i> calibration question. Of course the astrometry and PSF vary with time in ground-based surveys, but for space-based surveys these are relevant self-calibration questions. We learned in the above-cited paper that certain kinds of redundancy and non-redundancy make scalar calibration work, but the requirements will go up as the rank of the calibration goes up too. So repeat for these higher-order calibrations! Whatever you do might be highly relevant for <i>Euclid</i> or <i>WFIRST</i>, which both depend crucially on the ability to calibrate precisely. Even ground-based surveys, though dominated by atmospheric effects, might have fixed distortions in the WCS and PSF that a good survey strategy could uncover better than any separate calibration program.</p>Hogghttp://www.blogger.com/profile/18398397408280534592noreply@blogger.com0
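The flavor of the scalar (sensitivity) case above can be shown in a toy numerical experiment. This is only a sketch, not the Holmes <i>et al</i> method: the survey geometry (eight coarse focal-plane cells), the noise level, and the alternating-least-squares solver are all invented here for illustration. Stars are observed at many focal-plane positions with unknown per-position sensitivities, and fluxes and sensitivities are recovered jointly from the overlaps alone, with no separate calibration program:

```python
import numpy as np

rng = np.random.default_rng(42)

# invented toy survey: 200 stars, 8 coarse focal-plane "cells", 20 visits per star
n_stars, n_cells, n_visits = 200, 8, 20
true_flux = rng.uniform(1.0, 10.0, n_stars)
true_sens = 1.0 + 0.2 * rng.standard_normal(n_cells)  # unknown flat-field

# a well-dithered strategy: every star lands in a random cell on each visit
star = np.repeat(np.arange(n_stars), n_visits)
cell = rng.integers(0, n_cells, star.size)
counts = true_sens[cell] * true_flux[star] \
    * (1.0 + 0.005 * rng.standard_normal(star.size))

# alternating least squares on the bilinear model counts ~ sens[cell] * flux[star]
sens = np.ones(n_cells)
for _ in range(30):
    # hold sensitivities fixed, solve for fluxes (per-star least squares) ...
    flux = (np.bincount(star, sens[cell] * counts, n_stars)
            / np.bincount(star, sens[cell] ** 2, n_stars))
    # ... then hold fluxes fixed, solve for sensitivities (per-cell least squares)
    sens = (np.bincount(cell, flux[star] * counts, n_cells)
            / np.bincount(cell, flux[star] ** 2, n_cells))
    sens /= sens.mean()  # break the overall-scale degeneracy

# recovered flat-field matches the truth up to that overall scale
err = np.max(np.abs(sens / sens.mean() - true_sens / true_sens.mean()))
```

The vector and tensor versions would replace the scalar <code>sens[cell]</code> with a per-cell distortion vector or PSF model, and the survey-design question becomes which dither patterns keep the analogous normal equations well conditioned.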