A conversation with Nick Suntzeff (TAMU) in Lawrence, KS, brought up the great idea (Nick's, not mine) to figure out why ground-based photometry of stars never gets better than a few milli-mags in precision. Seriously, people: Kepler is at the part-per-million level or better. Why can't we do the same from the ground? Why not at least a part per hundred thousand? Is it something about the scintillation, the transparency, the point-spread function, the detector temperature, scattered light, sky emission, sky lines, what? Not sure how to proceed, but cracking it could make the next generation of surveys orders of magnitude less expensive. I guess I would start by taking images of a star field with many different (very different) exposure times and at different twilight levels (Suntzeff's idea again). Could it be that all we need is better software?
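As a back-of-the-envelope starting point, here is a minimal sketch of the per-exposure noise budget, using the standard Young (1967) scintillation scaling plus simple Poisson terms for star and sky photons. The aperture, airmass, site altitude, and count rates are made-up placeholders, not measurements from any real setup.

```python
import numpy as np

# Rough per-exposure noise budget for ground-based photometry,
# assuming Young's (1967) scintillation approximation and pure
# Poisson photon noise. All input numbers are made-up examples.

def scintillation_sigma(diameter_cm, airmass, t_exp_s, altitude_m=2000.0):
    """Fractional flux scatter from scintillation (Young 1967)."""
    return (0.09 * diameter_cm ** (-2.0 / 3.0)
            * airmass ** 1.75
            * np.exp(-altitude_m / 8000.0)
            / np.sqrt(2.0 * t_exp_s))

def photon_sigma(star_rate_e_per_s, t_exp_s):
    """Fractional scatter from source photon (shot) noise."""
    return 1.0 / np.sqrt(star_rate_e_per_s * t_exp_s)

def sky_sigma(sky_rate_e_per_s, star_rate_e_per_s, t_exp_s):
    """Fractional scatter from sky photons in the aperture."""
    return (np.sqrt(sky_rate_e_per_s * t_exp_s)
            / (star_rate_e_per_s * t_exp_s))

if __name__ == "__main__":
    D, X, h = 100.0, 1.2, 2000.0  # 1-m aperture, hypothetical site
    star = 5.0e6                  # e-/s from a bright star (assumed)
    sky = 2.0e4                   # e-/s of sky in the aperture (assumed)
    for t in [0.1, 1.0, 10.0, 100.0]:
        s_scint = scintillation_sigma(D, X, t, h)
        s_phot = photon_sigma(star, t)
        s_sky = sky_sigma(sky, star, t)
        total = np.sqrt(s_scint**2 + s_phot**2 + s_sky**2)
        print(f"t={t:6.1f} s  scint={s_scint:.2e}  phot={s_phot:.2e}  "
              f"sky={s_sky:.2e}  total={total:.2e}")
```

With these made-up numbers, scintillation alone sits at a few milli-mags for second-long exposures and only averages down as the square root of exposure time, so it is at least one plausible suspect for the floor.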
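And a hypothetical version of the analysis the exposure-time ladder would enable: if the measured fractional variance follows var(t) = a / t + b, the a-term is everything that averages down with exposure time (photon noise, scintillation) and the b-term is a flat systematic floor (flat-fielding, scattered light, software). Everything here, including the "measurements", is simulated for illustration.

```python
import numpy as np

# Simulated exposure-time ladder: fit var(t) = a / t + b by linear
# least squares to separate averaging-down noise from a flat floor.
rng = np.random.default_rng(42)
t_exp = np.array([0.3, 1.0, 3.0, 10.0, 30.0, 100.0])  # seconds
a_true, b_true = 1.0e-5, 4.0e-6                       # fractional variance
var_meas = a_true / t_exp + b_true
var_meas *= 1.0 + 0.05 * rng.standard_normal(var_meas.size)  # 5% scatter

# Design matrix for var = a * (1 / t) + b
A = np.column_stack([1.0 / t_exp, np.ones_like(t_exp)])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, var_meas, rcond=None)

print(f"a = {a_fit:.2e} (true {a_true:.2e})")
print(f"floor sigma = {np.sqrt(b_fit):.2e} (true {np.sqrt(b_true):.2e})")
```

A recovered floor near 2e-3 in relative flux would be the few-milli-mag wall; if instead the variance keeps falling as 1 / t, better software (or more photons) really might be all we need.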