Colloquium




Abstract
 
Large-scale simulation plays many roles in science and engineering, including design evaluation, what-if contingency testing, and simulation-driven optimization. All such simulations consist of concatenating a large number of approximations to the underlying system of PDEs. Assessing the cumulative effect of these approximations is the verification problem: how close does the simulator come to solving the PDEs? Naturally, this question is most interesting when the simulator is really needed, i.e. when the true solution is unknown!

This talk describes joint work with Tanya Vdovina and Igor Terentyev. We have collaborated with an oil and gas industry consortium whose mission is to create a sequence of simulated 3D seismic data sets. These data are intended to be used to assess the performance of commercial processing software and to test new imaging algorithms, so the consortium places a premium on accuracy. Our role is to build a public-domain benchmark simulator whose output will be compared to that of the commercial vendors who do most of the actual simulation. Thus verification of our benchmark code became a central issue. The key lessons we have learned so far are that (1) the few verification tools available to us - a few analytic solutions and Richardson extrapolation - seem to do the job, at least in a rough way, and (2) the standard approach to this type of simulation - finite difference methods on regular grids - is not very accurate for tasks of the type defined by this project. I'll end by describing some possible avenues for improving seismic simulation technology.
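As a rough illustration of the Richardson extrapolation mentioned above (not taken from the benchmark code itself), the sketch below estimates an observed convergence order and a discretization-error estimate from a scalar quantity of interest computed on three successively refined grids; all names here are illustrative assumptions.

```python
import numpy as np

def richardson_estimate(u_h, u_h2, u_h4, refinement=2.0):
    """Estimate observed convergence order and error on the finest grid.

    u_h, u_h2, u_h4: a scalar quantity of interest computed with grid
    spacings h, h/2, h/4. Assumes the leading error term behaves like
    C * h^p, which is the premise of Richardson extrapolation.
    """
    # Observed convergence order from the ratio of successive differences.
    p = np.log(abs(u_h - u_h2) / abs(u_h2 - u_h4)) / np.log(refinement)
    # Richardson error estimate for the finest-grid value u_h4.
    err_h4 = abs(u_h2 - u_h4) / (refinement**p - 1.0)
    # Extrapolated ("grid-converged") value.
    u_extrap = u_h4 + (u_h4 - u_h2) / (refinement**p - 1.0)
    return p, err_h4, u_extrap

# Example with a synthetic second-order sequence converging to 1.0:
print(richardson_estimate(1.04, 1.01, 1.0025))  # order ~2, error ~0.0025
```

When no analytic solution is available, an estimate of this kind is essentially the only handle one has on how far the computed seismograms are from the true solution of the PDEs.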



For future talks or to be added to the mailing list: www.math.uh.edu/colloquium