The 2007 High Performance Computing & Simulation
(HPCS'07) Conference

June 4 - 6, 2007
Prague, Czech Republic

In conjunction with
The 21st EUROPEAN CONFERENCE ON MODELLING AND SIMULATION
(ECMS 2007)

Co-Sponsored by SCS-Europe, IEEE Germany, ASIM, EUROSIM, CASS, JSST, PTSK, TSS
In Cooperation with the IEEE Computer Society Technical Committee on Parallel Processing (TCPP) (Pending)


Tutorials

Tutorial I: High Performance Nonlinear Global Optimization Techniques and Applications
Mark Wachowiak

TUTORIAL I

High Performance Nonlinear Global Optimization Techniques and Applications

Mark Wachowiak

Department of Computer Science and Mathematics
Nipissing University, North Bay,
Canada

TUTORIAL DESCRIPTION

In 1995, in a seminal paper (R. B. Schnabel, "A View of the Limitations, Opportunities, and Challenges in Parallel Nonlinear Optimization", Parallel Computing, 21(6), 1995, pp. 875-905), three main aspects of high-performance and parallel global optimization were described: (1) Parallelizing the objective function calculation; (2) Parallelizing the underlying numerical libraries and kernels; and (3) Re-designing the algorithm for increased parallelism. The primary focus of the proposed tutorial is the third aspect, as well as completely new paradigms specifically designed for high-performance computation.

New applications of global optimization abound. For example, complex phenomena are often modeled as large systems of equations, and model parameters must be determined to correspond with experimental data. Global optimization is used to determine these parameters. Furthermore, many important engineering problems rely on simulation-based optimization, wherein the cost function itself is formed from the results of large simulation experiments. In these cases, closed-form derivatives of the objective function are generally not available, and are not easily computed. Therefore, new optimization paradigms must be considered to solve these problems.
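
As a concrete illustration of this kind of parameter estimation (a sketch added for the reader, not part of the tutorial materials), the short Python fragment below fits two hypothetical parameters of a toy exponential model to observed data with a derivative-free global optimizer; SciPy's differential_evolution stands in here for the high-performance methods the tutorial will cover, and its workers option hints at the coarse-grained parallelism discussed later.

    # Illustrative sketch: fit hypothetical model parameters (a, b) to observed
    # data with a derivative-free global optimizer; no gradients are required.
    import numpy as np
    from scipy.optimize import differential_evolution

    t_obs = np.linspace(0.0, 10.0, 50)                         # hypothetical sample times
    rng = np.random.default_rng(0)
    y_obs = 2.0 * np.exp(-0.3 * t_obs) + 0.01 * rng.standard_normal(t_obs.size)

    def model(params, t):
        a, b = params                                          # hypothetical parameters
        return a * np.exp(-b * t)                              # stand-in for a large simulation

    def cost(params):
        # Sum-of-squares misfit between simulated and observed data.
        return np.sum((model(params, t_obs) - y_obs) ** 2)

    if __name__ == "__main__":
        bounds = [(0.0, 5.0), (0.0, 1.0)]                      # search box for (a, b)
        result = differential_evolution(cost, bounds,
                                        updating='deferred', workers=-1)
        print(result.x, result.fun)

Because only objective function values are needed, the same pattern applies when each evaluation is itself a large simulation; the workers option simply spreads those evaluations across cores.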

TUTORIAL OUTLINE

The proposed tutorial would include the following topics:

  • Introduction to the optimization problem and a brief overview of its classical, serial solutions.

  • Parallel techniques in local and global optimization.
    - Fine-grained approaches: Parallelization of derivative computation and cost function computation.
    - Coarse-grained approaches: Searching different parts of the search space simultaneously (see the multistart
    sketch following this outline).

  • The intrinsic parallelism of deterministic global methods, including DIRECT, branch and bound, and interval
    analysis.

  • Stochastic and computational intelligence methods, including simulated annealing, genetic algorithms,
    evolutionary computation, and a special emphasis on particle swarm optimization (a minimal PSO sketch also
    follows this outline).

  • Emerging computer architectures for, and applications in, high-performance global optimization.
    - Simulation-based optimization, where derivative information is not available, and the cost of computing
    each objective function value is very high. Important applications include safety engineering and the design
    of materials. High-performance derivative-free optimization methods will be discussed.
    - High-performance computing approaches to determine the optimal parameters of mathematical models
    that provide the best fit between observed and estimated data. Calibrating a model to observed data
    generally improves the model’s predictive capabilities, and also provides a means for model verification and improvement.
    - Biomedical applications, particularly in imaging, computer-guided surgery and therapy, bioinformatics, and
    proteomics.
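
To make the coarse-grained approach in the outline concrete, here is a minimal multistart sketch (an illustrative assumption, not code from the tutorial): independent local searches are launched from random start points and spread across worker processes, so different regions of the search space are explored simultaneously. The Rastrigin test function and the Nelder-Mead local method are stand-ins chosen for brevity.

    # Illustrative coarse-grained multistart: independent local searches run in
    # parallel from random start points; the best result is kept.
    import numpy as np
    from multiprocessing import Pool
    from scipy.optimize import minimize

    def rastrigin(p):
        p = np.asarray(p)
        return 10 * p.size + np.sum(p**2 - 10 * np.cos(2 * np.pi * p))

    def local_search(x0):
        # Each worker refines one start point with a derivative-free local method.
        res = minimize(rastrigin, x0, method="Nelder-Mead")
        return res.x, res.fun

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        starts = rng.uniform(-5.12, 5.12, size=(16, 2))        # 16 start points in 2-D
        with Pool() as pool:
            results = pool.map(local_search, starts)           # coarse-grained parallelism
        best_x, best_f = min(results, key=lambda r: r[1])
        print(best_x, best_f)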
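
For the particle swarm emphasis, the following serial global-best PSO sketch is likewise illustrative only (swarm size, inertia, and acceleration coefficients are assumed values). The per-iteration cost evaluations marked below are mutually independent, which is exactly where the parallel implementations discussed in the tutorial gain their speedup.

    # Illustrative global-best particle swarm optimization (serial reference version).
    import numpy as np

    def pso(cost, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        x = rng.uniform(lo, hi, size=(n_particles, lo.size))   # particle positions
        v = np.zeros_like(x)                                    # particle velocities
        pbest = x.copy()
        pbest_f = np.array([cost(p) for p in x])                # independent evaluations -> parallelizable
        gbest = pbest[np.argmin(pbest_f)].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2,) + x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([cost(p) for p in x])                  # independent evaluations -> parallelizable
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[np.argmin(pbest_f)].copy()
        return gbest, pbest_f.min()

    if __name__ == "__main__":
        rastrigin = lambda p: 10 * p.size + np.sum(p**2 - 10 * np.cos(2 * np.pi * p))
        best_x, best_f = pso(rastrigin, np.array([[-5.12, 5.12], [-5.12, 5.12]]))
        print(best_x, best_f)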

TARGET AUDIENCE

The target audience includes researchers, students, and practitioners who require optimization for solving large, complex problems. Specifically, those who work in simulation and modeling will learn about simulation-based optimization, its implementation, and potential applications. Optimization for parameter estimation will also be discussed.

REQUIRED BACKGROUND

Although some mathematical background and some knowledge of parallel computing and algorithms are helpful, the tutorial will focus on applied concepts and parallelization techniques rather than on theory.

DURATION

About three hours would be needed for this tutorial; a tentative breakdown follows:

  • Hour 1 – The optimization problem and some traditional approaches. Applications to simulation and modeling, and medical imaging.
  • Hour 2 – Coarse-grained parallel optimization approaches, specifically geared towards simulation-based optimization and parameter estimation.
  • Hour 3 – Parallelization strategies and future applications.

INSTRUCTOR BIOGRAPHY

Dr. Mark Wachowiak is currently an Assistant Professor at Nipissing University in North Bay, Canada. He previously worked as a Postdoctoral Fellow and Research Associate at Robarts Research Institute in London, Canada, where he helped plan and build a supercomputing facility in the Imaging Laboratories. He also held an adjunct appointment in the Department of Medical Biophysics at the University of Western Ontario, London, Canada. He obtained his doctorate from the University of Louisville, USA, in 2002, and was awarded the Best Dissertation Award for his work in particle swarm optimization.

Dr. Wachowiak’s research interests are high-performance computing and parallel algorithms in scientific computing, grid computing, biomedical applications (including imaging, bioinformatics, proteomics, and systems biology), and parallel global optimization. His recent work has focused on parallel optimization techniques for medical image alignment. He has been an invited speaker at several high-performance computing conferences.
