CAAM 519: Computational Science I
Fall 2016 Syllabus

Matthew G. Knepley
3021 Duncan Hall, knepley@rice.edu

This course concerns scientific computation in a broad sense. We will use numerical libraries written in higher-level languages, such as C, Fortran, and Python, to solve computational problems. Emphasis will be placed on computational decision making. For example, how should I choose which nonlinear solver to use? How should I evaluate its suitability once I have tried it? We will also cover basic techniques of algorithm design and implementation, project planning, source management, configuration and build tools, documentation, program construction, I/O, and visualization.

If you go on to a career in scientific computing, you will very often be confronted with models, discretizations, numerical methods, and solution algorithms unfamiliar to you. Some you will research and understand completely, and some you will understand at a merely mechanical level. This course mirrors that experience. When you first did arithmetic, were you aware that you were doing it in an algebraically closed field, or why?

1 Course Information

Instructor: Matthew G. Knepley
Class times: 11:00am to 11:50am, Monday, Wednesday, & Friday
Location: Duncan Hall 1042

A course overview and grading policy are available in accordance with Rice academic policy.

2 Required and Recommended Reading

Class notes have been prepared for each lecture. Students should read the notes prior to attending class, but the lecture may deviate from the notes somewhat. The notes will be continually updated as the class progresses, so students should download them again at the beginning of each week. The lectures for each week of class are contained in a chapter. The weekly lectures will roughly follow the chapter sections.

The following books are foundational for the software construction and documentation we will attempt in this class. However, we will not follow them in any way; they are intended for reference and outside reading.

3 Schedule

3.1 Week 1: Computing Basics 8/22

3.1.1 Lecture 1: Scientific Computing

The paper is the perfect vehicle for mathematical communication, but not for communicating results in scientific computing. In that arena, the numerical library is the right vehicle.

Online Resources

3.1.2 Lecture 2: Git

We discuss distributed version control systems, and in particular Git.

Online Resources

3.1.3 Lecture 3: Git Hands On

We develop a familiarity with Git, and go through common version control scenarios.

3.2 Week 2: Programming Basics 8/29

3.2.1 Lecture 4: C, Python

We discuss configure, compiling and linking C code, makefiles, Python, and using Valgrind for debugging.

Online Resources

3.2.2 Lecture 5: Configure, Make

We discuss configure and makefiles.

3.2.3 Lecture 6: Valgrind and gdb

We discuss using Valgrind and gdb for debugging.

3.3 Week 3: Finding and Relating Information 9/5

3.3.1 Lecture 7: Self-Teaching

The class learns to ask all questions online immediately. Google Omnia Responsit.

Online Resources

3.3.2 Lecture 8: TeX and LaTeX

We introduce LaTeX and its use in this course.

Online Resources

3.3.3 Lecture 9: PETSc Introduction

Introduction to PETSc

Q & A

3.4 Week 4: PETSc Introduction 9/12

3.4.1 Lecture 10: Numerical Linear Algebra

We become acquainted with PETSc’s Vec and Mat.
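
A minimal sketch of what this looks like, assuming a working PETSc installation (the sizes and names here are made up for illustration): a distributed vector, a sparse matrix assembled row by row, and a matrix-vector product.

#include <petsc.h>

int main(int argc, char **argv)
{
  Vec            x, b;
  Mat            A;
  PetscInt       i, rstart, rend, col[3], n = 8;
  PetscScalar    val[3];
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  /* A vector of global size n, distributed across processes by PETSc */
  ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
  ierr = VecSetSizes(x, PETSC_DECIDE, n);CHKERRQ(ierr);
  ierr = VecSetFromOptions(x);CHKERRQ(ierr);
  ierr = VecSet(x, 1.0);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &b);CHKERRQ(ierr);
  /* A sparse matrix holding the 1D Laplacian stencil */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; ++i) {
    PetscInt nc = 0;
    if (i > 0)   {col[nc] = i-1; val[nc] = -1.0; ++nc;}
    col[nc] = i; val[nc] = 2.0; ++nc;
    if (i < n-1) {col[nc] = i+1; val[nc] = -1.0; ++nc;}
    ierr = MatSetValues(A, 1, &i, nc, col, val, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatMult(A, x, b);CHKERRQ(ierr);   /* b = A x */
  ierr = VecView(b, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}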

3.4.2 Lecture 11: Debugging and Performance Monitoring

We learn how to use the debugger properly, and how to interpret a simple log summary.

3.4.3 Lecture 12: Short PETSc Examples for Self-Study

I will be out of town.

Online Resources

3.5 Week 5: Parallelism 9/19

3.5.1 Lecture 13: Basics of MPI

Basics of MPI, the design of layered libraries (communicators and attributes), and running in parallel.
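
A minimal MPI sketch, assuming an MPI installation (compile with mpicc and launch with, for example, mpiexec -n 4 ./a.out): each process reports its rank, and the communicator duplication shows how a library keeps its messages separate from the application's.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Comm libcomm;
  int      rank, size;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  printf("Process %d of %d\n", rank, size);
  /* A library should duplicate the user's communicator so that its
     internal messages can never collide with the application's. */
  MPI_Comm_dup(MPI_COMM_WORLD, &libcomm);
  MPI_Comm_free(&libcomm);
  MPI_Finalize();
  return 0;
}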

Online Resources

3.5.2 Lecture 14: Scaling

Definitions of strong and weak scaling. Design of tests. Discussion of what they measure.
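
As a reference point (these are the standard definitions, not anything specific to these notes), let T_N(p) be the run time for a problem of size N on p processes. Strong scaling holds N fixed while p grows; weak scaling holds N/p fixed. The corresponding efficiencies are

\[
E_{\mathrm{strong}}(p) = \frac{T_N(1)}{p \, T_N(p)}, \qquad
E_{\mathrm{weak}}(p) = \frac{T_N(1)}{T_{pN}(p)} .
\]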

3.5.3 Lecture 15: Heterogeneous Parallelism

Arguments for the MPI+MPI approach, and the parts of MPI-3 that students should learn about.

3.6 Week 6: Data Layout and Discretization I 9/26

3.6.1 Lecture 16: Sparse Data

We look at data structures for sparse data, and especially for sparse matrices. We examine the COO, AIJ (which looks like run-length encoding of the zeros), and ELL formats.
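
A sketch of the three layouts in plain C; the struct names are invented for illustration (AIJ is PETSc's name for the format usually called CSR).

/* COO: one (row, column, value) triple per nonzero, in any order. */
typedef struct {
  int     nnz;
  int    *row, *col;    /* both of length nnz */
  double *val;          /* length nnz */
} COOMatrix;

/* AIJ (CSR): nonzeros stored row by row; the nonzeros of row i live in
   positions rowptr[i] .. rowptr[i+1]-1 of colidx and val. */
typedef struct {
  int     m;            /* number of rows */
  int    *rowptr;       /* length m+1 */
  int    *colidx;       /* length nnz */
  double *val;          /* length nnz */
} AIJMatrix;

/* ELL: every row padded to the same maximum length, which vectorizes well. */
typedef struct {
  int     m, maxnz;     /* rows, maximum nonzeros in any row */
  int    *colidx;       /* m * maxnz, padded entries repeat a valid column */
  double *val;          /* m * maxnz, padded entries are 0.0 */
} ELLMatrix;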

Online Resources

3.6.2 Lecture 17: Generic Data Layout

We look at how PetscSection encodes a data layout, and give many examples.
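
A small sketch of the PetscSection calls involved, assuming a PETSc build; here three abstract points receive 1, 2, and 0 degrees of freedom, and the section computes the resulting offsets.

#include <petsc.h>

int main(int argc, char **argv)
{
  PetscSection   s;
  PetscInt       p, dof, off, size;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = PetscSectionCreate(PETSC_COMM_WORLD, &s);CHKERRQ(ierr);
  ierr = PetscSectionSetChart(s, 0, 3);CHKERRQ(ierr);   /* points 0, 1, 2 */
  ierr = PetscSectionSetDof(s, 0, 1);CHKERRQ(ierr);
  ierr = PetscSectionSetDof(s, 1, 2);CHKERRQ(ierr);
  ierr = PetscSectionSetDof(s, 2, 0);CHKERRQ(ierr);
  ierr = PetscSectionSetUp(s);CHKERRQ(ierr);
  ierr = PetscSectionGetStorageSize(s, &size);CHKERRQ(ierr);
  for (p = 0; p < 3; ++p) {
    ierr = PetscSectionGetDof(s, p, &dof);CHKERRQ(ierr);
    ierr = PetscSectionGetOffset(s, p, &off);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD, "point %d: %d dof at offset %d\n",
                       (int) p, (int) dof, (int) off);CHKERRQ(ierr);
  }
  ierr = PetscPrintf(PETSC_COMM_WORLD, "total storage: %d scalars\n", (int) size);CHKERRQ(ierr);
  ierr = PetscSectionDestroy(&s);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}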

3.6.3 Lecture 18: Structured Meshes

We explain DMDA, the PETSc interface for structured grids.
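
A sketch of creating a 2D structured grid with DMDA, assuming a PETSc build (the exact creation arguments have shifted slightly between PETSc versions):

#include <petsc.h>

int main(int argc, char **argv)
{
  DM             da;
  Vec            u;
  PetscInt       xs, ys, xm, ym;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  /* A 16x16 grid, 1 dof per point, star stencil of width 1 */
  ierr = DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR, 16, 16, PETSC_DECIDE, PETSC_DECIDE,
                      1, 1, NULL, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(da, &u);CHKERRQ(ierr);
  /* Each process owns a rectangular patch of the grid */
  ierr = DMDAGetCorners(da, &xs, &ys, NULL, &xm, &ym, NULL);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_SELF, "local patch at (%d,%d) of extent %d x %d\n",
                     (int) xs, (int) ys, (int) xm, (int) ym);CHKERRQ(ierr);
  ierr = VecDestroy(&u);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}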

3.7 Week 7: Simple Finite Differences 10/3

3.7.1 Lecture 19: Residual Evaluation

We explain residual evaluation using PETSc's SNES examples ex5 and ex19.
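
Stripped of the PETSc machinery, the core of such a residual evaluation is a stencil loop. A serial sketch for -Δu = f on a uniform n-by-n grid (the array names here are made up; ex5 itself solves the Bratu problem and works on each process's patch of a DMDA):

/* Residual F(u) for -u_xx - u_yy = f on an n x n grid with spacing h.
   u, f, F are n*n arrays in row-major order; boundary rows just pin the
   boundary values. */
void FormResidual(int n, double h, const double *u, const double *f, double *F)
{
  for (int j = 0; j < n; ++j) {
    for (int i = 0; i < n; ++i) {
      int k = j*n + i;
      if (i == 0 || j == 0 || i == n-1 || j == n-1) {
        F[k] = u[k];                                   /* Dirichlet boundary */
      } else {
        F[k] = (4.0*u[k] - u[k-1] - u[k+1] - u[k-n] - u[k+n])/(h*h) - f[k];
      }
    }
  }
}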

3.7.2 Lecture 20: Checking the Solution

A prototype of the Method of Manufactured Solutions (MMS), looking at pointwise convergence.
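
The idea in one line: choose an exact solution, manufacture the forcing term from it, and check that the error decreases at the expected rate,

\[
f := \mathcal{L}(u_{\mathrm{exact}}), \qquad \|u_h - u_{\mathrm{exact}}\| \le C h^{p},
\]

so that a log-log plot of error against the grid spacing h should have slope p.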

3.7.3 Lecture 21: Hands On

We look at structured-grid simulations.

3.8 Week 8: Review 10/10

3.8.1 Lecture 22: Midterm Recess

I will be out of town.

3.8.2 Lecture 23: Review Parallelism

I will be out of town.

3.8.3 Lecture 24: Review PETSc

I will be out of town.

3.9 Week 9: Simple Performance Models 10/17

3.9.1 Lecture 25: Basic Performance Modeling

We learn why performance modeling is necessary.

Online Resources

3.9.2 Lecture 26: The Roofline Model and Gaming the System

We learn about the popular Roofline model.
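
The model is a single formula. With arithmetic intensity I (flops per byte moved from memory), peak flop rate P, and peak memory bandwidth B, the attainable rate is

\[
\textrm{flop rate} \le \min\left(P, \; B \cdot I\right),
\]

so low-intensity kernels such as SpMV sit on the bandwidth-limited slope of the roof no matter how well they are written.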

Online Resources

3.9.3 Lecture 27: Review Sparse Matrix-Vector Product (SpMV) operation

I am out of town.

3.10 Week 10: Data Layout and Discretization II 10/24

3.10.1 Lecture 28: Sparse Matrix-Vector Product (SpMV)

We understand how well SpMV can perform in theory.
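
A CSR (AIJ) SpMV kernel in plain C, annotated with the rough counting behind that theory (a sketch assuming 8-byte values and 4-byte indices):

/* y = A x with A stored in CSR form.  Each nonzero costs 2 flops
   (multiply and add) but requires loading at least 12 bytes (an 8-byte
   value and a 4-byte column index), before counting traffic for x, y,
   and rowptr, so the arithmetic intensity is below 1/6 flop/byte and
   SpMV is memory-bandwidth bound. */
void SpMV(int m, const int *rowptr, const int *colidx, const double *val,
          const double *x, double *y)
{
  for (int i = 0; i < m; ++i) {
    double sum = 0.0;
    for (int k = rowptr[i]; k < rowptr[i+1]; ++k)
      sum += val[k] * x[colidx[k]];
    y[i] = sum;
  }
}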

Online Resources

3.10.2 Lecture 29: Unstructured Meshes

We discuss the basics of the Plex interface for unstructured meshes.

Online Resources

3.10.3 Lecture 30: Unstructured Data Layout

We learn about data layouts for mesh data: geometry (of several kinds, such as Bezier), solution data, adjacency data, etc.

3.11 Week 11: Simple Finite Elements 10/31

3.11.1 Lecture 31: Examples of Unstructured Data Layout

We use SNES ex12 to look at the data layout for the doublet mesh with various discretizations and dimensions.

3.11.2 Lecture 32: Connecting data layouts

We look at deriving one data layout from another and discuss composable methods of parallel data distribution and redistribution.

Online Resources

3.11.3 Lecture 33: Computational Finite Elements

We cover the pieces needed for finite element approximation, and the basic steps of finite element residual computation.

3.12 Week 12: PETSc Linear Equation Solvers 11/7

We learn how to solve many equations at once, even if they are only linear.

3.12.1 Lecture 34: Computational Finite Elements II

More discussion of spaces and dual spaces.

3.12.2 Lecture 35: Direct Solvers

We look at LU, Cholesky, and Schur complements.
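
The block factorization behind the Schur complement, stated here for reference since it reappears later in PCFIELDSPLIT: for a block matrix with invertible A,

\[
\begin{pmatrix} A & B \\ C & D \end{pmatrix}
=
\begin{pmatrix} I & 0 \\ C A^{-1} & I \end{pmatrix}
\begin{pmatrix} A & 0 \\ 0 & S \end{pmatrix}
\begin{pmatrix} I & A^{-1} B \\ 0 & I \end{pmatrix},
\qquad S = D - C A^{-1} B .
\]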

3.12.3 Lecture 36: Krylov Methods

We explore Krylov solvers computationally and discuss the main differences without proof. We look at the Nachtigal paper, which shows computationally that it is hard to pick a solver, and the Greenbaum paper, which shows that it is very hard to predict the behavior of GMRES.

Discussion: List the reasons that solving Navier-Stokes with a Krylov method is a bad idea.

Online Resources

3.13 Week 13: Advanced Equation Solvers 11/14

3.13.1 Lecture 37: Multigrid

Basic computational steps in MG, and a hand-waving explanation of its effectiveness. The distinction between classical MG and FAS. A simple proof using Taylor series from Barry.
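
Those basic steps, as a recursive sketch in C. The routines Smooth, ComputeResidual, Restrict, Prolong, CoarseSolve, and LevelSize are hypothetical placeholders for whatever discretization is in use.

#include <stdlib.h>

/* Hypothetical level-dependent operations supplied by the discretization. */
void Smooth(int level, double *u, const double *b);
void ComputeResidual(int level, const double *u, const double *b, double *r);
void Restrict(int level, const double *fine, double *coarse);
void Prolong(int level, const double *coarse, double *fine);
void CoarseSolve(double *u, const double *b);
int  LevelSize(int level);

/* One multigrid V-cycle for A u = b on the given level (level 0 is coarsest). */
void VCycle(int level, double *u, const double *b)
{
  if (level == 0) {CoarseSolve(u, b); return;}

  int     nf = LevelSize(level), nc = LevelSize(level-1);
  double *r  = malloc(nf * sizeof(double));
  double *rc = malloc(nc * sizeof(double));
  double *ec = calloc(nc,  sizeof(double));
  double *ef = malloc(nf * sizeof(double));

  Smooth(level, u, b);              /* pre-smoothing, e.g. a few Gauss-Seidel sweeps */
  ComputeResidual(level, u, b, r);  /* r = b - A u */
  Restrict(level, r, rc);           /* move the residual to the coarse grid */
  VCycle(level-1, ec, rc);          /* coarse-grid correction, starting from zero */
  Prolong(level, ec, ef);           /* interpolate the correction back up */
  for (int i = 0; i < nf; ++i) u[i] += ef[i];
  Smooth(level, u, b);              /* post-smoothing */

  free(r); free(rc); free(ec); free(ef);
}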

3.13.2 Lecture 38: Multigrid Hands On

3.13.3 Lecture 39: Multigrid Theory

3.14 Week 14: PETSc Nonlinear Equation Solvers 11/21

3.14.1 Lecture 40: Single-step Nonlinear Solvers

We learn how to solve systems of nonlinear equations: nonlinear Richardson, Newton, Picard, nonlinear conjugate gradients, etc.
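
For reference, the Newton iteration at the center of this lecture: solve a linear system with the Jacobian and update,

\[
J(u_k)\, \delta u_k = -F(u_k), \qquad u_{k+1} = u_k + \lambda_k \, \delta u_k,
\]

where the damping parameter comes from a line search or trust region; a full step recovers the classical method.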

3.14.2 Lecture 41: Review running PETSc Newton solver

I am out of town.

3.14.3 Lecture 42: Thanksgiving Break

No class.

3.15 Week 15: Preconditioning 11/28

3.15.1 Lecture 43: Nonlinear Preconditioning

Composition of nonlinear solvers is defined, and we explore its use in model problems from the paper.

Online Resources

3.15.2 Lecture 44: Block Preconditioners and Composability

We show the simplicity and power of the PCFIELDSPLIT interface, arising from composability. We show examples of robust solvers constructed by composing existing strategies.

3.15.3 Lecture 45: Course Review

We recapitulate the course.