High Performance Computing for Finance and Big Data using GPUs
This course will give an introduction to parallel programming on general-purpose graphics processing units (GPGPU) using NVIDIA's CUDA architecture. GPGPUs differ from ordinary CPUs in their vast number of (rather simple) processor cores; when all cores are utilized efficiently, they can outperform ordinary CPUs by several orders of magnitude. We will start with a brief overview of the hardware design of CUDA devices and general aspects of multi-threading, before discussing the generation of random numbers on parallel architectures in detail. There are two general approaches to this problem: the batch approach, where the challenge lies in determining a sequence of seed values that can be processed in independent streams yet still yield, in total, a series of independent random numbers; and the skip-ahead approach, which aims at modifying a random number algorithm so that it is possible to jump ahead in the original sequence of random numbers. We will then apply these methods to the valuation of derivatives and develop an efficient and numerically stable scheme for Monte-Carlo simulation on GPU devices.
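As a small illustration of the skip-ahead idea: for a linear congruential generator $x_{n+1} = (a x_n + c) \bmod m$, jumping $k$ steps at once admits a closed form (with the geometric sum evaluated modulo $m$):

```latex
x_{n+k} = \left( a^k \, x_n + c \, \frac{a^k - 1}{a - 1} \right) \bmod m
```

Since $a^k \bmod m$ can be computed in $O(\log k)$ multiplications by repeated squaring, each thread can jump to its own disjoint segment of the sequence at negligible cost.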
As a final topic we will give a short survey of another very popular application of GPU computing: Deep Learning for Neural Networks. We will discuss the basic computational problems of neural networks and give an overview of existing frameworks for state-of-the-art Deep Learning. Building on this, we will study a highly efficient GPU implementation of a popular word-embedding algorithm based on neural networks.
- Thursday, February 23, 2017. 09:00 - 17:30
- Friday, February 24, 2017. 09:00 - 17:30
- Saturday, February 25, 2017. 09:00 - 12:00
The workshop takes place at
quantLab - Room B 121
LMU Institute of Mathematics
A detailed location plan can be found here.
| Session | Time |
|---|---|
| Morning Session 1 | 9:00 - 10:30 |
| Morning Session 2 | 11:00 - 12:30 |
| Afternoon Session 1 | 14:00 - 15:30 |
| Afternoon Session 2 | 16:00 - 17:30 |
- Software Development Tools + Basic CUDA
  - Introduction to the Linux systems in the computer room
  - CUDA SDK components (libraries, nvcc, Nsight)
  - CUDA architecture (SIMT principle + memory design)
  - Basic CUDA language extensions
  - Basic programming examples (parallel vector addition, etc.)
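As a flavor of the basic programming examples on day one, a minimal CUDA vector addition might look as follows (kernel name and launch parameters are our own illustrative choices, not course material):

```cuda
#include <cstdio>

// Each thread adds exactly one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the sketch short; cudaMalloc + cudaMemcpy works too.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Enough blocks of 256 threads to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]); // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```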
- Monte-Carlo Simulation and Applications in Finance
  - Principles of parallel random number generation (skip-ahead vs. batch approach)
  - Linear congruential random number generators
  - CURAND library and XORShift generators
  - Parallel vector summation (reduction principle)
  - Concurrency and atomic operations
  - Monte-Carlo simulation framework for pricing options
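To give a flavor of how the day-two topics fit together, here is a hedged sketch combining CURAND, a shared-memory reduction, and an atomic update to price a European call by Monte-Carlo; the parameter values and overall structure are illustrative, not the course's actual framework:

```cuda
#include <cstdio>
#include <cmath>
#include <curand_kernel.h>

// Each thread simulates several terminal values of geometric Brownian motion,
// S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z), accumulates the call
// payoff, and the block-level partial sums are combined via atomicAdd.
__global__ void mcCall(float S0, float K, float r, float sigma, float T,
                       int pathsPerThread, unsigned long long seed, float *sum) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState state;
    // One independent stream per thread: same seed, distinct subsequence.
    curand_init(seed, tid, 0, &state);

    float drift = (r - 0.5f * sigma * sigma) * T;
    float vol = sigma * sqrtf(T);
    float acc = 0.0f;
    for (int p = 0; p < pathsPerThread; ++p) {
        float z = curand_normal(&state);
        float ST = S0 * expf(drift + vol * z);
        acc += fmaxf(ST - K, 0.0f);
    }

    // Shared-memory reduction within the block, then one atomic per block.
    __shared__ float cache[256];
    cache[threadIdx.x] = acc;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) cache[threadIdx.x] += cache[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0) atomicAdd(sum, cache[0]);
}

int main() {
    const int blocks = 256, threads = 256, pathsPerThread = 64;
    const long long nPaths = (long long)blocks * threads * pathsPerThread;
    float *dSum;
    cudaMallocManaged(&dSum, sizeof(float));
    *dSum = 0.0f;

    mcCall<<<blocks, threads>>>(100.0f, 100.0f, 0.05f, 0.2f, 1.0f,
                                pathsPerThread, 1234ULL, dSum);
    cudaDeviceSynchronize();

    // Discounted average payoff; the Black-Scholes value here is about 10.45.
    float price = expf(-0.05f * 1.0f) * (*dSum / nPaths);
    printf("MC call price: %f\n", price);
    cudaFree(dSum);
    return 0;
}
```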
- Neural Networks and Applications in Big Data
  - Stochastic gradient algorithm for GPUs (HOGWILD!)
  - Introduction to the CUDA Deep Neural Network library (cuDNN)
  - Overview of modern Deep Learning frameworks
  - Java bindings for CUDA
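The HOGWILD! approach lets many threads update a shared parameter vector without any locking, tolerating the occasional lost update. A hedged sketch for a toy least-squares problem (the kernel name, per-thread LCG, and data are our own illustrative choices, not course code):

```cuda
#include <cstdio>

// Each thread picks random samples and applies plain, unsynchronized SGD
// updates to the shared weight vector w: no atomics, no locks (HOGWILD!).
__global__ void hogwildSgd(const float *X, const float *y, float *w,
                           int nSamples, int nFeatures, float lr, int steps) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    // Simple per-thread LCG to pick random sample indices.
    unsigned int rng = 1234u + tid;
    for (int s = 0; s < steps; ++s) {
        rng = 1664525u * rng + 1013904223u;
        int i = rng % nSamples;
        // prediction = w . x_i
        float pred = 0.0f;
        for (int j = 0; j < nFeatures; ++j)
            pred += w[j] * X[i * nFeatures + j];
        float err = pred - y[i];
        // Racy write-back is intentional: occasional lost updates are tolerated.
        for (int j = 0; j < nFeatures; ++j)
            w[j] -= lr * err * X[i * nFeatures + j];
    }
}

int main() {
    const int nSamples = 1024, nFeatures = 8;
    float *X, *y, *w;
    cudaMallocManaged(&X, nSamples * nFeatures * sizeof(float));
    cudaMallocManaged(&y, nSamples * sizeof(float));
    cudaMallocManaged(&w, nFeatures * sizeof(float));
    // Toy data: y is the sum of the features of each sample.
    for (int i = 0; i < nSamples; ++i) {
        float s = 0.0f;
        for (int j = 0; j < nFeatures; ++j) {
            X[i * nFeatures + j] = (float)((i + j) % 5) / 5.0f;
            s += X[i * nFeatures + j];
        }
        y[i] = s;
    }
    for (int j = 0; j < nFeatures; ++j) w[j] = 0.0f;

    hogwildSgd<<<4, 64>>>(X, y, w, nSamples, nFeatures, 0.01f, 1000);
    cudaDeviceSynchronize();

    printf("w[0] = %f\n", w[0]); // weights drift toward a least-squares fit
    cudaFree(X); cudaFree(y); cudaFree(w);
    return 0;
}
```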
Solid knowledge of C/C++; basics of option pricing theory.
Christian Fries is head of model development at DZ Bank's risk control and Professor of Applied Mathematical Finance at the Department of Mathematics, LMU Munich.
His current research interests are hybrid interest rate models, Monte Carlo methods, and valuation under funding and counterparty risk. His papers and lecture notes may be downloaded from http://www.christian-fries.de/finmath
He is the author of “Mathematical Finance: Theory, Modeling, Implementation” (Wiley, 2007) and runs www.finmath.net.
A workshop fee is payable according to the following table:
| Rate | Type of Participant |
|---|---|
Registration and Contact
The workshop will take place in a computer-equipped room with a limited number of places. To register, send an email to: firstname.lastname@example.org