Workgroup Financial Mathematics

Introduction to High Performance Computing for Finance and Big Data using GPUs

Led by Dr. Benedikt Wilbertz (main part) and Prof. Dr. Christian Fries

Schedule and Venue

February 23rd, 2017

Morning Session 1:    8:30 - 10:00
Morning Session 2:   10:30 - 12:00
Afternoon Session 1: 14:00 - 15:30
Afternoon Session 2: 16:00 - 17:30

Room B 121

February 24th, 2017

February 25th, 2017: Exercise session and/or final exam
9:00 - 12:00

Registration and Contact: To register, please send an email to:

Course Description

This course will give an introduction to parallel programming on general-purpose graphics processing units (GPGPUs) using NVIDIA's CUDA architecture. GPGPUs differ from ordinary CPUs in their vast number of (rather simple) processor cores; when all cores are utilized efficiently, they can outperform ordinary CPUs by several orders of magnitude. We will start with a brief overview of the hardware design of CUDA devices and general aspects of multi-threading, before discussing the generation of random numbers on parallel architectures in detail. There are two general approaches to this problem: the batch approach, where the challenge lies in determining a sequence of seed values that can be processed in independent streams and still yields, in total, a series of independent random numbers, and the skip-ahead approach, which modifies a random number generator so that it can jump ahead in its original sequence. We will then apply these methods to the valuation of derivatives and develop an efficient and numerically stable scheme for Monte-Carlo simulation on GPU devices.
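As an illustration of the skip-ahead idea, here is a minimal sketch (not part of the course material) based on NVIDIA's cuRAND device API: passing a distinct subsequence index to curand_init makes each thread jump far ahead in the generator's sequence, so the per-thread random number streams do not overlap. Kernel and variable names are illustrative.

    #include <curand_kernel.h>

    // One XORWOW generator state per thread. The "subsequence" argument of
    // curand_init() skips ahead to a distant point of the generator's sequence,
    // so the streams produced by different threads do not overlap.
    __global__ void setup_states(curandState *states, unsigned long long seed)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        curand_init(seed, /*subsequence=*/tid, /*offset=*/0, &states[tid]);
    }

    __global__ void draw_uniforms(curandState *states, float *out, int perThread)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        curandState local = states[tid];            // work on a register copy
        for (int i = 0; i < perThread; ++i)
            out[tid * perThread + i] = curand_uniform(&local);
        states[tid] = local;                        // store the advanced state
    }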

As a final topic we will give a short survey of another very popular application of GPU computing: Deep Learning for Neural Networks. We will discuss the basic computational problems for Neural Networks and give an overview of existing frameworks for state-of-the-art Deep Learning. On top of that, we will study a highly efficient GPU implementation of a popular word embedding algorithm based on Neural Networks.


  • Software Development Tools + Basic CUDA
  • Introduction to the Linux systems in the computer room
  • CUDA SDK components (libraries, nvcc, Nsight)
  • CUDA architecture (SIMT principle + memory design)
  • Basic CUDA language extensions
  • Basic programming examples (parallel vector addition etc.; see the sketch after this list)
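To give a flavour of these first programming examples, the following is a minimal sketch of parallel vector addition; the kernel, launch configuration and use of managed memory are illustrative choices, not the course's reference solution.

    #include <cstdio>
    #include <cuda_runtime.h>

    // One thread per vector element; the guard handles sizes that are not a
    // multiple of the block size.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);       // unified memory, visible to host and device
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int block = 256;
        int grid  = (n + block - 1) / block;
        vecAdd<<<grid, block>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);        // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }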


  • Monte-Carlo Simulation and Applications in Finance
  • Principles of parallel random number generation (skip-ahead vs. batch approach)
  • Linear congruential random number generators
  • cuRAND library and XORShift generators
  • Vector summation in parallel (reduction principle; see the sketch after this list)
  • Concurrency and atomic operations
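The last two items fit together naturally: a shared-memory reduction computes a partial sum per block, and a single atomic operation per block combines these partial sums. The following is a minimal sketch, assuming a power-of-two block size; it is not the course's reference implementation.

    // Tree reduction in shared memory, then one atomicAdd per block to combine
    // the partial sums. *result must be initialized to 0.0f before the launch.
    __global__ void sumReduce(const float *in, float *result, int n)
    {
        extern __shared__ float sdata[];
        int tid = threadIdx.x;
        int i   = blockIdx.x * blockDim.x + threadIdx.x;

        sdata[tid] = (i < n) ? in[i] : 0.0f;
        __syncthreads();

        // Halve the number of active threads in every step (power-of-two block size).
        for (int s = blockDim.x / 2; s > 0; s >>= 1) {
            if (tid < s) sdata[tid] += sdata[tid + s];
            __syncthreads();
        }

        if (tid == 0) atomicAdd(result, sdata[0]);   // resolve the inter-block race
    }

    // Example launch, passing the shared-memory size as the third parameter:
    //   sumReduce<<<grid, block, block * sizeof(float)>>>(d_in, d_result, n);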


  • Monte-Carlo simulation framework for pricing options (see the pricing-kernel sketch after this list)
  • Neural Networks and Applications in Big Data
  • Stochastic gradient algorithm for GPUs (HOGWILD!)
  • Introduction to the CUDA Deep Neural Network library cuDNN
  • Overview of modern Deep Learning frameworks
  • Java bindings for CUDA
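As a minimal sketch of the kind of pricing kernel a Monte-Carlo framework is built around, the example below values a European call under Black-Scholes dynamics with one path per thread. The parameter names are illustrative, and the naive atomicAdd accumulation is used only for brevity; the course develops a reduction-based scheme that is faster and numerically more stable.

    #include <curand_kernel.h>

    // Each thread simulates one terminal stock price under Black-Scholes dynamics
    // and accumulates the discounted call payoff; the host divides the result by
    // nPaths to obtain the Monte-Carlo price estimate.
    __global__ void mcCallPayoff(float *sumPayoff, int nPaths,
                                 float S0, float K, float r, float sigma, float T,
                                 unsigned long long seed)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        if (tid >= nPaths) return;

        curandState state;
        curand_init(seed, tid, 0, &state);           // independent stream per path

        float z  = curand_normal(&state);            // standard normal increment
        float ST = S0 * expf((r - 0.5f * sigma * sigma) * T
                             + sigma * sqrtf(T) * z);
        float payoff = expf(-r * T) * fmaxf(ST - K, 0.0f);

        // Naive accumulation for brevity; a block-wise reduction is preferable
        // for large path counts.
        atomicAdd(sumPayoff, payoff);
    }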


To be announced.

For whom is this course?

Target Participants: Master students of Mathematics or Business Mathematics.

Prerequisites: Probability Theory, Finanzmathematik II (Stochastic Calculus).

Applicable credits: Students will receive 3 ECTS points upon successful participation, which may be attributed to one of the following modules: WP18/1 for students enrolled in the LMU Master Mathematics programme; WP20, WP22 or WP23 for students enrolled in the LMU Master Business Mathematics (Wirtschaftsmathematik) programme.


The course will also include exercise sessions in which you will work hands-on with implementations of the presented techniques and models. Active participation in the exercise sessions is strongly recommended.


The written exam is open book; that is, all notes, books, exercise solutions etc. may be used. Personal electronic devices of any kind are not allowed. To participate, please bring your ID card or passport and your student card to the exam. Please be on time.