By Martin Berger (University of Sussex & Montanarius Ltd).
Graphics Processing Units (GPUs) are the workhorses of high-performance computing. For applications that fit their programming paradigm, they can surpass CPU performance by several orders of magnitude, as the advances in deep learning vividly demonstrate. Yet a significant class of applications, especially in automated reasoning (such as SAT/SMT solvers), has yet to reap the benefits of GPU acceleration. In this talk we discuss recent work that successfully implemented program synthesis on GPUs and used it to accelerate the learning of logical specifications from examples. We conclude by mapping out a research programme for moving more formal verification workloads to GPUs.
Bio: Martin Berger did his PhD in formal models for distributed systems at Imperial College. He's currently an associate professor in the Department of Informatics at the University of Sussex. He's also a verification consultant for the microprocessor industry and one of the maintainers of the official RISC-V instruction set architecture (https://github.com/riscv/sail-riscv). His research interests include logic and verification, typing systems, process calculus, meta-programming, JIT compilers, instruction set simulators, program synthesis, algorithms, data structures and programming languages for GPU programming.