2017 EuroLLVM Developers’ Meeting: A. Pilipenko “Expressing high level optimizations within LLVM”
—
Expressing high level optimizations within LLVM - Artur Pilipenko, Azul Systems
—
At Azul we are building a production-quality, state-of-the-art, LLVM-based JIT compiler for Java. Originally designed for C and C++, LLVM IR is a rather low-level representation, which makes it challenging to represent and exploit high-level Java semantics in the optimizer. One approach is to perform all high-level transformations over a separate IR before lowering the code to LLVM IR, as is done in the Swift compiler. However, this involves building a new IR and the related infrastructure. In our compiler we have instead opted to express all the information we need directly in LLVM IR. In this talk we will outline the embedded high-level IR that enables us to perform high-level, Java-specific optimizations over LLVM IR. We will show the optimizations built on top of it and discuss the pros and cons of the approach we chose.
The Java type framework is the core of the system we built. It allows us to express information about the Java types of the objects referenced by pointer values. One source of this information is the bytecode: our frontend uses metadata and attributes to annotate the IR with the types known from the bytecode. On the optimizer side we have a type inference analysis that computes the type of any given value using the frontend-generated facts and other information, such as type checks in the code. This analysis is used by Java-specific optimizations, such as devirtualization and simplification of type checks. We also taught some of the existing LLVM analyses and passes to take Java type information into account. For example, we use the Java type of a pointer to infer its dereferenceability and aliasing properties, and we made the inline cost analysis more accurate in the presence of Java-type-based optimizations. We will discuss the optimizations we built on top of the Java type framework and show how existing optimizations interact with it. Some parts of the system we built may be useful to others, so we would like to start a discussion about upstreaming them.
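To make the mechanism concrete, here is a minimal, hypothetical sketch of what such a frontend annotation could look like in textual LLVM IR (2017-era typed pointers). The metadata kind `!java.type`, its encoding, and the helper `@resolve_virtual_target` are illustrative assumptions for this sketch only, not the actual names used in Azul's compiler.

```llvm
; Hypothetical sketch: Java type facts attached to ordinary LLVM IR as metadata
; so that later Java-aware passes can consume them. All names are illustrative.

declare i8* @resolve_virtual_target(i8*, i32)   ; stand-in for virtual dispatch

define i8* @example(i8** %field) {
entry:
  ; Frontend-known fact from the bytecode: the loaded reference is exactly
  ; a java.lang.StringBuilder (encoded here as an assumed !java.type node).
  %obj = load i8*, i8** %field, !java.type !0

  ; A type-inference-driven devirtualization pass could use the exact-type
  ; fact on %obj to fold this dispatch into a direct call.
  %target = call i8* @resolve_virtual_target(i8* %obj, i32 7)
  ret i8* %target
}

!0 = !{!"java/lang/StringBuilder", i1 true}     ; (class name, is-exact flag)
```

In such a scheme, the type inference analysis would combine these frontend facts with type checks visible in the IR, and Java-specific passes (devirtualization, type-check simplification) as well as taught existing analyses (dereferenceability, aliasing, inline cost) would query the inferred types rather than re-deriving them.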
—