MLIR
Overview
MLIR, or Multi-Level Intermediate Representation, is a representation format and library of compiler utilities that sits between the model representation and the low-level compilers/executors that generate hardware-specific code.
MLIR is, at its heart, a flexible infrastructure for modern optimizing compilers. This means it consists of a specification for intermediate representations (IR) and a code toolkit to perform transformations on that representation. (In compiler parlance, as you move from higher-level representations to lower-level representations, these transformations can be called "lowerings.")
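To make the idea of lowering concrete, here is a minimal sketch in MLIR's textual IR. The function names and the exact dialect choices are illustrative assumptions, not taken from this page:

```mlir
// High-level form: a TensorFlow dialect op operating on tensors.
func.func @add(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
  %0 = "tf.AddV2"(%a, %b) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
  return %0 : tensor<4xf32>
}

// One possible lowered form of the same computation, expressed with the
// lower-level arith dialect on vectors instead of TensorFlow ops.
func.func @add_lowered(%a: vector<4xf32>, %b: vector<4xf32>) -> vector<4xf32> {
  %0 = arith.addf %a, %b : vector<4xf32>
  return %0 : vector<4xf32>
}
```

A lowering pass rewrites ops like the first form into ops like the second, step by step, until the IR is close enough to the target for code generation.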
MLIR is highly influenced by [LLVM](https://llvm.org/) and unabashedly reuses many great ideas from it. It has a flexible type system, and allows representing, analyzing, and transforming graphs that combine multiple levels of abstraction in the same compilation unit. These abstractions include TensorFlow operations, nested polyhedral loop regions, and even LLVM instructions and fixed hardware operations and types.
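As a hypothetical illustration of mixing abstraction levels in one compilation unit, the following sketch combines structured control flow (`scf`), buffer accesses (`memref`), and scalar arithmetic (`arith`) inside a single function; the function name and shapes are assumptions for the example:

```mlir
// Scale every element of a 16-element buffer in place, mixing three
// dialects at different abstraction levels in the same function.
func.func @scale(%buf: memref<16xf32>, %c: f32) {
  %c0 = arith.constant 0 : index
  %c16 = arith.constant 16 : index
  %c1 = arith.constant 1 : index
  scf.for %i = %c0 to %c16 step %c1 {
    %v = memref.load %buf[%i] : memref<16xf32>   // memory abstraction
    %s = arith.mulf %v, %c : f32                 // scalar arithmetic
    memref.store %s, %buf[%i] : memref<16xf32>
  }
  return
}
```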
We expect MLIR to be of interest to many groups, including:

- Compiler researchers and implementers looking to optimize the performance and memory consumption of machine learning models
- Hardware makers looking for a way to connect their hardware to TensorFlow, such as TPUs, portable neural hardware in phones, and other custom ASICs
- People writing language bindings who want to take advantage of optimizing compilers and hardware acceleration
The TensorFlow ecosystem contains a number of compilers and optimizers that operate at multiple levels of the software and hardware stack. We expect the gradual adoption of MLIR to simplify every aspect of this stack.