PolyMage Labs is a deep-tech software startup specializing in high-performance compiler and code generation systems for Artificial Intelligence computations. Compilers are programming language translators: in the simplest terms, just as humans translate and understand natural languages, compilers translate programming languages into instructions that hardware can understand and execute. PolyMage Labs offers both products and services to build compilers for the programming languages and models used in Artificial Intelligence and Machine Learning computations. Its target customers fall into three groups: businesses building new hardware to accelerate AI, businesses building and relying on AI algorithms that need to execute faster, and businesses (such as cloud computing providers) that in turn provide a platform for their users to run AI computations fast.
PolyMage Labs was set up with two objectives: (1) to translate technology from advanced research in compilers and automatic code generators into solutions for complex programming challenges in the AI software development domain, and (2) to build world-class expertise in automatic code generation and high-performance AI systems in India.
Most of the founding and technology building at PolyMage Labs towards its current goals started in May 2019. We acquired our first customer soon after. (Please see the explainer video linked above for a testimonial.) The team consisted of just me for the first 9-12 months, grew to about three members over the following six months, and now stands at a strong eight. A note on our journey, highlighting the challenges involved in founding a deep-tech startup in the computer systems area, appears towards the end of the first video.
PolyBlocks are compiler building blocks being developed by PolyMage: they allow rapid creation of new compilers and code generators for several domains served by dense tensor computations, including deep learning, image processing pipelines, and stencil computations used in science and engineering. These building blocks are in the form of MLIR operations (explained further below) and their transformation utilities. Highly optimized code for these operations is generated using a number of advanced techniques from research. The same building blocks are meant to be reusable across a variety of programming models and target hardware.
Our USP is the polyhedral compiler technology that powers PolyBlocks. Please see the middle part of the explainer video for more details. For a long time, this technology relied on expertise available mostly within academic research communities and had not been fully translated into a form suitable for widespread production use. That changed when MLIR was created and open-sourced in April 2019. The founder of PolyMage Labs was a founding team member of the MLIR project during his stint as a visiting researcher at Google in 2018.
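To give a flavor of what polyhedral compiler technology automates: it models the iterations of a loop nest as integer points in a polyhedron and remaps them with affine functions, enabling transformations such as loop tiling. Below is a hand-written Python sketch of tiling, purely for illustration; the sizes, tile shape, and the doubling computation are invented here and are not PolyBlocks code.

```python
# Illustrative sketch of loop tiling, a classic polyhedral-style
# transformation. Both functions visit the same (i, j) iteration
# space; the tiled version traverses it tile by tile, which improves
# data locality on real hardware without changing the result.

N, M, T = 8, 8, 4  # problem sizes and tile size (arbitrary, for the demo)

def original(a):
    # Original loop nest: row-by-row traversal of the iteration space.
    out = [[0] * M for _ in range(N)]
    for i in range(N):
        for j in range(M):
            out[i][j] = a[i][j] * 2
    return out

def tiled(a):
    # Tiled loop nest: outer loops step over T x T tiles, inner loops
    # sweep within a tile. Every (i, j) point is visited exactly once.
    out = [[0] * M for _ in range(N)]
    for it in range(0, N, T):
        for jt in range(0, M, T):
            for i in range(it, min(it + T, N)):
                for j in range(jt, min(jt + T, M)):
                    out[i][j] = a[i][j] * 2
    return out
```

A polyhedral compiler derives such remappings automatically, along with legality checks, rather than requiring the programmer to restructure the loops by hand.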
MLIR stands for Multi-Level Intermediate Representation, and the MLIR project is an open-source compiler infrastructure project. MLIR was announced and open-sourced by Google in April 2019 but is now a community-driven compiler infrastructure project that is part of the LLVM project. The MLIR project was initiated to deliver the next-generation optimizing compiler infrastructure, with a focus on serving the computational demands of AI and machine learning programming models. At Google itself, one of the project's goals was to address the compiler challenges associated with the TensorFlow ecosystem. MLIR is a new intermediate representation designed to provide a unified, modular, and extensible infrastructure to progressively lower dataflow compute graphs, potentially through loop nests, to high-performance target-specific code. MLIR shares similarities with traditional control-flow-graph-based three-address static single assignment (SSA) representations (such as LLVM IR and the Swift Intermediate Language) but also introduces notions from the polyhedral compiler framework as first-class concepts, allowing powerful analysis and transformation in the presence of loop nests and multi-dimensional arrays. MLIR supports multiple front- and back-ends and uses LLVM IR as one of its primary code generation targets. It is thus a very useful infrastructure for developing new compilers, especially for solving the compilation challenges involved in targeting emerging AI and machine learning programming languages/models to the plethora of specialized accelerator chips.
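The idea of progressive lowering can be sketched in plain Python. This is a toy illustration of the concept only, not MLIR's actual API or syntax; the function names here are invented for the example. A single high-level graph op (a matrix multiplication) is rewritten into an explicit loop nest, the form on which loop-level transformations operate, before a real compiler would lower it further to target-specific code such as LLVM IR.

```python
# Toy illustration of progressive lowering (invented mini-"compiler",
# not MLIR's API). At the graph level, matmul is a single op on whole
# tensors; lowering exposes it as three nested loops that subsequent
# passes could tile, fuse, or parallelize.

def lower_matmul(n, m, k):
    # "Lowering pass": produce the loop-nest form of an (n x k) by
    # (k x m) matrix multiplication as an executable kernel.
    def kernel(a, b):
        c = [[0] * m for _ in range(n)]
        for i in range(n):          # loop nest made explicit ...
            for j in range(m):
                for p in range(k):  # ... so loop transforms can act on it
                    c[i][j] += a[i][p] * b[p][j]
        return c
    return kernel

# One graph-level op becomes three nested loops after lowering.
matmul_2x2 = lower_matmul(2, 2, 2)
```

The point is the staging: each level of the representation keeps just enough structure for the optimizations that make sense there, which is what MLIR's multi-level design enables.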
All of PolyMage Labs' technology is based on (i.e., built on top of) the MLIR infrastructure. This also allows us to benefit from and contribute back to the open-source community. We believe that certain parts of the infrastructure can only thrive in the open by being readily available for reuse by all stakeholders.
I am listing the key drivers from my perspective. While the first three are commonly cited, the fourth is just as important.
1) Innovations in computer hardware towards massive parallelism, as well as custom specialized accelerator units on chips, for the computations used in these domains;

2) The generation and availability of data, which in turn resulted from the widespread use of information technology, smartphones, and data centres;

3) Innovation in and development of programming models, libraries, packages, compilers, code generation tools, visualizers, and the surrounding software ecosystem that aid software development in these fields (this resulted from both (1) and (2) and was thus partly an effect rather than a cause);
4) The underlying computation patterns that appear in several recent successful areas of AI (in particular, deep learning) are simple, regular, widely studied, and easy to optimize: they lend themselves to effortless acceleration on parallel hardware. They are typically dominated by matrix-matrix multiplication (matmul), matmul-like patterns, or other fully "data-parallel" computations (i.e., ones that can be executed in parallel on different data elements). Designing new hardware and software to make them execute even faster thus also becomes easier, and this has led to a self-reinforcing feedback cycle.
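The "data-parallel" property can be made concrete: in an elementwise operation, every output element depends only on its own input, so elements can be computed in any order or simultaneously with no coordination. A small Python sketch (illustrative only; the function names are invented, and ReLU is used just as a familiar elementwise op from deep learning):

```python
from concurrent.futures import ThreadPoolExecutor

def relu(x):
    # Elementwise op common in deep learning: each output element
    # depends only on the matching input element.
    return x if x > 0 else 0

def relu_serial(xs):
    # Serial version: one element at a time, in order.
    return [relu(x) for x in xs]

def relu_parallel(xs):
    # Because the elements are independent, they can be mapped onto
    # parallel workers; the result is identical to the serial version.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(relu, xs))
```

On real accelerators, the same independence is what lets thousands of hardware lanes process different elements at once.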
In general, the ML/AI/data analytics systems industry can be broadly classified into software and hardware. The challenge the hardware industry faces is making its chips easily usable by programmers; this is one of the areas where PolyMage invents, innovates, and solves hard problems. The ML/AI systems software industry, on the other hand, is grappling with building and maintaining high-performance software and constantly adapting it to evolving hardware. A lot of expert human effort goes into writing, rewriting, re-optimizing, and retuning code for newer generations of hardware. Providing the right tools, programming models, libraries, and automation is key here, and this is the second target audience for PolyMage Labs' technology.
PolyBlocks. Please see the videos. Our automatic code generation systems are, in several cases, competitive with the best expert hand-tuned code, and they often deliver improvements ranging from 5x to 25x over widely used state-of-the-art programming systems.
Building an AI software "systems" company requires a very different talent pool than the one needed for an AI applications company. The former requires strong computer science skills, and we often find the quality of CS engineering education in India not up to the mark here. So the right kind of technology talent is one of the biggest challenges for the deep-tech software systems industry in India. At PolyMage Labs, we spend a significant amount of time onboarding and training our engineers, and they are able to acquire these skills over time.
We have been growing rapidly over the last 1.5 years, and the coming year will be crucial. Technology building in this area is a multi-year effort. We plan to build out more of the PolyBlocks technology in the next year and move towards an increasingly scalable business. Business in our space of AI software and compiler systems will always be a combination of products and services, often with a substantial services component. As the technology stabilizes and matures, we expect to expand the product component further in the coming year. We also plan to significantly expand the scope of our projects with existing customers.