I had originally planned something like this for my PhD thesis, but found out I was in way over my head. So I scaled down my ambitions a little.
Array languages and SIMD are a match made in heaven. This should be the paradigm of choice for high-performance programming, but it's unfortunately pretty obscure.
> Array languages and SIMD are a match made in heaven. This should be the paradigm of choice for high-performance programming, but it's unfortunately pretty obscure.
Huh. I kinda figured the whole point of array programming languages was that the compiler doesn't have to guess which parts of the code are inherently parallel.
So, as someone who is by no means an expert: you're half right. The compiler doesn't have to guess which parts are parallel, and it's very clear which ops are parallelisable, but how you parallelise them is the name of the game.
For example, if you follow a pattern of "do a small op to each part of a large block of data, then do another small op to each part of that block, etc.", then at least in CPU SIMD (e.g. AVX) you end up memory-bottlenecked.
However, if you can do a bunch of ops on the same small blocks of data before moving on to the next blocks within your overall large block, those small blocks can fit inside the L1 cache (or in the registers directly), and that can run the CPU to its absolute limit.
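To make that concrete, here's a minimal C++ sketch of the fusion idea (my own illustration, not from the thread): the two-pass version streams the whole array through memory twice, while the fused version touches each element once and keeps the intermediate value in a register.

```cpp
#include <cstddef>

// Two passes: the array is streamed through memory twice, so for large n
// the loops are limited by memory bandwidth, not arithmetic throughput.
void two_passes(float* x, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) x[i] = x[i] * 2.0f;  // sweep 1
    for (std::size_t i = 0; i < n; ++i) x[i] = x[i] + 1.0f;  // sweep 2
}

// Fused: same result, one sweep. Each element is loaded and stored once,
// and the intermediate (x[i] * 2) never leaves a register.
void fused(float* x, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) x[i] = x[i] * 2.0f + 1.0f;
}
```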
Hence it becomes a game of scheduling. You already know what you need to optimise, but actually doing so gets really hard really fast. That said, things like MLIR (which is still very new) are making this easier to approach.
> Hence it becomes a game of scheduling. You already know what you need to optimise, but actually doing so gets really hard really fast.
This immediately makes me think of Halide, which was specifically invented to make this easier by decoupling the algorithm from the schedule.
Kind of sad that it doesn't seem to have caught on much.
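For anyone who hasn't seen Halide, the split looks roughly like this; a minimal sketch against Halide's C++ API (the pipeline itself is my own illustration):

```cpp
#include "Halide.h"
using namespace Halide;

int main() {
    Var x;
    Func f;

    // The algorithm: *what* to compute.
    f(x) = cast<float>(x) * 2.0f + 1.0f;

    // The schedule: *how* to compute it -- here, 8-wide SIMD lanes.
    f.vectorize(x, 8);

    Buffer<float> out = f.realize({1024});
    return 0;
}
```

The point is that the scheduling line can be swapped for tiling, parallelism, or unrolling without touching the algorithm definition above it.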
Well, it has, actually. MLIR (built by the LLVM team) is basically the next generation of LLVM, and one of the MLIR tutorials is literally "write Halide".
Oh, that's cool! I hadn't looked into MLIR in any detail yet. Thank you for pointing me in that direction.
> it's unfortunately pretty obscure
NumPy is partly inspired by APL and its descendants. It's one of the few places where programmers commonly get the performance the hardware affords!
Indeed. It's sad how GPU programming is mostly stuck in the dark ages of C/C++ (well, worse than C++, with proprietary, mutually incompatible variants and buggy software stacks). We have Futhark, at least...
Aaron Hsu has an APL compiler targeting GPUs that gets tantalizing performance on machine learning workloads: https://dl.acm.org/doi/10.1145/3589246.3595371
It's about the author's language, named Apple[1]. It took me a few seconds to realize that, since "Apple Array System" unfortunately sounds like some macOS framework.
For people who prefer C-like syntax, there is ispc[2], which supports x86 AVX and ARM Neon programming via LLVM.
It does use arm64 and Neon instructions, and talks a bit about the macOS-specific Accelerate.framework functions, so it's not totally unrelated.
One of my blog posts was posted here and got some comments. This is a more refined take in the vein of "C is Not Suited to SIMD".
A more apt title would probably be "type-directed optimization for SIMD"; I don't see it as particularly useful for array languages at large (many of them, such as APL and J, are untyped by intentional choice).