Description
Currently the backend chooses how arrays are implemented, and the decision of when to use a descriptor and when just a pointer is complex (and getting more complicated as we develop):
If we can't determine this robustly ourselves, we might need to annotate the array in the source code, perhaps change i8[64] to NoDescriptor[i8[64]] or something like that, then set this in ASR.
If the decision is this complex, then I think it should be part of ASR itself, and the optimizer (or even the frontend, or both) should choose.
There is more to it though: an array might live on a device (Device[i8[64]]), etc. We also have optimization ASR->ASR passes that take a descriptor array and turn it into a pointer array.
Currently, whether an array is a pointer array or a descriptor array is implicit in ASR, and the rules are complex and getting more complex.
To make progress, let's do the following:
- Summarize all usages of arrays, and for each denote whether it can be done with a pointer array, a descriptor array, or both
- Which use cases allow both?
- Which use cases require pointer arrays, but descriptor arrays would not work?
- Which use cases require descriptor arrays, but pointer arrays would not work?
Having answers to these questions will allow us to guide the design. The most obvious simple design is to add a flag to the Variable that, if it is an array, records what kind of array it is.
The flag can also be in the type, to allow tracking this through an array expression like x(:) + y(:). If we do this, we should probably introduce a dedicated Array type (like List), and then we can add these flags there.