Design of SymbolicExpression types #1607
To model mixed-mode numeric and symbolic expressions like …
In ASR the …
Ohh okay, I thought we would be emitting calls to SymEngine objects while dealing with the backend, but a pass would also do the job. Also, one small doubt I had was about something related to overloading of IntrinsicFunction. Here the …
Also, as you write about this, simplifications or optimizations during ASR creation sound very cool. Do you have some operation or case in mind where this would come into the picture?
Thanks, @certik. The idea looks awesome. Once we have a good frontend interface and a working ASR, we can possibly have many symbolic optimizations at the ASR level (using pass-rewrites), which can save us a lot in the backend and also help with runtime performance.
… and many more.
Yes, the kind of optimizations we can do at compile time as ASR->ASR passes include evaluating the symbolic expression at compile time if the values are known. So for example, the above input:

    def f():
        x: S = Symbol("x")
        y: S = Symbol("y")
        z: S = x + y
        print(sin(z))

can be fully transformed to just:

    def f():
        print("sin(x+y)")

since there are no other side effects and everything is known. That can be achieved by calling SymEngine at compile time, in an ASR pass, and just evaluating everything. But as I said, that comes later.
Regarding the …
Would it make sense to introduce all binary operators (…)?
Just a thought, because the ASR for something like …
Also, however we handle this, would we need …?
Also, for handling scientific constants like pi, the golden ratio, etc., would the best way be to have an IntrinsicFunction defining each one of these, or could we have a …
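Whichever way this ends up being modeled in ASR, SymEngine's C wrapper already exposes several such constants directly, so a per-constant IntrinsicFunction could lower to a single call. A minimal sketch, assuming the `basic_const_pi` and `basic_const_E` helpers from `symengine/cwrapper.h`:

```c
#include <stdio.h>
#include <symengine/cwrapper.h>

int main(void)
{
    basic pi_c, e_c;
    basic_new_stack(pi_c);
    basic_new_stack(e_c);

    basic_const_pi(pi_c);        /* pi as a symbolic constant */
    basic_const_E(e_c);          /* Euler's number e */

    char *s = basic_str(pi_c);   /* prints "pi" */
    printf("%s\n", s);
    basic_str_free(s);

    basic_free_stack(e_c);
    basic_free_stack(pi_c);
    return 0;
}
```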
I would start with just using IntrinsicFunction for everything symbolic, and then we'll see how it goes. Yes, in principle any expression can be just IntrinsicFunction, but I do not recommend doing that, since that's a huge change and it is not clear it is a benefit. After we have more experience with IntrinsicFunction, and understand the pros and cons of each approach, we can unify things more.
Yeah, I just thought about it because, as you said, for …
I have a couple of doubts here. Like, after we have our ASR down, we need to implement a pass. But as you've mentioned, would these functions be present in the IntrinsicFunction rewriting pass (…)? Because we essentially need to convert the whole thing, like assignments and print statements.
So shouldn't we have a custom pass …? But once this is done, I'm not fully sure how our ASR would look after that pass, as in how do we store/represent our generated C code (…)?
I think most of these will be done in the backend, simply emitting the proper code for each ASR node, like this: … It needs to track where to deallocate the symbols (…).
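A rough sketch of what such emitted code could look like, with deallocation tracked per scope; this assumes the stack-based handles from `symengine/cwrapper.h` and is only illustrative:

```c
#include <symengine/cwrapper.h>

/* Each SymbolicExpression variable gets a `basic` handle; the backend
 * emits a matching basic_free_stack at the end of the owning scope. */
void scope_example(int flag)
{
    basic x;
    basic_new_stack(x);
    symbol_set(x, "x");

    if (flag) {
        basic tmp;               /* temporary owned by the inner scope */
        basic_new_stack(tmp);
        basic_mul(tmp, x, x);    /* tmp = x*x */
        /* ... use tmp ... */
        basic_free_stack(tmp);   /* freed when the inner scope ends */
    }

    basic_free_stack(x);         /* freed when the function scope ends */
}
```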
Here is how to call `symengine_str`:

diff --git a/src/libasr/codegen/llvm_utils.h b/src/libasr/codegen/llvm_utils.h
index e3a06ba97..e1922c1c7 100644
--- a/src/libasr/codegen/llvm_utils.h
+++ b/src/libasr/codegen/llvm_utils.h
@@ -29,6 +29,19 @@ namespace LCompilers {
builder.CreateCall(fn_printf, args);
}
+ static inline void symengine_str(llvm::LLVMContext &context, llvm::Module &module,
+ llvm::IRBuilder<> &builder, const std::vector<llvm::Value*> &args)
+ {
+ llvm::Function *fn_printf = module.getFunction("symengine_str");
+ if (!fn_printf) {
+ llvm::FunctionType *function_type = llvm::FunctionType::get(
+ llvm::Type::getVoidTy(context), {llvm::Type::getInt8PtrTy(context)}, true);
+ fn_printf = llvm::Function::Create(function_type,
+ llvm::Function::ExternalLinkage, "symengine_str", &module);
+ }
+ builder.CreateCall(fn_printf, args);
+ }
+
static inline void print_error(llvm::LLVMContext &context, llvm::Module &module,
llvm::IRBuilder<> &builder, const std::vector<llvm::Value*> &args)
{
Initially LPython will not know how to link, we just do it manually like this: …
Here is a design to get started:

In ASR, this gets represented using a new `SymbolicExpression` type (and we use `S` in Python code, to keep things short). Later we can add more subtypes of `S`, such as `Symbol`, `SAdd`, `STimes`, `SInteger`, etc., as well as casting from `S` to some of these types (checked in Debug mode), so that one can write functions that only accept those, but for now we'll just use `S`. The `+` becomes `SymbolicAdd`, an `IntrinsicFunction` accepting two `SymbolicExpression` arguments. The `Symbol` becomes `SymbolicSymbol(str) -> SymbolicExpression`, also an `IntrinsicFunction`.

In the IntrinsicFunction rewriting pass we implement all these by calling into C's API of SymEngine (https://github.com/symengine/symengine/blob/2b575b9be9bb499d866dc3e411e6368ca0d1bb42/symengine/tests/cwrapper/test_cwrapper.c#L19), effectively transforming the code to something like:
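A minimal sketch of what that lowered code could look like, written against SymEngine's C wrapper; the exact call sequence is illustrative (following the linked test_cwrapper.c) rather than the final lowering:

```c
#include <stdio.h>
#include <symengine/cwrapper.h>

/* Illustrative lowering of:
 *   x: S = Symbol("x"); y: S = Symbol("y"); z: S = x + y; print(sin(z))
 */
void f(void)
{
    basic x, y, z, s;
    basic_new_stack(x);          /* stack-allocated handle for each S variable */
    basic_new_stack(y);
    basic_new_stack(z);
    basic_new_stack(s);

    symbol_set(x, "x");          /* x = Symbol("x") */
    symbol_set(y, "y");          /* y = Symbol("y") */
    basic_add(z, x, y);          /* z = x + y */
    basic_sin(s, z);             /* sin(z) */

    char *str = basic_str(s);    /* string form for printing */
    printf("%s\n", str);
    basic_str_free(str);

    basic_free_stack(s);         /* deallocate when leaving the scope */
    basic_free_stack(z);
    basic_free_stack(y);
    basic_free_stack(x);
}
```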
We can use any backend, such as the LLVM backend to compile this to an object file. Then at link time we have to link in the SymEngine library (and C++ runtime library).
We probably set the `allocatable` attribute for `x`, which is a `Variable` of type `SymbolicExpression`, and treat it like an allocatable array or a string. We will reuse our existing `allocatable` mechanism that ensures that things get properly deallocated when they go out of scope.

This minimal design does not close any doors and we can later extend it in various ways, as needed (such as compile-time simplification/evaluation, many subtypes, adding more dedicated ASR nodes if needed, various ASR optimizations, etc.).