The default dprint output contains a lot of detail that is irrelevant for most users:
`Elemwise{...}`: almost all operations are `Elemwise`, because users don't build graphs with scalars, so I suggest we just call it `Add`. The "type" of addition can be guessed from the input types, which are visible with `print_type=True`. This will be even more relevant after "Implement graph.vectorize and Blockwise Op" (#306), where everything that is not an `Elemwise` will likely be a `Blockwise`. We can maybe add a kwarg to show the exact `Op` that is being used.
The `inplace`/`no_inplace` flag is seldom important for users. This information is still visible via either `print_destroy_map` or `print_view_map`. Maybe we can have a `print_memory_map` kwarg that is equivalent to setting both of those to `True`.
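The proposed kwarg would just be shorthand for the two existing flags. A minimal pure-Python sketch of that behaviour (this is an illustration of the proposal, not PyTensor's actual `debugprint` implementation; the returned dict stands in for the real printing logic):

```python
def debugprint(graph, print_destroy_map=False, print_view_map=False,
               print_memory_map=False):
    """Sketch of the proposal: `print_memory_map` enables both
    memory-related annotations at once."""
    if print_memory_map:
        print_destroy_map = True
        print_view_map = True
    # The real function would walk `graph` and print each node,
    # adding destroy_map/view_map annotations when the flags are set.
    # Here we just return the resolved flags for illustration.
    return {"destroy": print_destroy_map, "view": print_view_map}
```

Keeping the two fine-grained flags around means advanced users can still request only one of the annotations.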
DimShuffles are incredibly common, but very obscure, because "DimShuffle" is a term used only in PyTensor. I suggest we specialize the name for the following 4 cases:

- `ExpandDims{axis/axes=dims}`
- `DropDims{axis/axes=dims}` (or `Squeeze`?)
- `Transpose{axis/axes=dims}`
- `DimShuffle{order}` (if it's a mix of the ones above)
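To make the four cases concrete: a DimShuffle is described by an `order`, a sequence of input axis indices with `"x"` marking an inserted broadcastable axis (PyTensor's convention). A plain-Python sketch of how the specialized name could be chosen (the function name is hypothetical, not existing API):

```python
def specialize_dimshuffle_name(order, input_ndim):
    """Classify a DimShuffle `order` into one of the four proposed names.

    `order` follows PyTensor's convention: input axis indices, with "x"
    marking a new broadcastable (length-1) axis. `input_ndim` is the
    number of dimensions of the input.
    """
    kept = [d for d in order if d != "x"]
    inserts_dims = len(order) > len(kept)          # any "x" entries
    drops_dims = len(kept) < input_ndim            # some input axes gone
    permutes = kept != sorted(kept)                # kept axes reordered

    if inserts_dims and not drops_dims and not permutes:
        new_axes = tuple(i for i, d in enumerate(order) if d == "x")
        return f"ExpandDims{{axes={new_axes}}}"
    if drops_dims and not inserts_dims and not permutes:
        dropped = tuple(d for d in range(input_ndim) if d not in kept)
        return f"Squeeze{{axes={dropped}}}"
    if permutes and not inserts_dims and not drops_dims:
        return f"Transpose{{axes={tuple(order)}}}"
    # A mix of the cases above: fall back to the generic name.
    return f"DimShuffle{{order={tuple(order)}}}"
```

For example, `("x", 0)` on a vector is a pure `ExpandDims`, while `("x", 1)` on a matrix both drops and inserts an axis and so stays a generic `DimShuffle`.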
With these changes, the default dprint for the above graph would be way more readable, while `dprint(print_type=True, print_memory_map=True)` would still show the full detail. We can go one step further and rename `<TensorType(float64, (?,))>` to `<Tensor(float64, shape=(?,))>`, or even `<Vector(float64, shape=(?,))>`.
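That renaming could key off the rank of the type. A minimal sketch of the idea in plain Python (the helper name `pretty_type` is hypothetical, not PyTensor API; unknown static dims are passed as `None` and rendered as `?`):

```python
def pretty_type(dtype, shape):
    """Render a tensor type as <Vector(...)>, <Matrix(...)>, or <Tensor(...)>.

    `shape` is a tuple of static dims, with None for unknown sizes,
    which are shown as "?" (as in <Tensor(float64, shape=(?,))>).
    """
    names = {0: "Scalar", 1: "Vector", 2: "Matrix"}
    name = names.get(len(shape), "Tensor")
    dims = ", ".join("?" if s is None else str(s) for s in shape)
    if len(shape) == 1:
        dims += ","  # keep Python's single-element tuple comma
    return f"<{name}({dtype}, shape=({dims}))>"
```

So a `float64` type with static shape `(None,)` would print as `<Vector(float64, shape=(?,))>`.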
This would be great. I usually have a really hard time parsing everything that's going on in that output, but your simplified proposal is much easier to read.
On Aesara there was also a proposal to use more complex characters like ┕.