## aten.gelu + aten.tanh

- **Function Schema**: `torch.ops.aten.gelu.default: ((torch.float32,), {})`, `torch.ops.aten.tanh.default: ((torch.float32,), {})`
- **Original PyTorch API**: `torch.nn.functional.gelu`, `torch.tanh`
- **Relevant TensorRT Documentation**: [IActivationLayer](https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Graph/Layers.html?highlight=tanh#tensorrt.IActivationLayer)
- Inspiration can potentially be taken from the existing TorchScript [GeLU lowering pass](https://github.com/pytorch/TensorRT/blob/20277d466b06694dd90194a01c78753b85e5b5aa/core/lowering/passes/reduce_gelu.cpp)

Add support for `gelu` and `tanh` as [aten converters](https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/fx/converters/aten_ops_converters.py); a rough sketch of what the two converters could look like follows below.
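The following is a minimal, non-authoritative sketch, assuming the `tensorrt_converter` registration decorator from `torch_tensorrt.fx.converter_registry` and the `(network, target, args, kwargs, name)` argument layout used by the existing converters in `aten_ops_converters.py`, and assuming `args[0]` arrives as a TensorRT `ITensor`. `tanh` maps directly onto `IActivationLayer`; since TensorRT has no dedicated GeLU activation, the `gelu` converter is decomposed into elementwise/unary layers using the exact (erf-based) formulation.

```python
# Sketch only: decorator path, argument layout, and layer naming are assumptions
# based on the existing fx converter conventions, not a confirmed implementation.
import math

import numpy as np
import tensorrt as trt
import torch

from torch_tensorrt.fx.converter_registry import tensorrt_converter


@tensorrt_converter(torch.ops.aten.tanh.default)
def aten_ops_tanh(network, target, args, kwargs, name):
    # tanh maps directly onto IActivationLayer with ActivationType.TANH.
    layer = network.add_activation(input=args[0], type=trt.ActivationType.TANH)
    layer.name = f"{name}_tanh"
    return layer.get_output(0)


@tensorrt_converter(torch.ops.aten.gelu.default)
def aten_ops_gelu(network, target, args, kwargs, name):
    # Exact GeLU: gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2))),
    # built from constant, unary, and elementwise layers.
    x = args[0]

    def const(value):
        # Broadcastable scalar constant whose rank matches the input,
        # as required by IElementWiseLayer.
        arr = np.full([1] * len(x.shape), value, dtype=np.float32)
        return network.add_constant(arr.shape, trt.Weights(arr)).get_output(0)

    scaled = network.add_elementwise(
        x, const(1.0 / math.sqrt(2.0)), trt.ElementWiseOperation.PROD
    ).get_output(0)
    erf = network.add_unary(scaled, trt.UnaryOperation.ERF).get_output(0)
    one_plus_erf = network.add_elementwise(
        erf, const(1.0), trt.ElementWiseOperation.SUM
    ).get_output(0)
    half_x = network.add_elementwise(
        x, const(0.5), trt.ElementWiseOperation.PROD
    ).get_output(0)
    out = network.add_elementwise(half_x, one_plus_erf, trt.ElementWiseOperation.PROD)
    out.name = f"{name}_gelu"
    return out.get_output(0)
```

Note that `aten.gelu` also accepts an `approximate` keyword; the sketch above covers only the default (erf) path, while the `approximate="tanh"` variant would follow the tanh-based formula used in the TorchScript lowering pass linked above.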