
My ideal DX for Rust on AWS Lambda #4

Closed
@brainstorm

Description


Challenge accepted, @nmoutschen! ;)

Here's my (first ever) attempt at an Amazon (less-than-)6-pager for an "ideal world DX for Rust and AWS Lambda". My hope is that it gets circulated at AWS and reaches leaders willing to take ownership of the goals outlined below, so that customers (like me) get to enjoy a better developer experience for Rust Lambdas and unleash their untapped potential.

It is still a draft, since "Strategic priorities" might not be in line with how reports like this are usually structured, but bear with me... I'm happy to edit/improve if there's uptake ;)

Introduction

This document aims to outline the current state of AWS Lambda runtime support for Rust and to share strategic priorities for the next iterations of its developer tooling ecosystem. The AWS serverless teams' mission is to work with developers interested in migrating current idle-prone workloads into optimized, safe and GC-free Rust Lambdas.

Given the uptake of serverless and the recent move towards more cost-efficient ARM64 Lambdas, removing Rust developer friction can increase organic adoption and, in turn, attract developers who can port high-performance computing workloads from on-premise HPC to Rust Lambdas (where applicable). The AWS serverless teams' focus for Q4 2021 should be to establish Rust as a first-class citizen of the Lambda ecosystem by improving the devops tooling around it, running benchmarks against other runtimes, and updating documentation, app templates and demo repositories accordingly to ease onboarding.

Goals

In Q4 2021, AWS Rust serverless should focus on achieving the following goals:

  1. Increase tooling quality: Better integration of sam-cli and/or sam-cli-cdk local runtime execution and deployment for all host systems and targets (e.g. Apple Silicon). Also, integrate with the native cargo Rust build tool (no Makefile(s)).

  2. Measure and increase performance: Remove MUSL wrapping misconceptions from the documentation. Through benchmarking and allocator tuning, Rust Lambdas could (should?) outperform other runtimes in wall time and memory consumption by at least 5-10% on average. Run the benchmarks and add them to CI to spot performance regressions as best practices change.

  3. Change runtime naming: Currently, Rust Lambdas are offered under the provided.al2 runtime, unintentionally leaving Rust as a "second class citizen" next to the rest of the officially supported runtimes. Simply aliasing provided.al2 to rust-1.x, mirroring e.g. "python3.9" Lambdas, would signal "supported, quality runtime at AWS". No changes to the underlying runtime are needed and experts can continue using provided.al2; it's just aliasing/naming... Easy win?
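For reference, this is roughly how a Rust function is declared in a SAM template today, against the generic provided.al2 runtime (a minimal sketch; the resource name, code path and architecture below are illustrative). The aliasing in goal 3 would only change the Runtime string:

```yaml
Resources:
  RustFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: provided.al2      # what an alias like "rust-1.x" could point at
      Handler: bootstrap         # custom runtimes expect a binary named "bootstrap"
      CodeUri: target/lambda/release/   # illustrative path
      Architectures:
        - arm64
```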

Tenets

The following tenets are the guiding principles we use to evaluate and prioritize Rust AWS Lambda tooling improvement activities:

Quality over quantity

SAM CLI and/or CDK should be the canonical way to deploy Rust Lambdas without interfering with the expected Rust developer build tool: cargo. Several third-party Rust crates are attempting to fill some of the DX gaps left by SAM and other official AWS tooling: cargo-aws-lambda, aws-build, minlambda, mu_runtime, rocket_lamb, warp_lambda. While these contributions are welcome, they also show that the shortcomings could be addressed upstream in the official tooling.

Reduce developer friction

We want to focus on educating developers through in-depth technical content and tool documentation instead of relying solely on community knowledge transfer.

Invent and Simplify

We want Rust developers to feel at home when they approach AWS Rust Lambdas, and that means an officially supported aws cargo subcommand (via an officially supported AWS Rust crate).

Strategic priorities?

Ideally, bring the SAM-CLI, rust_lambda_runtime and other relevant teams in the Rust tooling ecosystem together to build an example, officially supported by AWS through its documentation, that deploys to Graviton2 instances with the following cargo commands (ideal scenario):

  1. cargo aws build
  2. cargo aws deploy

If the user wants to debug the lambda locally:

  1. cargo aws build
  2. cargo aws run [invoke | start-api]

The official cargo aws subcommand could potentially call other AWS services (as a simple wrapper around the AWS CLI and/or SAM CLI/CDK), but there's no need to wrap them all in a first iteration of this tooling.
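As a sketch of what that could look like: cargo discovers any binary named cargo-&lt;name&gt; on $PATH and runs it when the user types cargo &lt;name&gt;, so an official crate could ship a cargo-aws binary that simply forwards to existing tooling. The verb-to-SAM-CLI mapping below is purely an assumption for illustration; no such official subcommand exists today.

```rust
use std::env;
use std::process::Command;

// Map a hypothetical `cargo aws <verb>` to the SAM CLI invocation it could
// wrap. The verb set and the mapping are assumptions, not a shipped tool.
fn sam_args(verb: &str) -> Option<Vec<&'static str>> {
    match verb {
        "build" => Some(vec!["build"]),
        "deploy" => Some(vec!["deploy"]),
        "invoke" => Some(vec!["local", "invoke"]),
        "start-api" => Some(vec!["local", "start-api"]),
        _ => None,
    }
}

fn main() {
    // cargo invokes `cargo-aws` as ["cargo-aws", "aws", <verb>, ...],
    // so skip the binary name and the `aws` literal.
    let verb = env::args().nth(2);
    match verb.as_deref().and_then(sam_args) {
        // Requires the `sam` binary on PATH; errors are ignored in this sketch.
        Some(args) => {
            let _ = Command::new("sam").args(&args).status();
        }
        None => eprintln!("usage: cargo aws <build|deploy|invoke|start-api>"),
    }
}
```

Installed via `cargo install`, such a binary would make `cargo aws build` work out of the box, since cargo treats any `cargo-*` executable as a subcommand.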

Mentions

Please let me know if you wish to be removed from this thread, I have no intention to inconvenience anybody. I just think that, if anyone, you have the means to make a difference on this topic :)

@softprops @davidbarsky @jonhoo @mjasay
@praneetap @jfuss @sapessi @singledigit @hoffa
@coltonweaver @bahildebrand
@jdisanti @rcoh
