Bring README up to date with master branch #247


Merged
merged 7 commits into awslabs:master on Jan 29, 2021

Conversation

voteblake
Contributor

Issue #, if available:
#216, #232

Description of changes:

In #216 @davidbarsky mentioned that README updates were the main blocker for the next release. I've taken a pass at making changes to the README to reflect the current state of master. This includes the new crate structure, the new options for creating a handler, and references to the new examples.

I have not updated the Docker or Serverless Framework parts of the README as I haven't used either of those options before.

I'm also not sure if the statement about MSRV on line 11 is still accurate.

I don't expect this to be ready to merge as-is, but hopefully it gets us closer to a new tagged release!

By submitting this pull request

  • I confirm that my contribution is made under the terms of the Apache 2.0 license.
  • I confirm that I've made a best effort attempt to update all relevant documentation.

README.md Outdated

## Example function

- The code below creates a simple function that receives an event with a `greeting` and `name` field and returns a `GreetingResponse` message for the given name and greeting. Notice: to run these examples, we require a minimum Rust version of 1.31.
+ The code below creates a simple function that receives an event and echoes it back as a response. Notice: to run these examples, we require a minimum Rust version of 1.31.

Contributor

An MSRV of 1.31 is not correct. Right now, it's at least 1.42, but for the time being I'd like to make the policy "latest stable Rust" until this reaches 1.0.

Contributor

+1 on stable Rust as a contract

Contributor

+ 1 (because github markdown)

Contributor Author

How does:

Notice: this crate is tested against latest stable Rust.

sound?

README.md Outdated
Comment on lines 19 to 23
#[lambda]
#[tokio::main]
async fn main(event: Value, _: Context) -> Result<Value, Error> {
    Ok(event)
}

Contributor

I've been thinking about this, but I think we should cut the #[lambda] macro from the initial release. It's problematic when used with anything that expects to be initialized a single time (e.g., a logger): when a warm invoke occurs, the repeated initialization will cause the runtime to panic.

Let's prefer a more full example?
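
For reference, a minimal sketch of the macro-free style under discussion, written against the handler_fn/run API from the lambda_runtime 0.3 line; the crate and item names here are assumptions about the post-release API, not text from the README under review:

```rust
// Sketch only: assumes the lambda_runtime 0.3-era handler_fn/run API.
use lambda_runtime::{handler_fn, Context, Error};
use serde_json::Value;

// The handler takes the raw JSON event and echoes it back as the response.
async fn echo(event: Value, _ctx: Context) -> Result<Value, Error> {
    Ok(event)
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // One-time setup (e.g., a logger) goes here, before run(), so it is not
    // re-executed on warm invokes the way code behind #[lambda] would be.
    lambda_runtime::run(handler_fn(echo)).await
}
```

Keeping initialization in main, ahead of run(), is what avoids the warm-invoke re-initialization panic described above.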

Contributor Author

Interesting, I did not know this about the attribute macro. My primary reason for avoiding it has been more to do with developer tooling: I found rust-analyzer and cargo build error output to be much less useful when using it.

That type of error (panic on warm invocation) does sound like a reason to hold it back from the release. I can update the README to reflect that.

Contributor

Interesting, I did not know this about the attribute macro. My primary reason for avoiding it has been more to do with developer tooling: I found rust-analyzer and cargo build error output to be much less useful when using it.

That's another compelling reason not to release it, and I had completely forgotten about it. I think the #[lambda] macro is one of those things that looks nice on a slide or screenshot, but is something most customers will quickly outgrow.

Contributor

I did not know this about the attribute macro.

This might just be a matter of documentation. I mentioned this in the lambda-http docs as a tradeoff to consider. I still think it's useful for specific cases, but it's also good to be mindful of the tradeoffs.

README.md Outdated
Comment on lines 94 to 70
-   --payload '{"firstName": "world"}' \
+   --payload '{"Hello,": "world!"}' \
    output.json
- $ cat output.json # Prints: {"message":"Hello, world!"}
+ $ cat output.json # Prints: {"Hello,": "world!"}

Contributor

I'm not sure I'm in favor of this change. Can we prefer the original?

Contributor Author

I've added the previous functionality back to the README and to the hello-without-macro example that it references. It now operates on and returns an untyped JSON value. Let me know if you'd like to add custom input and output events back to this headline example as well; in my opinion, leaving the discussion of custom events for later in the documentation allows a gentler introduction.
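
For reference, a hedged sketch of the "custom input and output events" variant mentioned above, again assuming the handler_fn/run API from the lambda_runtime 0.3 line; the Request/Response types and their fields are illustrative, not taken from the repository's examples:

```rust
// Sketch only: typed events via serde; the Request/Response shapes are assumptions.
use lambda_runtime::{handler_fn, Context, Error};
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct Request {
    #[serde(rename = "firstName")]
    first_name: String,
}

#[derive(Serialize)]
struct Response {
    message: String,
}

// The runtime deserializes the incoming payload into Request and serializes
// Response back to JSON, so the handler works with typed values throughout.
async fn greet(event: Request, _ctx: Context) -> Result<Response, Error> {
    Ok(Response {
        message: format!("Hello, {}!", event.first_name),
    })
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    lambda_runtime::run(handler_fn(greet)).await
}
```

A typed variant along these lines could live later in the docs, as suggested, while the headline example stays untyped.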


### Deployment

- There are currently multiple ways of building this package: manually, and with the [Serverless framework](https://serverless.com/framework/).
+ There are currently multiple ways of building this package: manually with the AWS CLI, and with the [Serverless framework](https://serverless.com/framework/).

Contributor

Let's maybe include the SAM CLI with the Makefile approach and (hopefully) aws/aws-lambda-builders#174, once that is merged.

Contributor

We'll see... :) I'll try to keep some light pressure on that. Ultimately, SAM is the most likely approach to expect AWS to suggest, since SAM is the tool AWS pushes. Not having a SAM story is worse than having some SAM story. I agree that adding a SAM + Makefile example would be useful.


Contributor

I'd much rather believe in Terraform ;d, could add an example with that

Contributor
@rimutaka, Jul 29, 2020

I'd much rather believe in Terraform ;d, could add an example with that

I prefer TF too, but if AWS prefers SAM, then that's what most people will use anyway.
@Veetaha, if you do make an example, can we put it in the examples folder, with a mention and a link to it from the README, to keep it concise?

@brainstorm, Sep 15, 2020

I'm using CDK for deployment of Rust lambdas these days; switched to SAM now.

If people adopt SAM as a Rust-lambda "blueprint project", I'll go there too... My current DX failure with Rust lambdas is that I've been unable to find a lightweight and properly documented way to debug Rust lambdas locally... EDIT: this could work for the time being, I guess: https://github.com/rimutaka/lambda-debug-proxy

My ideal scenario: not having to deal with Docker (i.e., lambci), with my VSCode session having all the variable values, watchpoints, and backtraces I would normally have for "regular local binaries", ideally with proper observability all the way down to FFI (C bindgen) crates.

Am I dreaming or is this available today and I've missed it? :)


Essentially, I would like something like this for Rust + SAM + Lambda: https://pawelgrzybek.com/attach-visual-studio-code-debugger-to-sam-serverless-application-model-local-endpoint/ ... does anybody have such a setup? @rimutaka?

Contributor

@brainstorm, that debug-proxy works just fine for me, and I use the CLI/TF for deployment.

Contributor
@coltonweaver left a comment

After reviewing this PR, it seems all of the past feedback has been accounted for and everything looks accurate to me. With that, I'm approving it.

@coltonweaver dismissed davidbarsky's stale review on January 29, 2021 at 04:20

Appears that the changes requested in this review were addressed!

@coltonweaver merged commit 721f616 into awslabs:master on Jan 29, 2021