restarting RIC #9

Open
bahtou opened this issue Jan 9, 2021 · 4 comments


@bahtou

bahtou commented Jan 9, 2021

Rebuilding the image with every code change is not an ideal local developer experience.

Is there a way to restart the emulator on code change? Volume mount the function directory into the container, perform code changes to the function, and have the emulator restart?
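One common pattern that gets close to this (a sketch with a hypothetical image name and paths, not a confirmed setup from this thread) is to bind-mount the compiled handler directory over /var/task so that host edits are visible inside the container; the emulator still has to be restarted to pick them up, which is what the workarounds below automate:

```shell
# Bind-mount the build output over /var/task so edits on the host
# show up inside the container. Image name and paths are hypothetical.
docker run --rm -p 9000:8080 \
  -v "$PWD/dist:/var/task" \
  my-lambda-image:dev
```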

@djpate

djpate commented Apr 15, 2021

I got this working with the following in my package.json

"live": "tsc-watch --onSuccess \"/usr/local/bin/aws-lambda-rie /usr/local/bin/aws-lambda-ric /var/task/build/index.handler\"",

@chrisjsherm

chrisjsherm commented Nov 10, 2022

> I got this working with the following in my package.json
>
> "live": "tsc-watch --onSuccess \"/usr/local/bin/aws-lambda-rie /usr/local/bin/aws-lambda-ric /var/task/build/index.handler\"",

This put me on the right track, but I received an error saying the server was already listening on the port.

I changed it to:

"live": "tsc-watch --onSuccess \"docker compose restart my-container-name\"",

Dockerfile:

FROM node:18-alpine AS builder

# Install NPM dependencies for function
WORKDIR /app
COPY tsconfig.json package*.json ./
RUN npm clean-install --ignore-scripts

# Copy source files
COPY src src

# Transpile TypeScript to JavaScript
RUN npm run build

################ Create dev stage #######################
FROM amazon/aws-lambda-nodejs:18 AS dev

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ] 

################ Create runtime stage #######################
FROM amazon/aws-lambda-nodejs:18 AS prod

# Install only production dependencies
COPY package*.json ${LAMBDA_TASK_ROOT}/
RUN npm clean-install --ignore-scripts --omit=dev

# Copy transpiled code
COPY --from=builder /app/dist ${LAMBDA_TASK_ROOT}

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ] 

docker-compose.yml:

version: '3.9'
services:
  web:
    build:
      context: .
      target: dev
    ports: 
      - 9000:8080
      - 9229:9229 # debugger
    environment:
      ValidatedEmailAddress: [email protected]
    volumes:
      - ~/.aws:/root/.aws:ro # AWS credentials
      - ./dist:/var/task
      - ./node_modules:/var/task/node_modules
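For completeness, a minimal handler matching the Dockerfile's CMD [ "app.handler" ] might look like the sketch below. The actual handler is not shown in this thread; the API-Gateway-style response shape is an assumption, and ValidatedEmailAddress simply mirrors the compose file's environment block:

```javascript
// app.js -- minimal sketch of the handler loaded by CMD ["app.handler"].
// Response shape is an assumed API-Gateway-style payload, not from the thread.
const handler = async (event) => {
  // Mirrors the environment variable set in docker-compose.yml.
  const sender = process.env.ValidatedEmailAddress || "unset";
  return {
    statusCode: 200,
    body: JSON.stringify({ received: event, sender }),
  };
};

exports.handler = handler;
```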

@govindrai

govindrai commented Jul 7, 2023

I don't think either of those is the right solution.

The first runs outside of Docker, which means you have to account for inconsistencies across the operating systems your devs are using. The second forces a full container restart, which is costly in both time and resources.

It would be very helpful if the client could add support for watching and hot-reloading the handler code upon code changes.

This along with the ability to enter debugger mode would make local development of lambdas amazing.

@rbalman

rbalman commented Dec 7, 2023

I am currently using nodemon --exec as a workaround, and it is working well enough.

 "scripts": {
    "local:lambda": "nodemon --exec \"/usr/local/bin/aws-lambda-rie /usr/local/bin/npx aws-lambda-ric index.handler\""
  }
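nodemon's restart behaviour can also be configured via a nodemon.json next to package.json; the watch paths below are assumptions for illustration, not taken from this thread:

```json
{
  "watch": ["index.js", "src"],
  "ext": "js,mjs,json",
  "ignore": ["node_modules"]
}
```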
