180 changes: 180 additions & 0 deletions samples/cloudrun/README.md
@@ -0,0 +1,180 @@
# Connecting Cloud Run to Cloud SQL with the Python Connector

This guide provides a comprehensive walkthrough of how to connect a Cloud Run service to a Cloud SQL instance using the Cloud SQL Python Connector. It covers connecting to instances with both public and private IP addresses and demonstrates how to handle database credentials securely.

## Develop a Python Application

The following Python applications demonstrate how to connect to a Cloud SQL instance using the Cloud SQL Python Connector.

### `mysql/main.py` and `postgres/main.py`

These files contain the core application logic for connecting to a Cloud SQL for MySQL or PostgreSQL instance. They provide two separate authentication methods, each exposed at a different route:
- `/`: Password-based authentication
- `/iam`: IAM-based authentication
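
Condensed, the route layout shared by both files looks like this. The database work is stubbed out here so the sketch runs anywhere; the full versions replace the stub bodies with real connection pools:

```python
from flask import Flask

app = Flask(__name__)


@app.route("/")
def password_auth_index():
    # In the real samples this opens a pooled connection authenticated
    # with a database password supplied via Secret Manager.
    return "Database connection successful (password authentication)"


@app.route("/iam")
def iam_auth_index():
    # In the real samples this opens a pooled connection authenticated
    # with the service account's IAM credentials.
    return "Database connection successful (IAM authentication)"
```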


### `sqlserver/main.py`

This file contains the core application logic for connecting to a Cloud SQL for SQL Server instance. It uses the `cloud-sql-python-connector` to create a SQLAlchemy connection pool with password-based authentication at the `/` route.

> [!NOTE]
>
> Cloud SQL for SQL Server does not support IAM database authentication.


> [!NOTE]
> **Lazy Refresh**
>
> The sample code in all three `main.py` files initializes the `Connector` with `refresh_strategy="lazy"`. This is the recommended approach for Cloud Run: it avoids connection errors and reduces cost by not running background refresh operations while the CPU is throttled between requests.

## Global Variables and Lazy Instantiation

In a Cloud Run service, global variables are initialized when the container instance starts up. The application instance then handles subsequent requests until the container is spun down.

The `Connector` and SQLAlchemy `Engine` objects are defined as global variables (initially set to `None`) and are lazily instantiated (created only when needed) inside the request handlers.

This approach offers several benefits:

1. **Faster Startup:** By deferring initialization until the first request, the Cloud Run service can start listening for requests almost immediately, reducing cold start latency.
2. **Resource Efficiency:** Expensive operations, like establishing background connections or fetching secrets, are only performed when actually required.
3. **Connection Reuse:** Once initialized, the global `Connector` and `Engine` instances are reused for all subsequent requests to that container instance. This prevents the overhead of creating new connections for every request and avoids hitting connection limits.
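
The pattern is small enough to sketch. Here an in-memory SQLite engine stands in for the Cloud SQL Connector and Engine, so the sketch runs without any cloud resources:

```python
import sqlalchemy

# Globals start as None: nothing expensive runs at container startup.
engine = None


def get_engine() -> sqlalchemy.engine.Engine:
    """Create the engine on first use, then reuse it for later requests."""
    global engine
    if engine is None:
        # The real samples create the Cloud SQL Connector and a pooled
        # Engine here; SQLite stands in so this sketch is runnable.
        engine = sqlalchemy.create_engine("sqlite://")
    return engine


first = get_engine()
second = get_engine()
print(first is second)  # prints True: the same Engine instance is reused
```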

## IAM Authentication Prerequisites


For IAM authentication to work, you must ensure two things:

1. **The Cloud Run service's service account has the `Cloud SQL Client` role.** You can grant this role with the following command:
```bash
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
--role="roles/cloudsql.client"
```
Replace `PROJECT_ID` with your Google Cloud project ID and `SERVICE_ACCOUNT_EMAIL` with the email of the service account your Cloud Run service is using.

2. **The service account is added as a database user to your Cloud SQL instance.** You can do this with the following command:
```bash
gcloud sql users create SERVICE_ACCOUNT_EMAIL \
--instance=INSTANCE_NAME \
--type=cloud_iam_user
```
Replace `SERVICE_ACCOUNT_EMAIL` with the same service account email and `INSTANCE_NAME` with your Cloud SQL instance name.

For password-based authentication to work:

1. **The Cloud Run service's service account has the `Secret Accessor` role.** You can grant this role with the following command:
```bash
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
--role="roles/secretmanager.secretAccessor"
```
Replace `PROJECT_ID` with your Google Cloud project ID and `SERVICE_ACCOUNT_EMAIL` with the email of the service account your Cloud Run service is using.
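
If the database password secret does not exist yet, it can be created in Secret Manager first. The secret name `my-user-pass-secret-name` below is just the sample value used later in this guide; substitute your own name and password:

```bash
# Enable Secret Manager and store the database password as a secret.
gcloud services enable secretmanager.googleapis.com

echo -n "my-db-password" | gcloud secrets create my-user-pass-secret-name \
    --data-file=-
```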

## Deploy the Application to Cloud Run

Follow these steps to deploy the application to Cloud Run.

### Build and Push the Docker Image

1. **Enable the Artifact Registry API:**

```bash
gcloud services enable artifactregistry.googleapis.com
```

2. **Create an Artifact Registry repository:**

```bash
gcloud artifacts repositories create REPO_NAME \
--repository-format=docker \
--location=REGION
```

3. **Configure Docker to authenticate with Artifact Registry:**

```bash
gcloud auth configure-docker REGION-docker.pkg.dev
```

4. **Build the Docker image (replace `mysql` with `postgres` or `sqlserver` as needed):**

```bash
docker build -t REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME mysql
```

5. **Push the Docker image to Artifact Registry:**

```bash
docker push REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME
```
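
Alternatively, steps 3–5 can be collapsed into a single Cloud Build invocation, which builds the image remotely and pushes it to Artifact Registry. This is optional and requires the Cloud Build API (`cloudbuild.googleapis.com`) to be enabled:

```bash
gcloud builds submit mysql \
    --tag=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME
```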

### Deploy to Cloud Run

Deploy the container image to Cloud Run using the `gcloud run deploy` command.


**Sample Values:**
* `SERVICE_NAME`: `my-cloud-run-service`
* `REGION`: `us-central1`
* `PROJECT_ID`: `my-gcp-project-id`
* `REPO_NAME`: `my-artifact-repo`
* `IMAGE_NAME`: `my-app-image`
* `INSTANCE_CONNECTION_NAME`: `my-gcp-project-id:us-central1:my-instance-name`
* `DB_USER`: `my-db-user` (for password-based authentication)
* `DB_IAM_USER`: `my-service-account@my-gcp-project-id.iam.gserviceaccount.com` (for IAM-based authentication)
* `DB_NAME`: `my-db-name`
* `DB_PASSWORD`: `my-user-pass-secret-name`
* `VPC_NETWORK`: `my-vpc-network`
* `SUBNET_NAME`: `my-vpc-subnet`


**For MySQL and PostgreSQL (Public IP):**

```bash
gcloud run deploy SERVICE_NAME \
--image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \
--set-env-vars=DB_USER=DB_USER,DB_IAM_USER=DB_IAM_USER,DB_NAME=DB_NAME,DB_SECRET_NAME=DB_SECRET_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME \
--region=REGION \
--update-secrets=DB_PASSWORD=DB_PASSWORD:latest
```

**For MySQL and PostgreSQL (Private IP):**

```bash
gcloud run deploy SERVICE_NAME \
--image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \
--set-env-vars=DB_USER=DB_USER,DB_IAM_USER=DB_IAM_USER,DB_NAME=DB_NAME,DB_SECRET_NAME=DB_SECRET_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME,IP_TYPE=PRIVATE \
--network=VPC_NETWORK \
--subnet=SUBNET_NAME \
--vpc-egress=private-ranges-only \
--region=REGION \
--update-secrets=DB_PASSWORD=DB_PASSWORD:latest
```

**For SQL Server (Public IP):**

```bash
gcloud run deploy SERVICE_NAME \
--image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \
--set-env-vars=DB_USER=DB_USER,DB_NAME=DB_NAME,DB_SECRET_NAME=DB_SECRET_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME \
--region=REGION \
--update-secrets=DB_PASSWORD=DB_PASSWORD:latest
```

**For SQL Server (Private IP):**

```bash
gcloud run deploy SERVICE_NAME \
--image=REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME \
--set-env-vars=DB_USER=DB_USER,DB_NAME=DB_NAME,DB_SECRET_NAME=DB_SECRET_NAME,INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME,IP_TYPE=PRIVATE \
--network=VPC_NETWORK \
--subnet=SUBNET_NAME \
--vpc-egress=private-ranges-only \
--region=REGION \
--update-secrets=DB_PASSWORD=DB_PASSWORD:latest
```

> [!NOTE]
> **For PSC connections**
>
> To connect to a Cloud SQL instance over Private Service Connect (PSC), create a PSC endpoint, plus a DNS zone and DNS record for the instance, in the same VPC network as the Cloud Run service, and replace `IP_TYPE=PRIVATE` in the deploy command with `IP_TYPE=PSC`. To configure the DNS records, see the [Connect to an instance using Private Service Connect](https://docs.cloud.google.com/sql/docs/mysql/configure-private-service-connect) guide.
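
After deployment, the two routes can be exercised to verify connectivity. Cloud Run services require authentication by default (the deploy commands above do not pass `--allow-unauthenticated`), so an identity token is attached; the service name and region below are the sample values from earlier:

```bash
URL=$(gcloud run services describe my-cloud-run-service \
    --region=us-central1 --format='value(status.url)')

# Password-based authentication route
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" "$URL/"

# IAM authentication route (MySQL and PostgreSQL samples only)
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" "$URL/iam"
```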
18 changes: 18 additions & 0 deletions samples/cloudrun/mysql/Dockerfile
@@ -0,0 +1,18 @@
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.12-slim

# Allow statements and log messages to immediately appear in the Knative logs
ENV PYTHONUNBUFFERED=True

# Copy local code to the container image.
ENV APP_HOME=/app
WORKDIR $APP_HOME
COPY . .

# Install production dependencies.
RUN pip install --no-cache-dir -r requirements.txt

# Run the web service on container startup.
# Use gunicorn for production deployments.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
142 changes: 142 additions & 0 deletions samples/cloudrun/mysql/main.py
@@ -0,0 +1,142 @@
"""
Copyright 2025 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

import os
import sqlalchemy
from flask import Flask
from google.cloud.sql.connector import Connector, IPTypes

# Initialize Flask app
app = Flask(__name__)

# Connector and SQLAlchemy engines are initialized as None to allow for lazy instantiation.
#
# The Connector object is a global variable to ensure that the same connector
# instance is used across all requests. This prevents the unnecessary creation
# of new Connector instances, which is inefficient and can lead to connection
# limits being reached.
#
# Lazy instantiation (initializing the Connector and Engine only when needed)
# allows the Cloud Run service to start up faster, as it avoids performing
# initialization tasks (like fetching secrets or metadata) during startup.
connector = None
iam_engine = None
password_engine = None


# Function to create a database connection using IAM authentication
def get_iam_connection() -> "pymysql.connections.Connection":
"""Creates a database connection using IAM authentication."""
instance_connection_name = os.environ["INSTANCE_CONNECTION_NAME"]
db_user = os.environ["DB_IAM_USER"] # IAM service account email
db_name = os.environ["DB_NAME"]
ip_type_str = os.environ.get("IP_TYPE", "PUBLIC")
ip_type = IPTypes[ip_type_str]

conn = connector.connect(
instance_connection_name,
"pymysql",
user=db_user,
db=db_name,
ip_type=ip_type,
enable_iam_auth=True,
)
return conn


# Function to create a database connection using password-based authentication
def get_password_connection() -> "pymysql.connections.Connection":
"""Creates a database connection using password authentication."""
instance_connection_name = os.environ["INSTANCE_CONNECTION_NAME"]
db_user = os.environ["DB_USER"] # Database username
db_name = os.environ["DB_NAME"]
db_password = os.environ["DB_PASSWORD"]
ip_type_str = os.environ.get("IP_TYPE", "PUBLIC")
ip_type = IPTypes[ip_type_str]

conn = connector.connect(
instance_connection_name,
"pymysql",
user=db_user,
password=db_password,
db=db_name,
ip_type=ip_type,
)
return conn


# This example uses two distinct SQLAlchemy engines to demonstrate two different
# authentication methods (IAM and password-based) in the same application.
#
# In a typical production application, you would generally only need one
# SQLAlchemy engine, configured for your preferred authentication method.
# Both engines are defined globally to allow for connection pooling and
# reuse across requests.


def connect_with_password() -> sqlalchemy.engine.base.Connection:
"""Initializes the connector and password engine if necessary, then returns a connection."""
global connector, password_engine

if connector is None:
connector = Connector(refresh_strategy="lazy")

if password_engine is None:
password_engine = sqlalchemy.create_engine(
"mysql+pymysql://",
creator=get_password_connection,
)

return password_engine.connect()


def connect_with_iam() -> sqlalchemy.engine.base.Connection:
"""Initializes the connector and IAM engine if necessary, then returns a connection."""
global connector, iam_engine

if connector is None:
connector = Connector(refresh_strategy="lazy")

if iam_engine is None:
iam_engine = sqlalchemy.create_engine(
"mysql+pymysql://",
creator=get_iam_connection,
)

return iam_engine.connect()


@app.route("/")
def password_auth_index():
try:
with connect_with_password() as conn:
result = conn.execute(sqlalchemy.text("SELECT 1")).fetchall()
return f"Database connection successful (password authentication), result: {result}"
    except Exception:
        app.logger.exception("Database connection failed (password authentication)")
        return "Error connecting to the database (password authentication)", 500


@app.route("/iam")
def iam_auth_index():
try:
with connect_with_iam() as conn:
result = conn.execute(sqlalchemy.text("SELECT 1")).fetchall()
return f"Database connection successful (IAM authentication), result: {result}"
    except Exception:
        app.logger.exception("Database connection failed (IAM authentication)")
        return "Error connecting to the database (IAM authentication)", 500


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
5 changes: 5 additions & 0 deletions samples/cloudrun/mysql/requirements.txt
@@ -0,0 +1,5 @@
cloud-sql-python-connector[pymysql]
sqlalchemy
Flask
gunicorn
google-cloud-secret-manager
18 changes: 18 additions & 0 deletions samples/cloudrun/postgres/Dockerfile
@@ -0,0 +1,18 @@
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.12-slim

# Allow statements and log messages to immediately appear in the Knative logs
ENV PYTHONUNBUFFERED=True

# Copy local code to the container image.
ENV APP_HOME=/app
WORKDIR $APP_HOME
COPY . .

# Install production dependencies.
RUN pip install --no-cache-dir -r requirements.txt

# Run the web service on container startup.
# Use gunicorn for production deployments.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app