
S3: can't use multipart due to large compression factor #674

@danielejuan-metr


Describe the question/issue

We are testing the S3 output plugin for Fluent Bit in AWS EKS. Is it possible to enable compression and multipart upload together in the latest stable release?

With the output configuration below, Fluent Bit compresses each chunk and then uses PutObject to upload it (see the log below). We expected the chunks to be compressed but still uploaded as parts of a multipart upload until total_file_size is reached. Is this a misunderstanding on our part?

Because of this behavior, the S3 bucket ends up containing a large number of small .gz files.

Configuration

[OUTPUT]
        Name                      s3
        Match                     application.*
        region                    ${AWS_REGION}
        bucket                    ${S3_BUCKET_NAME}
        total_file_size           256M
        upload_timeout            5m
        compression               gzip
        s3_key_format             /logs-apps/%Y/%m/%d/%Y%m%d%H%M%S-$TAG-$UUID.gz
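
With this configuration our expectation was roughly the following (assuming upload_chunk_size is left at its documented default of about 5 MiB, since we do not set it explicitly): each buffered chunk is compressed and sent as one part of a multipart upload, with parts accumulating until the 256M total_file_size is reached, i.e. roughly 256M / 5M ≈ 50 chunks per object before compression.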

Fluent Bit Log Output

[2023/05/30 07:35:20] [ info] [output:s3:s3.0] Pre-compression upload_chunk_size= 5630650, After compression, chunk is only 106332 bytes, the chunk was too small, using PutObject to upload
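
For context, the log above implies a compression ratio of roughly 5,630,650 / 106,332 ≈ 53x, and S3 multipart uploads require every part except the last to be at least 5 MiB, which is presumably the threshold the plugin checks before falling back to PutObject. At that ratio a compressed part can only reach 5 MiB if the pre-compression chunk is around 53 × 5 MiB ≈ 265 MiB. As a sketch, the only knob we see is raising upload_chunk_size explicitly (the value below is illustrative only and not something we have validated):

[OUTPUT]
        Name                      s3
        Match                     application.*
        region                    ${AWS_REGION}
        bucket                    ${S3_BUCKET_NAME}
        total_file_size           256M
        # illustrative only: even 50M of raw logs compresses to only ~1 MiB at a ~53x ratio
        upload_chunk_size         50M
        upload_timeout            5m
        compression               gzip
        s3_key_format             /logs-apps/%Y/%m/%d/%Y%m%d%H%M%S-$TAG-$UUID.gz

Even with upload_chunk_size at 50M, a ~53x compression ratio leaves each compressed part at roughly 1 MiB, still well below the 5 MiB multipart minimum, so the plugin would presumably keep falling back to PutObject, which is why the bucket fills up with small .gz objects.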

Fluent Bit Version Info

public.ecr.aws/aws-observability/aws-for-fluent-bit:stable

Cluster Details

  • No meshes
  • EKS
  • EC2 worker nodes
  • Fluent Bit deployed as a DaemonSet


Labels: bug, enhancement
