Labels: bug, enhancement
Description
Describe the question/issue
We are testing the S3 output plugin for Fluent Bit in AWS EKS. Is it possible to enable both compression and multipart upload in the latest stable release?
With the output configuration below, Fluent Bit compresses each chunk and uploads it with PutObject (as logged below). We expected the chunks to be compressed and uploaded via multipart upload until total_file_size is reached. Is this a misunderstanding on our part?
Because of this behavior, the S3 bucket ends up containing a large number of small .gz files.
Configuration
[OUTPUT]
    Name             s3
    Match            application.*
    region           ${AWS_REGION}
    bucket           ${S3_BUCKET_NAME}
    total_file_size  256M
    upload_timeout   5m
    compression      gzip
    s3_key_format    /logs-apps/%Y/%m/%d/%Y%m%d%H%M%S-$TAG-$UUID.gz
Fluent Bit Log Output
[2023/05/30 07:35:20] [ info] [output:s3:s3.0] Pre-compression upload_chunk_size= 5630650, After compression, chunk is only 106332 bytes, the chunk was too small, using PutObject to upload
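For context on that log line: S3 multipart uploads require every part except the last to be at least 5 MiB (5,242,880 bytes). Here a ~5.6 MB raw chunk compresses to 106,332 bytes (roughly 53:1), far below that minimum, so the plugin falls back to PutObject for each compressed chunk, and that per-chunk fallback is what fills the bucket with small .gz objects. At that compression ratio, even the documented maximum upload_chunk_size (about 50 MB of raw data) would compress to under 1 MB, so raising upload_chunk_size alone would likely not clear the 5 MiB part minimum. One possible mitigation, sketched below on the assumption that use_put_object On makes the plugin buffer until total_file_size or upload_timeout before compressing and uploading (use_put_object is a real S3 output parameter, but verify the exact limits for your Fluent Bit version):

# Untested sketch: opt into PutObject explicitly so object size is governed
# by total_file_size/upload_timeout rather than by per-chunk multipart parts.
[OUTPUT]
    Name             s3
    Match            application.*
    region           ${AWS_REGION}
    bucket           ${S3_BUCKET_NAME}
    use_put_object   On
    total_file_size  256M   # the documented cap is lower with use_put_object than with multipart; adjust if needed
    upload_timeout   5m     # each object then holds up to 5 minutes of logs per tag
    compression      gzip
    s3_key_format    /logs-apps/%Y/%m/%d/%Y%m%d%H%M%S-$TAG-$UUID.gz

With this, objects should land roughly once per tag per upload_timeout window instead of once per ~5 MiB raw chunk.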
Fluent Bit Version Info
public.ecr.aws/aws-observability/aws-for-fluent-bit:stable
Cluster Details
- No meshes
- EKS
- Worker nodes: EC2
- Fluent Bit deployed as a DaemonSet