
Using KinesisAsyncClient: An exceptionCaught() event was fired. java.io.IOException: The channel was closed before the protocol could be determined. #2914

@filipglojnari

Description

Describe the bug

A java.io.IOException: The channel was closed before the protocol could be determined. exception is thrown when using the AWS SDK KinesisAsyncClient. We start multiple threads that all publish to the Kinesis stream via KinesisAsyncClient.putRecord(). After the application has been running for some time (~6h), we see the WARN log below, followed by the exception that caused it.

WARN i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.

java.io.IOException: The channel was closed before the protocol could be determined.

Expected behavior

This should not be thrown to the user.

Current behavior

The WARN and the exception appear after publishing to the Kinesis stream from a multi-threaded environment for longer than ~6h, and nothing in our code indicates what actually caused the exception. Full message and stack trace:

2021-11-02 09:46:59,487 4017786 [aws-java-sdk-NettyEventLoop-1-1] WARN i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.

  java.io.IOException: The channel was closed before the protocol could be determined.
          at software.amazon.awssdk.http.nio.netty.internal.http2.Http2SettingsFrameHandler.channelUnregistered(Http2SettingsFrameHandler.java:58)
          at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:198)
          at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:184)
          at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:177)
          at io.netty.channel.DefaultChannelPipeline$HeadContext.channelUnregistered(DefaultChannelPipeline.java:1388)
          at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:198)
          at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:184)
          at io.netty.channel.DefaultChannelPipeline.fireChannelUnregistered(DefaultChannelPipeline.java:821)
          at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:827)
          at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
          at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
          at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
          at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
          at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
          at java.base/java.lang.Thread.run(Thread.java:834)

Steps to Reproduce

Publish to the Kinesis stream from a multi-threaded environment via KinesisAsyncClient.putRecord() for some time (~6h); the WARN will then be written out. A minimal sketch of our publishing pattern is shown below.
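
The following sketch approximates how we publish; the stream name, region, thread count, payload, and publish rate are placeholders rather than our actual configuration:

    import java.util.UUID;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import software.amazon.awssdk.core.SdkBytes;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
    import software.amazon.awssdk.services.kinesis.model.PutRecordRequest;

    public class KinesisPutRecordRepro {

        public static void main(String[] args) {
            // Single shared async client, as in our application.
            KinesisAsyncClient kinesis = KinesisAsyncClient.builder()
                    .region(Region.EU_WEST_1)          // placeholder region
                    .build();

            // Several application threads publishing concurrently.
            int threads = 8;                           // placeholder thread count
            ExecutorService pool = Executors.newFixedThreadPool(threads);

            for (int i = 0; i < threads; i++) {
                pool.submit(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        PutRecordRequest request = PutRecordRequest.builder()
                                .streamName("my-stream")   // placeholder stream name
                                .partitionKey(UUID.randomUUID().toString())
                                .data(SdkBytes.fromUtf8String("{\"payload\":\"test\"}"))
                                .build();

                        // putRecord returns a CompletableFuture; only failures are logged.
                        kinesis.putRecord(request).whenComplete((resp, err) -> {
                            if (err != null) {
                                err.printStackTrace();
                            }
                        });

                        try {
                            Thread.sleep(50);          // modest, steady publish rate
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
            }
            // Leave running for several hours; the WARN eventually appears in the logs.
        }
    }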

Possible Solution

Possibly related issue: #2713

Context

No response

AWS Java SDK version used

2.17.90

JDK version used

OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.8+10)

Operating System and version

Ubuntu 18.04.5 LTS

Labels

bug (This issue is a bug.), response-requested (Waiting on additional info and feedback. Will move to "closing-soon" in 10 days.), closed-for-staleness
