
Commit 663a492

liyezhang556520 authored and davies committed
[SPARK-14242][CORE][NETWORK] avoid copy in compositeBuffer for frame decoder
## What changes were proposed in this pull request?

This patch sets `maxNumComponents` to `Integer.MAX_VALUE`, instead of the default of 16, when allocating the `compositeBuffer` in `TransportFrameDecoder`. With the default `maxNumComponents`, the `compositeBuffer` introduces many underlying memory copies when the frame is large (i.e. it spans many transport messages). For details, please refer to [SPARK-14242](https://issues.apache.org/jira/browse/SPARK-14242).

## How was this patch tested?

Spark unit tests and manual tests. The performance issue can be reproduced manually with the following code:

`sc.parallelize(Array(1,2,3),3).mapPartitions(a=>Array(new Array[Double](1024 * 1024 * 50)).iterator).reduce((a,b)=> a).length`

The performance gain is easy to see, both in running time and in CPU usage.

Author: Zhang, Liye <[email protected]>

Closes #12038 from liyezhang556520/spark-14242.
1 parent 05dbc28 commit 663a492

File tree

1 file changed: +1 -1 lines changed


network/common/src/main/java/org/apache/spark/network/util/TransportFrameDecoder.java

Lines changed: 1 addition & 1 deletion
@@ -141,7 +141,7 @@ private ByteBuf decodeNext() throws Exception {
     }
 
     // Otherwise, create a composite buffer.
-    CompositeByteBuf frame = buffers.getFirst().alloc().compositeBuffer();
+    CompositeByteBuf frame = buffers.getFirst().alloc().compositeBuffer(Integer.MAX_VALUE);
     while (remaining > 0) {
       ByteBuf next = nextBufferForFrame(remaining);
       remaining -= next.readableBytes();
0 commit comments
