
mingw: git stash push hangs if patch > 8MB #553


Closed

Conversation

@SyntevoAlex commented Feb 13, 2020

Changes since V2
------------------
Moved test to commit message.

Changes since V1
------------------
Some polishing based on code review in V1
1) Fixed some spelling in commit message
2) Reworked test to be more compatible with different shells

------------------
Please read the commit message for more information.

The specific problem with `git stash push` has existed since `git stash`
was converted into a built-in [1].

On a side note, I think that `git stash push` could be optimized by
replacing the code that reads the entire `git diff-index` output into
memory and then sends it to `git apply`. With a large stash, that can mean
handling a very large patch.

Is it possible to instead directly invoke (without even starting a
new process) something like `git revert --no-commit -m 1 7091f172`?

[1] Commit d553f538 ("stash: convert push to builtin" 2019-02-26)

Cc: Paul-Sebastian Ungureanu <[email protected]>
Cc: Erik Faye-Lund <[email protected]>

@SyntevoAlex changed the title from "mingw: workaround for hangs when sending STDIN" to "mingw: git stash push hangs if patch > 8MB" on Feb 13, 2020
@SyntevoAlex (Author)

@dscho FYI, git-for-windows hangs when trying to push a large stash.

@SyntevoAlex (Author)

/submit

gitgitgadget bot commented Feb 13, 2020

Submitted as [email protected]

gitgitgadget bot commented Feb 13, 2020

On the Git mailing list, Alexandr Miloslavskiy wrote (reply to this):

// Standalone Win32 demo; build as C++ (e.g. with MSVC) and without NDEBUG,
// since the assert() calls below perform the actual pipe I/O.
#include <windows.h>
#include <assert.h>
#include <stdlib.h>

///////////////////////////////////////////////////////////////////////////////
// NTDLL declarations
typedef struct _FILE_PIPE_LOCAL_INFORMATION {
	ULONG NamedPipeType;
	ULONG NamedPipeConfiguration;
	ULONG MaximumInstances;
	ULONG CurrentInstances;
	ULONG InboundQuota;
	ULONG ReadDataAvailable;
	ULONG OutboundQuota;
	ULONG WriteQuotaAvailable;
	ULONG NamedPipeState;
	ULONG NamedPipeEnd;
} FILE_PIPE_LOCAL_INFORMATION, * PFILE_PIPE_LOCAL_INFORMATION;

typedef struct _IO_STATUS_BLOCK
{
	union {
		DWORD Status;
		PVOID Pointer;
	} u;
	ULONG_PTR Information;
} IO_STATUS_BLOCK, * PIO_STATUS_BLOCK;

typedef enum _FILE_INFORMATION_CLASS {
	FilePipeLocalInformation = 24
} FILE_INFORMATION_CLASS, * PFILE_INFORMATION_CLASS;

typedef DWORD (WINAPI* PNtQueryInformationFile)(HANDLE,
	IO_STATUS_BLOCK*, VOID*, ULONG, FILE_INFORMATION_CLASS);
///////////////////////////////////////////////////////////////////////////////

// Returns WriteQuotaAvailable for the given pipe handle (the demo passes the write end).
ULONG GetPipeAvailWriteBuffer(HANDLE a_PipeHandle)
{
	static PNtQueryInformationFile NtQueryInformationFile =
		(PNtQueryInformationFile)GetProcAddress(
			GetModuleHandleW(L"ntdll.dll"), "NtQueryInformationFile");

	IO_STATUS_BLOCK statusBlock = {};
	FILE_PIPE_LOCAL_INFORMATION pipeInformation = {};
	if (0 != NtQueryInformationFile(a_PipeHandle, &statusBlock,
			&pipeInformation, sizeof(pipeInformation), FilePipeLocalInformation))
		assert(0);

	return pipeInformation.WriteQuotaAvailable;
}

void ReadPipe(HANDLE a_Pipe, DWORD a_Size)
{
	void* buffer = malloc(a_Size);
	DWORD bytesDone = 0;
	assert(ReadFile(a_Pipe, buffer, a_Size, &bytesDone, NULL));
	assert(bytesDone == a_Size);
	free(buffer);
}

void WritePipe(HANDLE a_Pipe, DWORD a_Size)
{
	void* buffer = malloc(a_Size);
	DWORD bytesDone = 0;
	assert(WriteFile(a_Pipe, buffer, a_Size, &bytesDone, NULL));
	assert(bytesDone == a_Size);
	free(buffer);
}

struct ThreadReadParam
{
	HANDLE Pipe;
	DWORD  Size;
};

DWORD WINAPI ThreadReadPipe(void* a_Param)
{
	const ThreadReadParam* param = (const ThreadReadParam*)a_Param;
	ReadPipe(param->Pipe, param->Size);
	return 0;
}

void Test()
{
	HANDLE readPipe  = 0;
	HANDLE writePipe = 0;
	const DWORD pipeBufferSize = 0x8000;
	assert(CreatePipe(&readPipe, &writePipe, NULL, pipeBufferSize));

	DWORD expectedBufferSize = pipeBufferSize;
	assert(expectedBufferSize == GetPipeAvailWriteBuffer(writePipe));

	// Test 1: nothing unexpected here.
	// Write some data to pipe, occupying portion of write buffer.
	{
		const DWORD size = 0x1000;
		WritePipe(writePipe, size);

		expectedBufferSize -= size;
		assert(expectedBufferSize == GetPipeAvailWriteBuffer(writePipe));
	}

	// Test 2: nothing unexpected here.
	// Read some of written data, releasing portion of write buffer.
	{
		const DWORD size = 0x0800;
		ReadPipe(readPipe, size);

		expectedBufferSize += size;
		assert(expectedBufferSize == GetPipeAvailWriteBuffer(writePipe));
	}

	// Test 3: nothing unexpected here.
	// Read remaining written data, releasing entire buffer.
	{
		const DWORD size = 0x0800;
		ReadPipe(readPipe, size);

		expectedBufferSize += size;
		assert(expectedBufferSize == GetPipeAvailWriteBuffer(writePipe));
	}

	// Test 4: that's the unexpected part.
	// Start reading the empty pipe and this reduces the *write* buffer.
	{
		ThreadReadParam param;
		param.Pipe = readPipe;
		param.Size = 0x8000;
		
		HANDLE thread = CreateThread(NULL, 0, ThreadReadPipe, &param, 0, NULL);
		Sleep(1000);
		
		expectedBufferSize -= param.Size;
		assert(expectedBufferSize == GetPipeAvailWriteBuffer(writePipe));

		// Write pipe to release thread
		WritePipe(writePipe, param.Size);
		WaitForSingleObject(thread, INFINITE);
		CloseHandle(thread);

		expectedBufferSize += param.Size;
		assert(expectedBufferSize == GetPipeAvailWriteBuffer(writePipe));
	}

	CloseHandle(writePipe);
	CloseHandle(readPipe);
}
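
For anyone who wants to run the demo above, here is a minimal entry point (an editor's addition, not part of the original mail; see the build note at the top of the snippet):

int main(void)
{
	Test();	// every assert() in the demo must hold; a failure aborts the run
	return 0;
}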

gitgitgadget bot commented Feb 13, 2020

On the Git mailing list, Eric Sunshine wrote (reply to this):

On Thu, Feb 13, 2020 at 1:40 PM Alexandr Miloslavskiy via GitGitGadget
<[email protected]> wrote:
> 3) Make `poll()` always reply "writable" for write end of the pipe
>    Afterall it seems that cygwin (accidentally?) does that for years.
>    Also, it should be noted that `pump_io_round()` writes 8MB blocks,
>    completely ignoring the fact that pipe's buffer size is only 8KB,
>    which means that pipe gets clogged many times during that single
>    write. This may invite a deadlock, if child's STDERR/STDOUT gets
>    clogged while it's trying to deal with 8MB of STDIN. Such deadlocks
>    could  be defeated with writing less then pipe's buffer size per

s/then/than/

>    round, and always reading everything from STDOUT/STDERR before
>    starting next round. Therefore, making `poll()` always reply
>    "writable" shouldn't cause any new issues or block any future
>    solutions.
> 4) Increase the size of the pipe's buffer
>    The difference between `BytesInQueue` and `QuotaUsed` is the size
>    of pending reads. Therefore, if buffer is bigger then size of reads,

s/then/than/

>    `poll()` won't hang so easily. However, I found that for example
>    `strbuf_read()` will get more and more hungry as it reads large inputs,
>    eventually surpassing any reasonable pipe buffer size.
> diff --git a/t/t3903-stash.sh b/t/t3903-stash.sh
> +test_expect_success 'stash handles large files' '
> +       printf "%1023s\n%.0s" "x" {1..16384} >large_file.txt &&
> +       git stash push --include-untracked -- large_file.txt
> +'

Use of {1..16384} is not portable across shells. You should be able to
achieve something similar by assigning a really large value to a shell
variable and then echoing that value to "large_file.txt". Something
like:

    x=0123456789
    x=$x$x$x$x$x$x$x$x$x$x
    x=$x$x$x$x$x$x$x$x$x$x
    ...and so on...
    echo $x >large_file.txt &&

or any other similar construct.

gitgitgadget bot commented Feb 13, 2020

On the Git mailing list, Alexandr Miloslavskiy wrote (reply to this):

On 13.02.2020 19:56, Eric Sunshine wrote:
>>     clogged while it's trying to deal with 8MB of STDIN. Such deadlocks
>>     could  be defeated with writing less then pipe's buffer size per
> 
> s/then/than/
> 
>>     of pending reads. Therefore, if buffer is bigger then size of reads,
> 
> s/then/than/
> 
>> +test_expect_success 'stash handles large files' '
>> +       printf "%1023s\n%.0s" "x" {1..16384} >large_file.txt &&
>> +       git stash push --include-untracked -- large_file.txt
>> +'
> 
> Use of {1..16384} is not portable across shells. You should be able to
> achieve something similar by assigning a really large value to a shell
> variable and then echoing that value to "large_file.txt". Something
> like:
> 
>      x=0123456789
>      x=$x$x$x$x$x$x$x$x$x$x
>      x=$x$x$x$x$x$x$x$x$x$x
>      ...and so on...
>      echo $x >large_file.txt &&
> 
> or any other similar construct.

Thanks for having a look! I will address these in V2 next week.

@SyntevoAlex (Author)

/submit

gitgitgadget bot commented Feb 17, 2020

Submitted as [email protected]

gitgitgadget bot commented Feb 17, 2020

On the Git mailing list, Eric Sunshine wrote (reply to this):

On Mon, Feb 17, 2020 at 11:26 AM Alexandr Miloslavskiy via
GitGitGadget <[email protected]> wrote:
> diff --git a/t/t3903-stash.sh b/t/t3903-stash.sh
> +test_expect_success 'stash handles large files' '
> +       x=0123456789abcde\n && # 16

Did you intend for the \n in this assignment to be a literal newline?
Every shell with which I tested treats it instead as an escaped 'n'.

> +       x=$x$x$x$x$x$x$x$x  && # 128
> +       x=$x$x$x$x$x$x$x$x  && # 1k
> +       x=$x$x$x$x$x$x$x$x  && # 8k
> +       x=$x$x$x$x$x$x$x$x  && # 64k
> +       x=$x$x$x$x$x$x$x$x  && # 512k
> +       x=$x$x$x$x$x$x$x$x  && # 4m
> +       x=$x$x              && # 8m
> +       echo $x >large_file.txt &&
> +       unset x             && # release memory

By the way, are the embedded newlines actually important to the test
itself, or are they just for human consumption if the test fails? I
ask because I was curious about how other tests create large files,
and found that a mechanism similar to your original (but without the
pitfalls) has been used. For instance, t1050-large.sh uses:

    printf "%2000000s" X >large1 &&

which is plenty portable and (presumably) doesn't have such demanding
memory consumption.

Explanation
-----------
The problem here is flawed `poll()` implementation. When it tries to
see if pipe can be written without blocking, it eventually calls
`NtQueryInformationFile()` and tests `WriteQuotaAvailable`. However,
the meaning of quota was misunderstood. The value of quota is reduced
when either some data was written to a pipe, *or* there is a pending
read on the pipe. Therefore, if there is a pending read of size >= then
the pipe's buffer size, poll() will think that pipe is not writable and
will hang forever, usually that means deadlocking both pipe users.

I have studied the problem and found that Windows pipes track two values:
`QuotaUsed` and `BytesInQueue`. The code in `poll()` apparently wants to
know `BytesInQueue` instead of quota. Unfortunately, `BytesInQueue` can
only be requested from read end of the pipe, while `poll()` receives
write end.
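
Editor's illustration (not part of the patch): `BytesInQueue` is what
`PeekNamedPipe()` reports as `TotalBytesAvail`, but that call requires a
handle with read access, which is exactly the handle that `poll()` does not
receive. A minimal sketch, assuming a read-end handle such as `readPipe`
from the demo earlier in this thread:

	/* Hypothetical helper: only works on the read end of the pipe. */
	static ULONG GetPipeBytesInQueue(HANDLE a_ReadEnd)
	{
		DWORD bytesInQueue = 0;
		if (!PeekNamedPipe(a_ReadEnd, NULL, 0, NULL, &bytesInQueue, NULL))
			return 0;	/* fails on the write end: no read access */
		return bytesInQueue;
	}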

The git's implementation of `poll()` was copied from gnulib, which also
contains a flawed implementation up to today.

I also had a look at implementation in cygwin, which is also broken in a
subtle way. It uses this code in `pipe_data_available()`:
	fpli.WriteQuotaAvailable = (fpli.OutboundQuota - fpli.ReadDataAvailable)
However, `ReadDataAvailable` always returns 0 for the write end of the pipe,
turning the code into an obfuscated version of returning pipe's total
buffer size, which I guess will in turn have `poll()` always say that pipe
is writable. The commit that introduced the code doesn't say anything about
this change, so it could be some debugging code that slipped in.

These are the typical sizes used in git:
0x2000 - default read size in `strbuf_read()`
0x1000 - default read size in CRT, used by `strbuf_getwholeline()`
0x2000 - pipe buffer size in compat\mingw.c

As a consequence, as soon as child process uses `strbuf_read()`,
`poll()` in parent process will hang forever, deadlocking both
processes.

This results in two observable behaviors:
1) If parent process begins sending STDIN quickly (and usually that's
   the case), then first `poll()` will succeed and first block will go
   through. MAX_IO_SIZE_DEFAULT is 8MB, so if STDIN exceeds 8MB, then
   it will deadlock.
2) If parent process waits a little bit for any reason (including OS
   scheduler) and child is first to issue `strbuf_read()`, then it will
   deadlock immediately even on small STDINs.

The problem is illustrated by `git stash push`, which will currently
read the entire patch into memory and then send it to `git apply` via
STDIN. If patch exceeds 8MB, git hangs on Windows.

Possible solutions
------------------
1) Somehow obtain `BytesInQueue` instead of `QuotaUsed`
   I did a pretty thorough search and didn't find any ways to obtain
   the value from write end of the pipe.
2) Also give read end of the pipe to `poll()`
   That can be done, but it will probably invite some dirty code,
   because `poll()`
   * can accept multiple pipes at once
   * can accept things that are not pipes
   * is expected to have a well known signature.
3) Make `poll()` always reply "writable" for write end of the pipe
   Afterall it seems that cygwin (accidentally?) does that for years.
   Also, it should be noted that `pump_io_round()` writes 8MB blocks,
   completely ignoring the fact that pipe's buffer size is only 8KB,
   which means that pipe gets clogged many times during that single
   write. This may invite a deadlock, if child's STDERR/STDOUT gets
   clogged while it's trying to deal with 8MB of STDIN. Such deadlocks
   could be defeated with writing less than pipe's buffer size per
   round, and always reading everything from STDOUT/STDERR before
   starting next round. Therefore, making `poll()` always reply
   "writable" shouldn't cause any new issues or block any future
   solutions.
4) Increase the size of the pipe's buffer
   The difference between `BytesInQueue` and `QuotaUsed` is the size
   of pending reads. Therefore, if buffer is bigger than size of reads,
   `poll()` won't hang so easily. However, I found that for example
   `strbuf_read()` will get more and more hungry as it reads large inputs,
   eventually surpassing any reasonable pipe buffer size.

Chosen solution
---------------
Make `poll()` always reply "writable" for write end of the pipe.
Hopefully one day someone will find a way to implement it properly.

Reproduction
------------
printf "%8388608s" X >large_file.txt
git stash push --include-untracked -- large_file.txt

I have decided not to include this as test to avoid slowing down the
test suite. I don't expect the specific problem to come back, and
chances are that `git stash push` will be reworked to avoid sending the
entire patch via STDIN.

Signed-off-by: Alexandr Miloslavskiy <[email protected]>
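
To make the chosen solution concrete, here is an editor's sketch, not the
actual change to compat/poll/poll.c: the helper name and structure below are
invented for illustration, and only `GetFileType()` / `FILE_TYPE_PIPE` come
from the real Win32 API. It shows what "always reply writable for the write
end of a pipe" can look like in a poll()-style emulation:

	/* Sketch only (hypothetical helper): should a handle that is being
	 * polled for writability be reported as writable? */
	static int pipe_reports_writable(HANDLE h)
	{
		if (GetFileType(h) != FILE_TYPE_PIPE)
			return 0;	/* not a pipe: use the regular checks */

		/*
		 * Chosen approach: the write end of a pipe is always reported
		 * as writable.  WriteQuotaAvailable is not a usable signal,
		 * because a pending read on the other end also consumes quota.
		 */
		return 1;
	}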

gitgitgadget bot commented Feb 17, 2020

On the Git mailing list, Junio C Hamano wrote (reply to this):

Eric Sunshine <[email protected]> writes:

> ... For instance, t1050-large.sh uses:
>
>     printf "%2000000s" X >large1 &&
>
> which is plenty portable and (presumably) doesn't have such demanding
> memory consumption.

Yes, I had the exact same reaction to echoing large string with
literal backslash-en in it ;-)  Thanks for reviewing and teaching.

@SyntevoAlex (Author)

/submit

gitgitgadget bot commented Feb 17, 2020

Submitted as [email protected]

gitgitgadget bot commented Feb 17, 2020

On the Git mailing list, Alexandr Miloslavskiy wrote (reply to this):

On 17.02.2020 18:24, Eric Sunshine wrote:
>> +       x=0123456789abcde\n && # 16
> 
> Did you intend for the \n in this assignment to be a literal newline?
> Every shell with which I tested treats it instead as an escaped 'n'.

I'm such a novice shell script writer :(
Yes, I intended a newline.

> By the way, are the embedded newlines actually important to the test
> itself, or are they just for human consumption if the test fails? I
> ask because I was curious about how other tests create large files,
> and found that a mechanism similar to your original (but without the
> pitfalls) has been used. For instance, t1050-large.sh uses:
> 
>      printf "%2000000s" X >large1 &&
> 
> which is plenty portable and (presumably) doesn't have such demanding
> memory consumption.

They are not important to the test; the test only needs to internally
have an 8+ MB patch.

This only comes from my feeling that super-large lines could cause other 
unexpected things, such as hitting various completely reasonable limits 
and/or causing unwanted slowdowns. Frankly, I didn't test.

Frankly, I already had concerns about adding the test. Now I have
re-evaluated things and finally decided to move the test into the commit
message instead. With that, all compatibility etc. questions are resolved.

gitgitgadget bot commented Feb 18, 2020

On the Git mailing list, Junio C Hamano wrote (reply to this):

"Alexandr Miloslavskiy via GitGitGadget" <[email protected]>
writes:

> From: Alexandr Miloslavskiy <[email protected]>
>
> Explanation
> -----------
> The problem here is flawed `poll()` implementation. When it tries to
> see if pipe can be written without blocking, it eventually calls
> `NtQueryInformationFile()` and tests `WriteQuotaAvailable`. However,
> the meaning of quota was misunderstood. The value of quota is reduced
> when either some data was written to a pipe, *or* there is a pending
> read on the pipe. Therefore, if there is a pending read of size >= then
> the pipe's buffer size, poll() will think that pipe is not writable and
> will hang forever, usually that means deadlocking both pipe users.
> ...
> Chosen solution
> ---------------
> Make `poll()` always reply "writable" for write end of the pipe.
> Hopefully one day someone will find a way to implement it properly.
>
> Reproduction
> ------------
> printf "%8388608s" X >large_file.txt
> git stash push --include-untracked -- large_file.txt
>
> I have decided not to include this as test to avoid slowing down the
> test suite. I don't expect the specific problem to come back, and
> chances are that `git stash push` will be reworked to avoid sending the
> entire patch via STDIN.
>
> Signed-off-by: Alexandr Miloslavskiy <[email protected]>
> ---

Thanks for a detailed description.

I notice that we saw no comments from Windows experts for these
three rounds.  Can somebody give an Ack (or nack) on it at least?

I picked obvious "experts" in the output from 

    $ git shortlog --since=1.year --no-merges master compat/ming\* compat/win\*

Thanks.

gitgitgadget bot commented Feb 18, 2020

This branch is now known as am/mingw-poll-fix.

gitgitgadget bot commented Feb 18, 2020

This patch series was integrated into pu via git@9878f09.

gitgitgadget bot added the pu label Feb 18, 2020

gitgitgadget bot commented Feb 19, 2020

This patch series was integrated into pu via git@a19ddba.

gitgitgadget bot commented Feb 21, 2020

This patch series was integrated into pu via git@fd2540e.

gitgitgadget bot commented Feb 21, 2020

This patch series was integrated into pu via git@ccff6a0.

gitgitgadget bot commented Feb 22, 2020

This patch series was integrated into pu via git@4b7a48e.

gitgitgadget bot commented Feb 24, 2020

This patch series was integrated into pu via git@4dab768.

gitgitgadget bot commented Feb 25, 2020

This patch series was integrated into pu via git@55cffcc.

gitgitgadget bot commented Feb 25, 2020

This patch series was integrated into pu via git@0bbfb1e.

gitgitgadget bot commented Feb 26, 2020

This patch series was integrated into pu via git@af93452.

gitgitgadget bot commented Feb 27, 2020

On the Git mailing list, Johannes Schindelin wrote (reply to this):

Hi Junio & Alex,

On Tue, 18 Feb 2020, Junio C Hamano wrote:

> "Alexandr Miloslavskiy via GitGitGadget" <[email protected]>
> writes:
>
> > From: Alexandr Miloslavskiy <[email protected]>
> >
> > Explanation
> > -----------
> > The problem here is flawed `poll()` implementation. When it tries to
> > see if pipe can be written without blocking, it eventually calls
> > `NtQueryInformationFile()` and tests `WriteQuotaAvailable`. However,
> > the meaning of quota was misunderstood. The value of quota is reduced
> > when either some data was written to a pipe, *or* there is a pending
> > read on the pipe. Therefore, if there is a pending read of size >= then

I usually try to refrain from grammar policing, but in this case, the typo
"then" (instead of "than") threw me.

Other than that, I think the patch is fine. At least it works as
advertised in my hands.

Thanks,
Dscho

> > the pipe's buffer size, poll() will think that pipe is not writable and
> > will hang forever, usually that means deadlocking both pipe users.
> > ...
> > Chosen solution
> > ---------------
> > Make `poll()` always reply "writable" for write end of the pipe.
> > Hopefully one day someone will find a way to implement it properly.
> >
> > Reproduction
> > ------------
> > printf "%8388608s" X >large_file.txt
> > git stash push --include-untracked -- large_file.txt
> >
> > I have decided not to include this as test to avoid slowing down the
> > test suite. I don't expect the specific problem to come back, and
> > chances are that `git stash push` will be reworked to avoid sending the
> > entire patch via STDIN.
> >
> > Signed-off-by: Alexandr Miloslavskiy <[email protected]>
> > ---
>
> Thanks for a detailed description.
>
> I notice that we saw no comments from Windows experts for these
> three rounds.  Can somebody give an Ack (or nack) on it at least?
>
> I picked obvious "experts" in the output from
>
>     $ git shortlog --since=1.year --no-merges master compat/ming\* compat/win\*
>
> Thanks.
>

gitgitgadget bot commented Feb 27, 2020

This patch series was integrated into pu via git@b01f21a.

gitgitgadget bot commented Feb 27, 2020

On the Git mailing list, Junio C Hamano wrote (reply to this):

Johannes Schindelin <[email protected]> writes:

> I usually try to refrain from grammar policing, but in this case, the typo
> "then" (instead of "than") threw me.
>
> Other than that, I think the patch is fine. At least it works as
> advertised in my hands.

Thanks, both.

Let's mark it for 'next', then.

gitgitgadget bot commented Feb 28, 2020

This patch series was integrated into pu via git@6c6dc3d.

gitgitgadget bot commented Feb 28, 2020

This patch series was integrated into pu via git@0661d60.

gitgitgadget bot commented Mar 2, 2020

This patch series was integrated into pu via git@6f69a6f.

gitgitgadget bot commented Mar 3, 2020

This patch series was integrated into pu via git@1aecd4a.

gitgitgadget bot commented Mar 3, 2020

This patch series was integrated into next via git@7082619.

gitgitgadget bot added the next label Mar 3, 2020

gitgitgadget bot commented Mar 5, 2020

This patch series was integrated into pu via git@135942e.

gitgitgadget bot commented Mar 6, 2020

This patch series was integrated into pu via git@31c28f5.

gitgitgadget bot commented Mar 9, 2020

This patch series was integrated into pu via git@1c8479b.

gitgitgadget bot commented Mar 10, 2020

This patch series was integrated into pu via git@1ac37de.

gitgitgadget bot commented Mar 10, 2020

This patch series was integrated into next via git@1ac37de.

gitgitgadget bot commented Mar 10, 2020

This patch series was integrated into master via git@1ac37de.

gitgitgadget bot added the master label Mar 10, 2020
gitgitgadget bot closed this Mar 10, 2020

gitgitgadget bot commented Mar 10, 2020

Closed via 1ac37de.
