Commit aef52ca

gh-128519: Align the docstring of untokenize() to match the docs (#128521)
1 parent a62ba52 commit aef52ca

1 file changed: Lib/tokenize.py (+4, -10 lines changed)

Lib/tokenize.py
@@ -318,16 +318,10 @@ def untokenize(iterable):
     with at least two elements, a token number and token value.  If
     only two tokens are passed, the resulting output is poor.
 
-    Round-trip invariant for full input:
-        Untokenized source will match input source exactly
-
-    Round-trip invariant for limited input:
-        # Output bytes will tokenize back to the input
-        t1 = [tok[:2] for tok in tokenize(f.readline)]
-        newcode = untokenize(t1)
-        readline = BytesIO(newcode).readline
-        t2 = [tok[:2] for tok in tokenize(readline)]
-        assert t1 == t2
+    The result is guaranteed to tokenize back to match the input so
+    that the conversion is lossless and round-trips are assured.
+    The guarantee applies only to the token type and token string as
+    the spacing between tokens (column positions) may change.
     """
     ut = Untokenizer()
     out = ut.untokenize(iterable)
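
A minimal sketch of the round-trip behavior described by the new docstring text, adapted from the example removed from the old docstring; the sample source line is an assumption chosen for illustration.

    from io import BytesIO
    from tokenize import tokenize, untokenize

    # Hypothetical sample input, chosen only to demonstrate the guarantee.
    source = b"x = 1  +  2\n"

    # Keep only token type and string (the "limited input" case).
    t1 = [tok[:2] for tok in tokenize(BytesIO(source).readline)]
    newcode = untokenize(t1)

    # Column positions (spacing) may differ in newcode, but re-tokenizing
    # the output yields the same (type, string) pairs.
    t2 = [tok[:2] for tok in tokenize(BytesIO(newcode).readline)]
    assert t1 == t2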
