
Commit 9c5a86d

[3.13] gh-128519: Align the docstring of untokenize() to match the docs (GH-128521) (#128531)
(cherry picked from commit aef52ca) Co-authored-by: Tomas R <[email protected]>
1 parent 838e8a2


Lib/tokenize.py

Lines changed: 4 additions & 10 deletions
@@ -318,16 +318,10 @@ def untokenize(iterable):
     with at least two elements, a token number and token value. If
     only two tokens are passed, the resulting output is poor.
 
-    Round-trip invariant for full input:
-        Untokenized source will match input source exactly
-
-    Round-trip invariant for limited input:
-        # Output bytes will tokenize back to the input
-        t1 = [tok[:2] for tok in tokenize(f.readline)]
-        newcode = untokenize(t1)
-        readline = BytesIO(newcode).readline
-        t2 = [tok[:2] for tok in tokenize(readline)]
-        assert t1 == t2
+    The result is guaranteed to tokenize back to match the input so
+    that the conversion is lossless and round-trips are assured.
+    The guarantee applies only to the token type and token string as
+    the spacing between tokens (column positions) may change.
     """
     ut = Untokenizer()
     out = ut.untokenize(iterable)
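
A minimal sketch of the round-trip guarantee the updated docstring describes, assuming a small byte-string source; the names source, t1, newcode, and t2 are illustrative (they echo the example removed from the docstring) and are not part of the commit:

    from io import BytesIO
    from tokenize import tokenize, untokenize

    source = b"x = 1 + 2\n"

    # Keep only (token type, token string) pairs from the original source.
    t1 = [tok[:2] for tok in tokenize(BytesIO(source).readline)]

    # untokenize() returns bytes, encoded using the leading ENCODING token.
    newcode = untokenize(t1)

    # Re-tokenizing the output yields the same (type, string) pairs,
    # even though spacing (column positions) may differ from the input.
    t2 = [tok[:2] for tok in tokenize(BytesIO(newcode).readline)]
    assert t1 == t2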
