Commit f9aac4a

[3.12] gh-128519: Align the docstring of untokenize() to match the docs (GH-128521) (#128532)
(cherry picked from commit aef52ca)
Co-authored-by: Tomas R <[email protected]>
1 parent 4016be2 commit f9aac4a

1 file changed: +4, -10 lines changed
Lib/tokenize.py

Lines changed: 4 additions & 10 deletions
@@ -320,16 +320,10 @@ def untokenize(iterable):
     with at least two elements, a token number and token value.  If
     only two tokens are passed, the resulting output is poor.
 
-    Round-trip invariant for full input:
-        Untokenized source will match input source exactly
-
-    Round-trip invariant for limited input:
-        # Output bytes will tokenize back to the input
-        t1 = [tok[:2] for tok in tokenize(f.readline)]
-        newcode = untokenize(t1)
-        readline = BytesIO(newcode).readline
-        t2 = [tok[:2] for tok in tokenize(readline)]
-        assert t1 == t2
+    The result is guaranteed to tokenize back to match the input so
+    that the conversion is lossless and round-trips are assured.
+    The guarantee applies only to the token type and token string as
+    the spacing between tokens (column positions) may change.
     """
     ut = Untokenizer()
     out = ut.untokenize(iterable)
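The limited-input round-trip that the new docstring wording describes can be demonstrated with a short sketch (the source string below is an invented example, not from the commit):

```python
from io import BytesIO
from tokenize import tokenize, untokenize

# An oddly spaced source line (hypothetical example input).
source = b"x = 1 +  2\n"

# "Limited input": keep only (token type, token string) pairs.
t1 = [tok[:2] for tok in tokenize(BytesIO(source).readline)]
newcode = untokenize(t1)

# Spacing (column positions) may change in the regenerated bytes...
assert newcode != source
# ...but the output tokenizes back to the same (type, string) pairs.
t2 = [tok[:2] for tok in tokenize(BytesIO(newcode).readline)]
assert t1 == t2
```

This is exactly the guarantee the docstring now states: the conversion is lossless with respect to token type and token string, while exact column positions are not preserved.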
