@@ -146,10 +146,10 @@ inside f-strings can now be any valid Python expression including backslashes,
 unicode escaped sequences, multi-line expressions, comments and strings reusing the
 same quote as the containing f-string. Let's cover these in detail:
 
-* Quote reuse: in Python 3.11, reusing the same quotes as the contaning f-string
+* Quote reuse: in Python 3.11, reusing the same quotes as the containing f-string
   raises a :exc:`SyntaxError`, forcing the user to either use other available
-  quotes (like using double quotes or triple quites if the f-string uses single
-  quites). In Python 3.12, you can now do things like this:
+  quotes (like using double quotes or triple quotes if the f-string uses single
+  quotes). In Python 3.12, you can now do things like this:
 
     >>> songs = ['Take me back to Eden', 'Alkaline', 'Ascensionism']
    >>> f"This is the playlist: {", ".join(songs)}"
@@ -158,7 +158,7 @@ same quote as the containing f-string. Let's cover these in detail:
   Note that before this change there was no explicit limit in how f-strings can
   be nested, but the fact that string quotes cannot be reused inside the
   expression component of f-strings made it impossible to nest f-strings
-  arbitrarily. In fact, this is the most nested-fstring that can be written:
+  arbitrarily. In fact, this is the most nested f-string that could be written:
 
     >>> f"""{f'''{f'{f"{1+1}"}'}'''}"""
     '2'
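The quote reuse and nesting behavior changed by the two hunks above can be exercised directly; a minimal sketch, with the f-strings built via `eval()` so the file still parses on Python < 3.12, where reusing the enclosing quote is a `SyntaxError`:

```python
import sys

# Sketch of the PEP 701 behaviour described above. The quote-reusing
# f-strings are compiled at runtime via eval() so this file remains
# parseable on interpreters older than 3.12.
songs = ['Take me back to Eden', 'Alkaline', 'Ascensionism']

if sys.version_info >= (3, 12):
    # Double quotes reused inside the replacement field:
    playlist = eval('f"This is the playlist: {", ".join(songs)}"')
    print(playlist)  # This is the playlist: Take me back to Eden, Alkaline, Ascensionism

    # Nesting is no longer limited by quote exhaustion; every level
    # can reuse the same double quotes:
    nested = eval('f"{f"{f"{f"{1 + 1}"}"}"}"')
    print(nested)  # 2
else:
    print('quote reuse inside f-strings needs Python 3.12+')
```

On 3.11 the nested example would have to alternate through all four quote styles, which is exactly the four-level ceiling the text describes.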
@@ -1280,10 +1280,10 @@ Changes in the Python API
 
 * The output of the :func:`tokenize.tokenize` and :func:`tokenize.generate_tokens`
   functions is now changed due to the changes introduced in :pep:`701`. This
-  means that ``STRING`` tokens are not emited anymore for f-strings and the
+  means that ``STRING`` tokens are not emitted anymore for f-strings and the
   tokens described in :pep:`701` are now produced instead: ``FSTRING_START``,
-  ``FSRING_MIDDLE`` and ``FSTRING_END`` are now emited for f-string "string"
-  parts in addition to the the apropiate tokens for the tokenization in the
+  ``FSTRING_MIDDLE`` and ``FSTRING_END`` are now emitted for f-string "string"
+  parts in addition to the appropriate tokens for the tokenization in the
   expression components. For example for the f-string ``f"start {1+1} end"``
   the old version of the tokenizer emitted::
 
@@ -1301,7 +1301,7 @@ Changes in the Python API
       1,13-1,17:          FSTRING_MIDDLE ' end'
       1,17-1,18:          FSTRING_END    '"'
 
-  Aditionally, final ``DEDENT`` tokens are now emited within the bounds of the
+  Additionally, final ``DEDENT`` tokens are now emitted within the bounds of the
   input. This means that for a file containing 3 lines, the old version of the
   tokenizer returned a ``DEDENT`` token in line 4 whilst the new version returns
   the token in line 3.
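The new token stream this hunk documents can be inspected with the stdlib ``tokenize`` module; a minimal sketch that collects the token names for the example f-string from the text (the ``FSTRING_*`` names assume Python 3.12):

```python
import io
import sys
import tokenize

# Tokenize the example f-string from the text and collect the token names.
# On Python 3.12+ (PEP 701) the string parts arrive as FSTRING_START /
# FSTRING_MIDDLE / FSTRING_END and the expression between the braces is
# tokenized normally; on 3.11 the whole literal is a single STRING token.
source = 'f"start {1+1} end"\n'
names = [
    tokenize.tok_name[tok.type]
    for tok in tokenize.generate_tokens(io.StringIO(source).readline)
]
print(names)
```

Running this under both interpreters is a quick way to see whether downstream tooling that pattern-matches on ``STRING`` tokens needs updating.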