Coverage Report

Created: 2025-08-26 06:26

/src/cpython/Python/perf_trampoline.c
/*

Perf trampoline instrumentation
===============================

This file contains instrumentation to allow associating calls to the
CPython eval loop back to the names of the Python functions and
filenames being executed.

Many native performance profilers like the Linux perf tools are only
able to 'see' the C stack when sampling from the profiled process.
This means that if we have the following python code:

    import time
    def foo(n):
        # Some CPU intensive code

    def bar(n):
        foo(n)

    def baz(n):
        bar(n)

    baz(10000000)

A performance profiler that is only able to see native frames will
produce the following backtrace when sampling from foo():

    _PyEval_EvalFrameDefault -----> Evaluation frame of foo()
    _PyEval_Vector
    _PyFunction_Vectorcall
    PyObject_Vectorcall
    call_function

    _PyEval_EvalFrameDefault ------> Evaluation frame of bar()
    _PyEval_EvalFrame
    _PyEval_Vector
    _PyFunction_Vectorcall
    PyObject_Vectorcall
    call_function

    _PyEval_EvalFrameDefault -------> Evaluation frame of baz()
    _PyEval_EvalFrame
    _PyEval_Vector
    _PyFunction_Vectorcall
    PyObject_Vectorcall
    call_function

    ...

    Py_RunMain

Because the profiler is only able to see the native frames, and the native
function that runs the evaluation loop is always the same
(_PyEval_EvalFrameDefault), neither the profiler nor any report generated
from its samples can associate the names of the Python functions and the
filenames with those calls, rendering the results useless in the Python world.

To fix this problem, we introduce the concept of a trampoline frame. A
trampoline frame is a piece of code that is unique per Python code object and
that is executed before entering the CPython eval loop. This piece of code
just calls the original Python evaluation function (_PyEval_EvalFrameDefault)
and forwards all the arguments it received. In this way, when a profiler
samples frames from the previous example it will see:

    _PyEval_EvalFrameDefault -----> Evaluation frame of foo()
    [Jit compiled code 3]
    _PyEval_Vector
    _PyFunction_Vectorcall
    PyObject_Vectorcall
    call_function

    _PyEval_EvalFrameDefault ------> Evaluation frame of bar()
    [Jit compiled code 2]
    _PyEval_EvalFrame
    _PyEval_Vector
    _PyFunction_Vectorcall
    PyObject_Vectorcall
    call_function

    _PyEval_EvalFrameDefault -------> Evaluation frame of baz()
    [Jit compiled code 1]
    _PyEval_EvalFrame
    _PyEval_Vector
    _PyFunction_Vectorcall
    PyObject_Vectorcall
    call_function

    ...

    Py_RunMain

When we generate each unique copy of the trampoline (what we called "[Jit
compiled code N]" above) we record the relationship between the compiled code
and the Python function associated with it. Every profiler requires this
information in a different format. For example, the Linux "perf" profiler
requires a file at "/tmp/perf-PID.map" (name and location not configurable)
with the following format:

    <compiled code address> <compiled code size> <name of the compiled code>

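For instance, an entry for the baz() function above might look like this
(the address, size, and path are illustrative, not real values):

    7f1234560000 100 py::baz:/home/user/example.py
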
If this file is available when "perf" generates reports, it will automatically
associate every trampoline with the Python function it belongs to, allowing it
to generate reports that include Python information. These reports can then
also be filtered so that *only* Python information appears.

Notice that for this to work there must be a unique copy of the trampoline per
Python code object, even if the code in the trampoline is the same. To achieve
this we have an assembly template in Objects/asm_trampoline.S that is compiled
into the Python executable/shared library. This template generates a symbol
that marks the start of the assembly code and another that marks the end of
the assembly code for the trampoline. Then, every time we need a unique
trampoline for a Python code object, we copy the assembly code into an
mmap-ed area that has executable permissions and we return the start of that
area as our trampoline function.

Asking for an mmap-ed memory area for every trampoline would be very wasteful,
so we allocate big arenas of memory in a single mmap call, populate the entire
arena with copies of the trampoline (this allows us to avoid invalidating the
icache separately for every trampoline) and then return the next available
chunk every time someone asks for a new trampoline. We keep a linked list of
arenas in case the current memory arena is exhausted and another one is
needed.

For the best results, Python should be compiled with
CFLAGS="-fno-omit-frame-pointer -mno-omit-leaf-frame-pointer" as this allows
profilers to unwind using only the frame pointer and not rely on DWARF debug
information (note that as trampolines are dynamically generated there won't be
any DWARF information available for them).
*/
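
/* A minimal usage sketch (the script name and the exact perf options below
   are illustrative, not part of this file). The trampoline can be enabled at
   interpreter startup with -X perf, after which perf can attach to the
   running process:

       $ python -X perf my_script.py &
       $ perf record -F 9999 -g -p $!
       $ perf report

   With the trampoline active, perf reads /tmp/perf-PID.map and shows the
   py::<qualname>:<filename> entries instead of anonymous JIT frames.
*/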

#include "Python.h"
#include "pycore_ceval.h"         // _PyPerf_Callbacks
#include "pycore_interpframe.h"   // _PyFrame_GetCode()
#include "pycore_runtime.h"       // _PyRuntime


#ifdef PY_HAVE_PERF_TRAMPOLINE

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>             // mmap()
#include <sys/types.h>
#include <unistd.h>               // sysconf()
#include <sys/time.h>             // gettimeofday()


#if defined(__arm__) || defined(__arm64__) || defined(__aarch64__)
#define PY_HAVE_INVALIDATE_ICACHE

#if defined(__clang__) || defined(__GNUC__)
extern void __clear_cache(void *, void *);
#endif

static void invalidate_icache(char *begin, char *end) {
#if defined(__clang__) || defined(__GNUC__)
    return __clear_cache(begin, end);
#else
    return;
#endif
}
#endif

/* The function pointer is passed as the last argument. The other three
 * arguments are passed in the same order as the evaluator requires. This
 * results in shorter, more efficient assembly code for the trampoline.
 */
typedef PyObject *(*py_evaluator)(PyThreadState *, _PyInterpreterFrame *,
                                  int throwflag);
typedef PyObject *(*py_trampoline)(PyThreadState *, _PyInterpreterFrame *, int,
                                   py_evaluator);

extern void *_Py_trampoline_func_start;  // Start of the template of the
                                         // assembly trampoline
extern void *_Py_trampoline_func_end;    // End of the template of the
                                         // assembly trampoline

struct code_arena_st {
    char *start_addr;    // Start of the memory arena
    char *current_addr;  // Address of the next free trampoline slot within the arena
    size_t size;         // Size of the memory arena
    size_t size_left;    // Remaining size of the memory arena
    size_t code_size;    // Size of the code of every trampoline in the arena
    struct code_arena_st
        *prev;  // Pointer to the previous arena, or NULL if this is the first arena.
};

typedef struct code_arena_st code_arena_t;
typedef struct trampoline_api_st trampoline_api_t;

enum perf_trampoline_type {
    PERF_TRAMPOLINE_UNSET = 0,
    PERF_TRAMPOLINE_TYPE_MAP = 1,
    PERF_TRAMPOLINE_TYPE_JITDUMP = 2,
};

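/* Illustrative arena layout (chunk sizes are examples, not the real values):

       start_addr
       v
       +-------------+-------------+-------------+---- ... ----+
       | trampoline  | trampoline  | trampoline  |   unused    |
       | copy #0     | copy #1     | copy #2     |             |
       +-------------+-------------+-------------+---- ... ----+
                     ^
                     current_addr after one trampoline has been handed out

   Each chunk is code_size bytes of copied template plus padding, rounded up
   to trampoline_api.code_alignment.
*/
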
#define perf_status _PyRuntime.ceval.perf.status
#define extra_code_index _PyRuntime.ceval.perf.extra_code_index
#define perf_code_arena _PyRuntime.ceval.perf.code_arena
#define trampoline_api _PyRuntime.ceval.perf.trampoline_api
#define perf_map_file _PyRuntime.ceval.perf.map_file
#define persist_after_fork _PyRuntime.ceval.perf.persist_after_fork
#define perf_trampoline_type _PyRuntime.ceval.perf.perf_trampoline_type
#define prev_eval_frame _PyRuntime.ceval.perf.prev_eval_frame

static void
perf_map_write_entry(void *state, const void *code_addr,
                     unsigned int code_size, PyCodeObject *co)
{
    const char *entry = "";
    if (co->co_qualname != NULL) {
        entry = PyUnicode_AsUTF8(co->co_qualname);
    }
    const char *filename = "";
    if (co->co_filename != NULL) {
        filename = PyUnicode_AsUTF8(co->co_filename);
    }
    size_t perf_map_entry_size = snprintf(NULL, 0, "py::%s:%s", entry, filename) + 1;
    char *perf_map_entry = (char *) PyMem_RawMalloc(perf_map_entry_size);
    if (perf_map_entry == NULL) {
        return;
    }
    snprintf(perf_map_entry, perf_map_entry_size, "py::%s:%s", entry, filename);
    PyUnstable_WritePerfMapEntry(code_addr, code_size, perf_map_entry);
    PyMem_RawFree(perf_map_entry);
}

static void *
perf_map_init_state(void)
{
    PyUnstable_PerfMapState_Init();
    trampoline_api.code_padding = 0;
    trampoline_api.code_alignment = 32;
    perf_trampoline_type = PERF_TRAMPOLINE_TYPE_MAP;
    return NULL;
}

static int
perf_map_free_state(void *state)
{
    PyUnstable_PerfMapState_Fini();
    return 0;
}

_PyPerf_Callbacks _Py_perfmap_callbacks = {
    &perf_map_init_state,
    &perf_map_write_entry,
    &perf_map_free_state,
};

static size_t round_up(int64_t value, int64_t multiple) {
    if (multiple == 0) {
        // Avoid division by zero
        return value;
    }

    int64_t remainder = value % multiple;
    if (remainder == 0) {
        // Value is already a multiple of 'multiple'
        return value;
    }

    // Calculate the difference to the next multiple
    int64_t difference = multiple - remainder;

    // Add the difference to the value
    int64_t rounded_up_value = value + difference;

    return rounded_up_value;
}
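
/* For example, round_up(100, 32) returns 128 (100 % 32 == 4, so 28 is added),
   while round_up(96, 32) returns 96 because it is already a multiple of 32. */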

// TRAMPOLINE MANAGEMENT API

static int
new_code_arena(void)
{
    // Non-trivial programs typically need 64 to 256 KiB.
    size_t mem_size = 4096 * 16;
    assert(mem_size % sysconf(_SC_PAGESIZE) == 0);
    char *memory =
        mmap(NULL,  // address
             mem_size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS,
             -1,  // fd (not used here)
             0);  // offset (not used here)
    if (memory == MAP_FAILED) {
        PyErr_SetFromErrno(PyExc_OSError);
        PyErr_FormatUnraisable("Failed to create new mmap for perf trampoline");
        perf_status = PERF_STATUS_FAILED;
        return -1;
    }
    void *start = &_Py_trampoline_func_start;
    void *end = &_Py_trampoline_func_end;
    size_t code_size = (char *)end - (char *)start;
    size_t unaligned_size = code_size + trampoline_api.code_padding;
    size_t chunk_size = round_up(unaligned_size, trampoline_api.code_alignment);
    assert(chunk_size % trampoline_api.code_alignment == 0);
    // TODO: Check the effect of alignment of the code chunks. Initial investigation
    // showed that this has no effect on performance in x86-64 or aarch64 and the current
    // version has the advantage that the unwinder in GDB can unwind across JIT-ed code.
    //
    // We should check the values in the future and see if there is a
    // measurable performance improvement by rounding trampolines up to 32-bit
    // or 64-bit alignment.

    size_t n_copies = mem_size / chunk_size;
    for (size_t i = 0; i < n_copies; i++) {
        memcpy(memory + i * chunk_size, start, code_size * sizeof(char));
    }
    // Some systems may prevent us from creating executable code on the fly.
    int res = mprotect(memory, mem_size, PROT_READ | PROT_EXEC);
    if (res == -1) {
        PyErr_SetFromErrno(PyExc_OSError);
        munmap(memory, mem_size);
        PyErr_FormatUnraisable("Failed to set mmap for perf trampoline to "
                               "PROT_READ | PROT_EXEC");
        return -1;
    }

#ifdef PY_HAVE_INVALIDATE_ICACHE
    // Before the JIT can run a block of code that has been emitted it must
    // invalidate the instruction cache on some platforms like arm and aarch64.
    invalidate_icache(memory, memory + mem_size);
#endif

    code_arena_t *new_arena = PyMem_RawCalloc(1, sizeof(code_arena_t));
    if (new_arena == NULL) {
        PyErr_NoMemory();
        munmap(memory, mem_size);
        PyErr_FormatUnraisable("Failed to allocate new code arena struct for perf trampoline");
        return -1;
    }

    new_arena->start_addr = memory;
    new_arena->current_addr = memory;
    new_arena->size = mem_size;
    new_arena->size_left = mem_size;
    new_arena->code_size = code_size;
    new_arena->prev = perf_code_arena;
    perf_code_arena = new_arena;
    return 0;
}
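
/* Worked example (the 64-byte chunk size is hypothetical; the real value
   depends on the assembly template, code_padding and code_alignment): with
   mem_size = 4096 * 16 = 65536 bytes and chunk_size = 64, a single arena
   provides 65536 / 64 = 1024 trampoline copies before a new arena is needed. */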

static void
free_code_arenas(void)
{
    code_arena_t *cur = perf_code_arena;
    code_arena_t *prev;
    perf_code_arena = NULL;  // invalidate the static pointer
    while (cur) {
        munmap(cur->start_addr, cur->size);
        prev = cur->prev;
        PyMem_RawFree(cur);
        cur = prev;
    }
}

static inline py_trampoline
code_arena_new_code(code_arena_t *code_arena)
{
    py_trampoline trampoline = (py_trampoline)code_arena->current_addr;
    size_t total_code_size = round_up(code_arena->code_size + trampoline_api.code_padding,
                                      trampoline_api.code_alignment);
    assert(total_code_size % trampoline_api.code_alignment == 0);
    code_arena->size_left -= total_code_size;
    code_arena->current_addr += total_code_size;
    return trampoline;
}

static inline py_trampoline
compile_trampoline(void)
{
    // Make sure an arena exists before dereferencing perf_code_arena.
    if (perf_code_arena == NULL) {
        if (new_code_arena() < 0) {
            return NULL;
        }
    }
    size_t total_code_size = round_up(perf_code_arena->code_size + trampoline_api.code_padding,
                                      trampoline_api.code_alignment);
    if (perf_code_arena->size_left <= total_code_size) {
        if (new_code_arena() < 0) {
            return NULL;
        }
    }
    assert(perf_code_arena->size_left <= perf_code_arena->size);
    return code_arena_new_code(perf_code_arena);
}

static PyObject *
py_trampoline_evaluator(PyThreadState *ts, _PyInterpreterFrame *frame,
                        int throw)
{
    if (perf_status == PERF_STATUS_FAILED ||
        perf_status == PERF_STATUS_NO_INIT) {
        goto default_eval;
    }
    PyCodeObject *co = _PyFrame_GetCode(frame);
    py_trampoline f = NULL;
    assert(extra_code_index != -1);
    int ret = _PyCode_GetExtra((PyObject *)co, extra_code_index, (void **)&f);
    if (ret != 0 || f == NULL) {
        // This is the first time we see this code object, so we need
        // to compile a trampoline for it.
        py_trampoline new_trampoline = compile_trampoline();
        if (new_trampoline == NULL) {
            goto default_eval;
        }
        trampoline_api.write_state(trampoline_api.state, new_trampoline,
                                   perf_code_arena->code_size, co);
        _PyCode_SetExtra((PyObject *)co, extra_code_index,
                         (void *)new_trampoline);
        f = new_trampoline;
    }
    assert(f != NULL);
    return f(ts, frame, throw,
             prev_eval_frame != NULL ? prev_eval_frame : _PyEval_EvalFrameDefault);
default_eval:
    // Something failed, fall back to the default evaluator.
    if (prev_eval_frame) {
        return prev_eval_frame(ts, frame, throw);
    }
    return _PyEval_EvalFrameDefault(ts, frame, throw);
}
#endif  // PY_HAVE_PERF_TRAMPOLINE

int PyUnstable_PerfTrampoline_CompileCode(PyCodeObject *co)
{
#ifdef PY_HAVE_PERF_TRAMPOLINE
    py_trampoline f = NULL;
    assert(extra_code_index != -1);
    int ret = _PyCode_GetExtra((PyObject *)co, extra_code_index, (void **)&f);
    if (ret != 0 || f == NULL) {
        py_trampoline new_trampoline = compile_trampoline();
        if (new_trampoline == NULL) {
            return 0;
        }
        trampoline_api.write_state(trampoline_api.state, new_trampoline,
                                   perf_code_arena->code_size, co);
        return _PyCode_SetExtra((PyObject *)co, extra_code_index,
                                (void *)new_trampoline);
    }
#endif // PY_HAVE_PERF_TRAMPOLINE
    return 0;
}

int
_PyIsPerfTrampolineActive(void)
{
#ifdef PY_HAVE_PERF_TRAMPOLINE
    PyThreadState *tstate = _PyThreadState_GET();
    return tstate->interp->eval_frame == py_trampoline_evaluator;
#endif
    return 0;
}

void
_PyPerfTrampoline_GetCallbacks(_PyPerf_Callbacks *callbacks)
{
    if (callbacks == NULL) {
        return;
    }
#ifdef PY_HAVE_PERF_TRAMPOLINE
    callbacks->init_state = trampoline_api.init_state;
    callbacks->write_state = trampoline_api.write_state;
    callbacks->free_state = trampoline_api.free_state;
#endif
    return;
}

int
_PyPerfTrampoline_SetCallbacks(_PyPerf_Callbacks *callbacks)
{
    if (callbacks == NULL) {
        return -1;
    }
#ifdef PY_HAVE_PERF_TRAMPOLINE
    if (trampoline_api.state) {
        _PyPerfTrampoline_Fini();
    }
    trampoline_api.init_state = callbacks->init_state;
    trampoline_api.write_state = callbacks->write_state;
    trampoline_api.free_state = callbacks->free_state;
    trampoline_api.state = NULL;
#endif
    return 0;
}
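
/* A minimal sketch of plugging in a custom backend through this API (the
   my_* names are illustrative, not part of this file):

       static void *my_init(void) { return NULL; }
       static void my_write(void *state, const void *code_addr,
                            unsigned int code_size, PyCodeObject *co) {
           // Record (code_addr, code_size, co) in whatever format the
           // target profiler expects.
       }
       static int my_free(void *state) { return 0; }

       _PyPerf_Callbacks my_callbacks = { &my_init, &my_write, &my_free };
       _PyPerfTrampoline_SetCallbacks(&my_callbacks);
       _PyPerfTrampoline_Init(1);
*/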

int
_PyPerfTrampoline_Init(int activate)
{
#ifdef PY_HAVE_PERF_TRAMPOLINE
    PyThreadState *tstate = _PyThreadState_GET();
    if (!activate) {
        _PyInterpreterState_SetEvalFrameFunc(tstate->interp, prev_eval_frame);
        perf_status = PERF_STATUS_NO_INIT;
    }
    else if (tstate->interp->eval_frame != py_trampoline_evaluator) {
        prev_eval_frame = _PyInterpreterState_GetEvalFrameFunc(tstate->interp);
        _PyInterpreterState_SetEvalFrameFunc(tstate->interp, py_trampoline_evaluator);
        extra_code_index = _PyEval_RequestCodeExtraIndex(NULL);
        if (extra_code_index == -1) {
            return -1;
        }
        if (trampoline_api.state == NULL && trampoline_api.init_state != NULL) {
            trampoline_api.state = trampoline_api.init_state();
        }
        if (new_code_arena() < 0) {
            return -1;
        }
        perf_status = PERF_STATUS_OK;
    }
#endif
    return 0;
}
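
/* At the Python level this machinery is typically driven through
   sys.activate_stack_trampoline("perf") / sys.deactivate_stack_trampoline(),
   or by starting the interpreter with -X perf; those entry points end up
   installing callbacks via _PyPerfTrampoline_SetCallbacks() and calling
   _PyPerfTrampoline_Init(). */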

int
_PyPerfTrampoline_Fini(void)
{
#ifdef PY_HAVE_PERF_TRAMPOLINE
    if (perf_status != PERF_STATUS_OK) {
        return 0;
    }
    PyThreadState *tstate = _PyThreadState_GET();
    if (tstate->interp->eval_frame == py_trampoline_evaluator) {
        _PyInterpreterState_SetEvalFrameFunc(tstate->interp, NULL);
    }
    if (perf_status == PERF_STATUS_OK) {
        trampoline_api.free_state(trampoline_api.state);
        perf_trampoline_type = PERF_TRAMPOLINE_UNSET;
    }
    extra_code_index = -1;
    perf_status = PERF_STATUS_NO_INIT;
#endif
    return 0;
}

void _PyPerfTrampoline_FreeArenas(void) {
#ifdef PY_HAVE_PERF_TRAMPOLINE
    free_code_arenas();
#endif
    return;
}

int
PyUnstable_PerfTrampoline_SetPersistAfterFork(int enable)
{
#ifdef PY_HAVE_PERF_TRAMPOLINE
    persist_after_fork = enable;
    return persist_after_fork;
#endif
    return 0;
}

PyStatus
_PyPerfTrampoline_AfterFork_Child(void)
{
#ifdef PY_HAVE_PERF_TRAMPOLINE
    if (persist_after_fork) {
        if (perf_trampoline_type != PERF_TRAMPOLINE_TYPE_MAP) {
            return PyStatus_Error("Failed to copy perf map file as perf trampoline type is not type map.");
        }
        _PyPerfTrampoline_Fini();
        char filename[256];
        pid_t parent_pid = getppid();
        snprintf(filename, sizeof(filename), "/tmp/perf-%d.map", parent_pid);
        if (PyUnstable_CopyPerfMapFile(filename) != 0) {
            return PyStatus_Error("Failed to copy perf map file.");
        }
    } else {
        // Restart the trampoline map file in the child.
        int was_active = _PyIsPerfTrampolineActive();
        _PyPerfTrampoline_Fini();
        if (was_active) {
            _PyPerfTrampoline_Init(1);
        }
    }
#endif
    return PyStatus_Ok();
}