Fuzz introspector
For issues and ideas: https://github.com/ossf/fuzz-introspector/issues

Project functions overview

The following table shows data for each function in the project. It includes every function present in the fuzzer executables, so some entries may come from third-party libraries.

For further technical details on the meaning of the columns in the table below, please see the Glossary.

Func name Functions filename Args Function call depth Reached by Fuzzers Runtime reached by Fuzzers Combined reached by Fuzzers Fuzzers runtime hit Func lines hit % I Count BB Count Cyclomatic complexity Functions reached Reached by functions Accumulated cyclomatic complexity Undiscovered complexity

Fuzzer details

Fuzzer: fuzz_apply_template

Call tree

The call tree shows the control flow of the fuzzer. It is overlaid with coverage information to show how much of the code that the fuzzer can potentially reach is actually covered at runtime. Below is a link to a detailed call tree visualisation as well as a bitmap showing a high-level view of the call tree. For further information about these topics, please see the glossary entries for full calltree and calltree overview.

Call tree overview bitmap:

The distribution of callsites by color is:
Color Runtime hitcount Callsite count Percentage
red 0 2 2.29%
gold [1:9] 4 4.59%
yellow [10:29] 12 13.7%
greenyellow [30:49] 10 11.4%
lawngreen 50+ 59 67.8%
All colors 87 100%
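The hitcount ranges in the table map directly to the colors, and the percentage column is derived from the raw callsite counts. A small sketch of both (counts taken from the table above; note the report appears to truncate rather than round, e.g. 2/87 ≈ 2.2988% is shown as 2.29%):

```python
def callsite_color(hitcount):
    # Thresholds taken from the table above.
    if hitcount == 0:
        return "red"
    if hitcount <= 9:
        return "gold"
    if hitcount <= 29:
        return "yellow"
    if hitcount <= 49:
        return "greenyellow"
    return "lawngreen"

# Recompute the percentage column from the raw callsite counts.
counts = {"red": 2, "gold": 4, "yellow": 12, "greenyellow": 10, "lawngreen": 59}
total = sum(counts.values())  # 87 callsites in total
percentages = {color: 100 * n / total for color, n in counts.items()}
```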

Fuzz blockers

The following nodes represent call sites where fuzz blockers occur.

Number of callsites blocked Calltree index Parent function Callsite Largest blocked function
1 55 llm_chat_detect_template(std::__1::basic_string , std::__1::allocator > const&) call site: 00055
1 75 llm_chat_apply_template(llm_chat_template, std::__1::vector > const&, std::__1::basic_string , std::__1::allocator >&, bool) call site: 00075
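An illustrative sketch of what the blocker table measures: a fuzz blocker is a callsite with zero runtime hits, and the blocked amount is the number of further callsites in its subtree. The tree representation below is hypothetical, not Fuzz Introspector's actual data model:

```python
# Hypothetical calltree nodes: {"name": str, "hitcount": int, "children": [...]}.
def blocked_subtree_size(node):
    """Count all callsites in the subtree rooted at `node`, including it."""
    return 1 + sum(blocked_subtree_size(c) for c in node["children"])

def find_fuzz_blockers(root):
    """Collect never-executed callsites, largest blocked subtree first."""
    blockers = []

    def walk(node):
        if node["hitcount"] == 0:
            # Everything below this callsite is blocked by it; report the
            # blocker once rather than re-reporting its descendants.
            blockers.append((blocked_subtree_size(node) - 1, node["name"]))
            return
        for child in node["children"]:
            walk(child)

    walk(root)
    return sorted(blockers, reverse=True)
```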

Runtime coverage analysis

Covered functions: 7
Functions that are reachable but not covered: 5
Reachable functions: 17
Percentage of reachable functions covered: 70.59%
NB: The sum of covered functions and functions that are reachable but not covered need not equal the number of reachable functions. This is because the reachability analysis is an approximation, so at runtime some functions may be covered that are not included in the reachability analysis. This is a limitation of our static analysis capabilities.
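The reported percentage appears to be (reachable − reachable-but-not-covered) / reachable rather than covered / reachable; this is an inference from the figures in this report, not a documented formula:

```python
# Recompute "Percentage of reachable functions covered" for fuzz_apply_template
# from the two reachability figures above (inferred formula, see note above).
reachable = 17
reachable_not_covered = 5
pct = 100 * (reachable - reachable_not_covered) / reachable
print(f"{pct:.2f}%")  # 70.59%, matching the report
```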
Function name source code lines source lines hit percentage hit

Files reached

filename functions hit
/src/llama.cpp/fuzzers/fuzz_apply_template.cpp 1
/src/llama.cpp/src/llama.cpp 1
/src/llama.cpp/src/llama-chat.cpp 5

Fuzzer: fuzz_grammar

Call tree

The call tree shows the control flow of the fuzzer. It is overlaid with coverage information to show how much of the code that the fuzzer can potentially reach is actually covered at runtime. Below is a link to a detailed call tree visualisation as well as a bitmap showing a high-level view of the call tree. For further information about these topics, please see the glossary entries for full calltree and calltree overview.

Call tree overview bitmap:

The distribution of callsites by color is:
Color Runtime hitcount Callsite count Percentage
red 0 402 76.8%
gold [1:9] 42 8.03%
yellow [10:29] 4 0.76%
greenyellow [30:49] 12 2.29%
lawngreen 50+ 63 12.0%
All colors 523 100%

Fuzz blockers

The following nodes represent call sites where fuzz blockers occur.

Number of callsites blocked Calltree index Parent function Callsite Largest blocked function
378 72 parse_token(llama_vocab const*, char const*) call site: 00072 ggml_abort
5 38 parse_char(char const*) call site: 00038 __cxa_allocate_exception
5 50 llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string , std::__1::allocator > const&, std::__1::vector >&, bool) call site: 00050 __cxa_allocate_exception
4 33 parse_char(char const*) call site: 00033 __cxa_allocate_exception
4 468 llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string , std::__1::allocator > const&, std::__1::vector >&, bool) call site: 00468 __cxa_allocate_exception
4 480 llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string , std::__1::allocator > const&, std::__1::vector >&, bool) call site: 00480 __cxa_allocate_exception
1 26 llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string , std::__1::allocator > const&, std::__1::vector >&, bool) call site: 00026
1 498 llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string , std::__1::allocator > const&, std::__1::vector >&, bool) call site: 00498

Runtime coverage analysis

Covered functions: 19
Functions that are reachable but not covered: 233
Reachable functions: 275
Percentage of reachable functions covered: 15.27%
NB: The sum of covered functions and functions that are reachable but not covered need not equal the number of reachable functions. This is because the reachability analysis is an approximation, so at runtime some functions may be covered that are not included in the reachability analysis. This is a limitation of our static analysis capabilities.
Function name source code lines source lines hit percentage hit

Files reached

filename functions hit
/src/llama.cpp/fuzzers/fuzz_grammar.cpp 1
/src/llama.cpp/src/llama-grammar.h 2
/src/llama.cpp/src/llama-grammar.cpp 17
/src/llama.cpp/src/llama-vocab.cpp 65
/src/llama.cpp/ggml/src/ggml.c 1
/src/llama.cpp/src/llama-impl.cpp 3
/src/llama.cpp/src/unicode.cpp 32
/usr/local/bin/../include/c++/v1/stdexcept 1
/src/llama.cpp/src/unicode.h 3

Fuzzer: fuzz_load_model

Call tree

The call tree shows the control flow of the fuzzer. It is overlaid with coverage information to show how much of the code that the fuzzer can potentially reach is actually covered at runtime. Below is a link to a detailed call tree visualisation as well as a bitmap showing a high-level view of the call tree. For further information about these topics, please see the glossary entries for full calltree and calltree overview.

Call tree overview bitmap:

The distribution of callsites by color is:
Color Runtime hitcount Callsite count Percentage
red 0 8012 98.9%
gold [1:9] 63 0.77%
yellow [10:29] 4 0.04%
greenyellow [30:49] 6 0.07%
lawngreen 50+ 13 0.16%
All colors 8098 100%

Fuzz blockers

The following nodes represent call sites where fuzz blockers occur.

Number of callsites blocked Calltree index Parent function Callsite Largest blocked function
7974 104 llama_model_load_from_file_impl(std::__1::basic_string , std::__1::allocator > const&, std::__1::vector , std::__1::allocator >, std::__1::allocator , std::__1::allocator > > >&, llama_model_params) call site: 00104 gguf_init_from_file
12 6 ggml_init call site: 00006 ggml_abort
10 93 ggml_backend_dev_type call site: 00093 ggml_backend_dev_backend_reg
4 36 LLVMFuzzerTestOneInput call site: 00036 llama_model_load_from_file
2 8080 llama_model::~llama_model() call site: 08080 llama_free_model
1 19 ggml_init call site: 00019 ggml_log_internal
1 21 ggml_aligned_malloc call site: 00021 ggml_log_internal
1 24 ggml_init call site: 00024 ggml_free
1 49 ggml_cpu_init call site: 00049 clock_gettime
1 55 ggml_compute_fp16_to_fp32 call site: 00055 fp32_from_bits
1 59 ggml_cpu_init call site: 00059 fp32_to_bits
1 77 get_reg() call site: 00077

Runtime coverage analysis

Covered functions: 55
Functions that are reachable but not covered: 914
Reachable functions: 974
Percentage of reachable functions covered: 6.16%
NB: The sum of covered functions and functions that are reachable but not covered need not equal the number of reachable functions. This is because the reachability analysis is an approximation, so at runtime some functions may be covered that are not included in the reachability analysis. This is a limitation of our static analysis capabilities.
Function name source code lines source lines hit percentage hit

Files reached

filename functions hit
/src/llama.cpp/fuzzers/fuzz_load_model.cpp 2
/src/llama.cpp/src/llama.cpp 10
/src/llama.cpp/ggml/src/ggml.c 69
/src/llama.cpp/ggml/src/ggml-threading.cpp 2
/src/llama.cpp/ggml/src/ggml-backend-reg.cpp 13
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp 1
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c 1
/src/llama.cpp/ggml/src/./ggml-impl.h 4
/src/llama.cpp/ggml/src/ggml-cpu/vec.h 2
/src/llama.cpp/ggml/src/ggml-backend.cpp 48
/src/llama.cpp/src/llama-impl.cpp 10
/src/llama.cpp/src/llama-model.cpp 28
/src/llama.cpp/src/llama-vocab.cpp 87
/src/llama.cpp/src/llama-model-loader.cpp 92
/src/llama.cpp/src/llama-arch.cpp 9
/src/llama.cpp/ggml/src/gguf.cpp 79
/src/llama.cpp/src/llama-mmap.cpp 24
/src/llama.cpp/src/llama-model-loader.h 2
/src/llama.cpp/src/llama-hparams.cpp 11
/src/llama.cpp/src/unicode.cpp 34
/usr/local/bin/../include/c++/v1/stdexcept 1
/src/llama.cpp/src/unicode.h 3
/src/llama.cpp/src/llama-arch.h 5
/src/llama.cpp/ggml/src/ggml-impl.h 1
/src/llama.cpp/ggml/src/ggml-alloc.c 7
/src/llama.cpp/ggml/src/ggml-quants.c 10

Fuzzer: fuzz_structured

Call tree

The call tree shows the control flow of the fuzzer. It is overlaid with coverage information to show how much of the code that the fuzzer can potentially reach is actually covered at runtime. Below is a link to a detailed call tree visualisation as well as a bitmap showing a high-level view of the call tree. For further information about these topics, please see the glossary entries for full calltree and calltree overview.

Call tree overview bitmap:

The distribution of callsites by color is:
Color Runtime hitcount Callsite count Percentage
red 0 8012 98.8%
gold [1:9] 53 0.65%
yellow [10:29] 6 0.07%
greenyellow [30:49] 16 0.19%
lawngreen 50+ 16 0.19%
All colors 8103 100%

Fuzz blockers

The following nodes represent call sites where fuzz blockers occur.

Number of callsites blocked Calltree index Parent function Callsite Largest blocked function
7974 108 llama_model_load_from_file_impl(std::__1::basic_string , std::__1::allocator > const&, std::__1::vector , std::__1::allocator >, std::__1::allocator , std::__1::allocator > > >&, llama_model_params) call site: 00108 gguf_init_from_file
12 6 ggml_init call site: 00006 ggml_abort
10 97 ggml_backend_dev_type call site: 00097 ggml_backend_dev_backend_reg
4 40 LLVMFuzzerTestOneInput call site: 00040 llama_model_load_from_file
2 8084 llama_model::~llama_model() call site: 08084 llama_free_model
1 19 ggml_init call site: 00019 ggml_log_internal
1 21 ggml_aligned_malloc call site: 00021 ggml_log_internal
1 24 ggml_init call site: 00024 ggml_free
1 53 ggml_cpu_init call site: 00053 clock_gettime
1 59 ggml_compute_fp16_to_fp32 call site: 00059 fp32_from_bits
1 63 ggml_cpu_init call site: 00063 fp32_to_bits
1 81 get_reg() call site: 00081

Runtime coverage analysis

Covered functions: 55
Functions that are reachable but not covered: 915
Reachable functions: 975
Percentage of reachable functions covered: 6.15%
NB: The sum of covered functions and functions that are reachable but not covered need not equal the number of reachable functions. This is because the reachability analysis is an approximation, so at runtime some functions may be covered that are not included in the reachability analysis. This is a limitation of our static analysis capabilities.
Function name source code lines source lines hit percentage hit

Files reached

filename functions hit
/src/llama.cpp/fuzzers/fuzz_structured.cpp 2
/src/llama.cpp/src/llama.cpp 10
/src/llama.cpp/ggml/src/ggml.c 69
/src/llama.cpp/ggml/src/ggml-threading.cpp 2
/src/llama.cpp/ggml/src/ggml-backend-reg.cpp 13
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp 1
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c 1
/src/llama.cpp/ggml/src/./ggml-impl.h 4
/src/llama.cpp/ggml/src/ggml-cpu/vec.h 2
/src/llama.cpp/ggml/src/ggml-backend.cpp 48
/src/llama.cpp/src/llama-impl.cpp 10
/src/llama.cpp/src/llama-model.cpp 28
/src/llama.cpp/src/llama-vocab.cpp 87
/src/llama.cpp/src/llama-model-loader.cpp 92
/src/llama.cpp/src/llama-arch.cpp 9
/src/llama.cpp/ggml/src/gguf.cpp 79
/src/llama.cpp/src/llama-mmap.cpp 24
/src/llama.cpp/src/llama-model-loader.h 2
/src/llama.cpp/src/llama-hparams.cpp 11
/src/llama.cpp/src/unicode.cpp 34
/usr/local/bin/../include/c++/v1/stdexcept 1
/src/llama.cpp/src/unicode.h 3
/src/llama.cpp/src/llama-arch.h 5
/src/llama.cpp/ggml/src/ggml-impl.h 1
/src/llama.cpp/ggml/src/ggml-alloc.c 7
/src/llama.cpp/ggml/src/ggml-quants.c 10

Fuzzer: fuzz_inference

Call tree

The call tree shows the control flow of the fuzzer. It is overlaid with coverage information to show how much of the code that the fuzzer can potentially reach is actually covered at runtime. Below is a link to a detailed call tree visualisation as well as a bitmap showing a high-level view of the call tree. For further information about these topics, please see the glossary entries for full calltree and calltree overview.

Call tree overview bitmap:

The distribution of callsites by color is:
Color Runtime hitcount Callsite count Percentage
red 0 8418 95.0%
gold [1:9] 101 1.14%
yellow [10:29] 9 0.10%
greenyellow [30:49] 3 0.03%
lawngreen 50+ 325 3.66%
All colors 8856 100%

Fuzz blockers

The following nodes represent call sites where fuzz blockers occur.

Number of callsites blocked Calltree index Parent function Callsite Largest blocked function
6213 1869 llama_model_load(std::__1::basic_string , std::__1::allocator > const&, std::__1::vector , std::__1::allocator >, std::__1::allocator , std::__1::allocator > > >&, llama_model&, llama_model_params&) call site: 01869 llama_supports_gpu_offload
1092 776 llama_model::load_hparams(llama_model_loader&) call site: 00776 llama_model_rope_type
741 8099 llama_model::~llama_model() call site: 08099 llama_new_context_with_model
78 442 GGUFMeta::GKV ::get_kv(gguf_context const*, int) call site: 00442 gguf_init_from_file
30 677 GGUFMeta::GKV ::get_kv(gguf_context const*, int) call site: 00677 gguf_find_key
16 27 LLVMFuzzerTestOneInput call site: 00027
15 661 llama_model::load_hparams(llama_model_loader&) call site: 00661 gguf_find_key
14 352 _ZN8GGUFMeta3GKVINSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEEE12try_overrideIS7_EENS1_9enable_ifIXsr3std7is_sameIT_S7_EE5valueEbE4typeERS7_PK23llama_model_kv_override call site: 00352 __cxa_allocate_exception
12 521 GGUFMeta::GKV ::get_kv(gguf_context const*, int) call site: 00521 gguf_get_val_i32
11 112 ggml_backend_dev_type call site: 00112 ggml_backend_dev_backend_reg
11 753 llama_model::load_hparams(llama_model_loader&) call site: 00753 ggml_abort
10 710 llama_model::load_hparams(llama_model_loader&) call site: 00710 _ZN8GGUFMeta3GKVIbE12try_overrideIbEENSt3__19enable_ifIXsr3std7is_sameIT_bEE5valueEbE4typeERS5_PK23llama_model_kv_override

Runtime coverage analysis

Covered functions: 211
Functions that are reachable but not covered: 949
Reachable functions: 1268
Percentage of reachable functions covered: 25.16%
NB: The sum of covered functions and functions that are reachable but not covered need not equal the number of reachable functions. This is because the reachability analysis is an approximation, so at runtime some functions may be covered that are not included in the reachability analysis. This is a limitation of our static analysis capabilities.
Function name source code lines source lines hit percentage hit

Files reached

filename functions hit
/src/llama.cpp/fuzzers/fuzz_inference.cpp 1
/src/llama.cpp/src/llama.cpp 11
/src/llama.cpp/ggml/src/ggml.c 98
/src/llama.cpp/ggml/src/ggml-threading.cpp 2
/src/llama.cpp/common/common.h 13
/src/llama.cpp/common/common.cpp 2
/src/llama.cpp/src/llama-model.cpp 36
/src/llama.cpp/ggml/src/ggml-backend-reg.cpp 14
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp 1
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c 1
/src/llama.cpp/ggml/src/./ggml-impl.h 4
/src/llama.cpp/ggml/src/ggml-cpu/vec.h 2
/src/llama.cpp/ggml/src/ggml-backend.cpp 76
/src/llama.cpp/src/llama-impl.cpp 10
/src/llama.cpp/src/llama-vocab.cpp 87
/src/llama.cpp/src/llama-model-loader.cpp 92
/src/llama.cpp/src/llama-arch.cpp 10
/src/llama.cpp/ggml/src/gguf.cpp 79
/src/llama.cpp/src/llama-mmap.cpp 24
/src/llama.cpp/src/llama-model-loader.h 2
/src/llama.cpp/src/llama-hparams.cpp 18
/src/llama.cpp/src/unicode.cpp 34
/usr/local/bin/../include/c++/v1/stdexcept 1
/src/llama.cpp/src/unicode.h 3
/src/llama.cpp/src/llama-arch.h 5
/src/llama.cpp/ggml/src/ggml-impl.h 14
/src/llama.cpp/ggml/src/ggml-alloc.c 34
/src/llama.cpp/ggml/src/ggml-quants.c 10
/src/llama.cpp/src/llama-context.cpp 16
/src/llama.cpp/src/llama-adapter.h 2
/src/llama.cpp/src/llama-graph.h 6
/src/llama.cpp/src/llama-memory-recurrent.cpp 4
/src/llama.cpp/src/llama-memory.h 2
/src/llama.cpp/src/llama-memory-hybrid.cpp 1
/src/llama.cpp/src/llama-kv-cache.cpp 4
/src/llama.cpp/src/llama-kv-cache.h 3
/src/llama.cpp/src/llama-kv-cells.h 3
/src/llama.cpp/src/llama-kv-cache-iswa.cpp 1
/src/llama.cpp/src/llama-graph.cpp 8
/src/llama.cpp/src/llama-hparams.h 1
/src/llama.cpp/src/llama-batch.h 5
/src/llama.cpp/src/llama-batch.cpp 5
/src/llama.cpp/ggml/src/ggml-opt.cpp 2

Fuzzer: fuzz_json_to_grammar

Call tree

The call tree shows the control flow of the fuzzer. It is overlaid with coverage information to show how much of the code that the fuzzer can potentially reach is actually covered at runtime. Below is a link to a detailed call tree visualisation as well as a bitmap showing a high-level view of the call tree. For further information about these topics, please see the glossary entries for full calltree and calltree overview.

Call tree overview bitmap:

The distribution of callsites by color is:
Color Runtime hitcount Callsite count Percentage
red 0 368 43.2%
gold [1:9] 3 0.35%
yellow [10:29] 8 0.94%
greenyellow [30:49] 3 0.35%
lawngreen 50+ 469 55.1%
All colors 851 100%

Fuzz blockers

The following nodes represent call sites where fuzz blockers occur.

Number of callsites blocked Calltree index Parent function Callsite Largest blocked function
93 472 void nlohmann::json_abi_v3_12_0::detail::external_constructor<(nlohmann::json_abi_v3_12_0::detail::value_t)6>::construct , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> >(nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>&, nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::number_unsigned_t) call site: 00472 _ZN8nlohmann16json_abi_v3_12_06detail11parse_error6createIDnTnNSt3__19enable_ifIXsr21is_basic_json_contextIT_EE5valueEiE4typeELi0EEES2_iRKNS1_10position_tERKNS4_12basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEES6_
39 242 nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::end() call site: 00242 _ZN8nlohmann16json_abi_v3_12_010basic_jsonINSt3__13mapENS2_6vectorENS2_12basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEblmdS8_NS0_14adl_serializerENS4_IhNS8_IhEEEEvE5eraseINS0_6detail9iter_implISE_EETnNS2_9enable_ifIXoosr3std7is_sameIT_SI_EE5valuesr3std7is_sameISK_NSH_IKSE_EEEE5valueEiE4typeELi0EEESK_SK_
21 370 _ZN8nlohmann16json_abi_v3_12_010basic_jsonINSt3__13mapENS2_6vectorENS2_12basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEblmdS8_NS0_14adl_serializerENS4_IhNS8_IhEEEEvEC2IRSA_SA_TnNS2_9enable_ifIXaantsr6detail13is_basic_jsonIT0_EE5valuesr6detail18is_compatible_typeISE_SI_EE5valueEiE4typeELi0EEEOT_ call site: 00370 _ZN8nlohmann16json_abi_v3_12_06detail11parse_error6createIDnTnNSt3__19enable_ifIXsr21is_basic_json_contextIT_EE5valueEiE4typeELi0EEES2_iRKNS1_10position_tERKNS4_12basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEES6_
16 392 std::__1::basic_string , std::__1::allocator > nlohmann::json_abi_v3_12_0::detail::concat , std::__1::allocator >, char const (&) [23], std::__1::basic_string , std::__1::allocator > >(char const (&) [23], std::__1::basic_string , std::__1::allocator >&&) call site: 00392 _ZN8nlohmann16json_abi_v3_12_06detail12out_of_range6createIPNS0_10basic_jsonINSt3__13mapENS5_6vectorENS5_12basic_stringIcNS5_11char_traitsIcEENS5_9allocatorIcEEEEblmdSB_NS0_14adl_serializerENS7_IhNSB_IhEEEEvEETnNS5_9enable_ifIXsr21is_basic_json_contextIT_EE5valueEiE4typeELi0EEES2_iRKSD_SK_
15 218 nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::json_value::json_value(std::__1::basic_string , std::__1::allocator > const&) call site: 00218
15 345 nlohmann::json_abi_v3_12_0::detail::parse_error::parse_error(int, unsigned long, char const*) call site: 00345 _ZN8nlohmann16json_abi_v3_12_010basic_jsonINSt3__13mapENS2_6vectorENS2_12basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEblmdS8_NS0_14adl_serializerENS4_IhNS8_IhEEEEvEC2IRSA_SA_TnNS2_9enable_ifIXaantsr6detail13is_basic_jsonIT0_EE5valuesr6detail18is_compatible_typeISE_SI_EE5valueEiE4typeELi0EEEOT_
12 801 nlohmann::json_abi_v3_12_0::byte_container_with_subtype > > const& nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::get_ref_impl > > const&, nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> const>(nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> const&) call site: 00801 _ZN8nlohmann16json_abi_v3_12_014adl_serializerINS0_27byte_container_with_subtypeINSt3__16vectorIhNS3_9allocatorIhEEEEEEvE7to_jsonINS0_10basic_jsonINS0_11ordered_mapES4_NS3_12basic_stringIcNS3_11char_traitsIcEENS5_IcEEEEblmdS5_S1_S7_vEERKS8_EEDTcmclL_ZNS0_7to_jsonEEfp_clsr3stdE7forwardIT0_Efp0_EEcvv_EERT_OSL_
10 183 nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::operator=(nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>) call site: 00183 __cxa_allocate_exception
10 411 _ZN8nlohmann16json_abi_v3_12_06detail12out_of_range6createIDnTnNSt3__19enable_ifIXsr21is_basic_json_contextIT_EE5valueEiE4typeELi0EEES2_iRKNS4_12basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEES6_ call site: 00411 _ZN8nlohmann16json_abi_v3_12_010basic_jsonINSt3__13mapENS2_6vectorENS2_12basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEblmdS8_NS0_14adl_serializerENS4_IhNS8_IhEEEEvEC2IRddTnNS2_9enable_ifIXaantsr6detail13is_basic_jsonIT0_EE5valuesr6detail18is_compatible_typeISE_SI_EE5valueEiE4typeELi0EEEOT_
8 172 nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::data::~data() call site: 00172
8 204 nlohmann::json_abi_v3_12_0::detail::out_of_range::out_of_range(int, char const*) call site: 00204 __cxa_throw
8 440 void nlohmann::json_abi_v3_12_0::detail::external_constructor<(nlohmann::json_abi_v3_12_0::detail::value_t)4>::construct , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> >(nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>&, nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::boolean_t) call site: 00440 _ZN8nlohmann16json_abi_v3_12_010basic_jsonINSt3__13mapENS2_6vectorENS2_12basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEblmdS8_NS0_14adl_serializerENS4_IhNS8_IhEEEEvEC2IRllTnNS2_9enable_ifIXaantsr6detail13is_basic_jsonIT0_EE5valuesr6detail18is_compatible_typeISE_SI_EE5valueEiE4typeELi0EEEOT_

Runtime coverage analysis

Covered functions: 511
Functions that are reachable but not covered: 160
Reachable functions: 675
Percentage of reachable functions covered: 76.3%
NB: The sum of covered functions and functions that are reachable but not covered need not equal the number of reachable functions. This is because the reachability analysis is an approximation, so at runtime some functions may be covered that are not included in the reachability analysis. This is a limitation of our static analysis capabilities.
Function name source code lines source lines hit percentage hit

Files reached

filename functions hit
/src/llama.cpp/fuzzers/fuzz_json_to_grammar.cpp 1
/src/llama.cpp/vendor/nlohmann/json.hpp 375
/usr/local/bin/../include/c++/v1/__exception/exception.h 2
/src/llama.cpp/common/json-schema-to-grammar.cpp 6
/src/llama.cpp/common/common.cpp 1
/usr/local/bin/../include/c++/v1/stdexcept 1
/src/llama.cpp/common/json-schema-to-grammar.h 1

Analyses and suggestions

Optimal target analysis

Remaining optimal interesting functions

The following table shows a list of functions that are optimal targets. Optimal targets are identified by finding the functions that, in combination, yield high code coverage.
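A minimal sketch of one way such targets can be selected: a greedy set cover that repeatedly picks the function whose statically reachable set adds the most not-yet-covered functions. The function names and reachability sets below are illustrative only, and the real analysis also weighs cyclomatic complexity:

```python
# Illustrative reachability sets (not taken from this report): each candidate
# function maps to the set of functions it can statically reach.
reachable = {
    "ggml_backend_cpu_graph_compute": {"a", "b", "c", "d", "e"},
    "common_init_from_params": {"d", "e", "f", "g"},
    "llama_model_save_to_file": {"g", "h"},
}

def pick_optimal_targets(reachable):
    """Greedily pick targets until no candidate adds new coverage."""
    covered, picks = set(), []
    while True:
        best = max(reachable, key=lambda f: len(reachable[f] - covered))
        gain = reachable[best] - covered
        if not gain:
            return picks
        picks.append(best)
        covered |= gain

print(pick_optimal_targets(reachable))
```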

Func name Functions filename Arg count Args Function depth hitcount instr count bb count cyclomatic complexity Reachable functions Incoming references total cyclomatic complexity Unreached complexity
ggml_backend_cpu_graph_compute(ggml_backend*,ggml_cgraph*) /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp 2 ['N/A', 'N/A'] 15 0 59 9 4 1720 0 7469 7327
common_init_from_params(common_params&) /src/llama.cpp/common/common.cpp 2 ['N/A', 'N/A'] 19 0 700 171 151 1589 0 19207 3169
common_schema_converter::visit(nlohmann::json_abi_v3_12_0::basic_json ,std::__1::allocator >,bool,long,unsignedlong,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>const&,std::__1::basic_string ,std::__1::allocator >const&) /src/llama.cpp/common/json-schema-to-grammar.cpp 4 ['N/A', 'N/A', 'N/A', 'N/A'] 18 0 2479 632 505 507 5 2821 2515
ggml_quantize_chunk /src/llama.cpp/ggml/src/ggml.c 7 ['int', 'N/A', 'N/A', 'size_t', 'size_t', 'size_t', 'N/A'] 4 0 274 41 2 99 0 2265 2220
llm_build_qwen3next::llm_build_qwen3next(llama_modelconst&,llm_graph_paramsconst&) /src/llama.cpp/src/models/qwen3next.cpp 3 ['N/A', 'N/A', 'N/A'] 12 0 266 62 33 305 0 839 538
llama_opt_epoch /src/llama.cpp/src/llama-context.cpp 7 ['N/A', 'N/A', 'N/A', 'N/A', 'size_t', 'N/A', 'N/A'] 14 0 12 3 2 394 0 2292 493
llama_model_save_to_file /src/llama.cpp/src/llama.cpp 2 ['N/A', 'N/A'] 8 0 53 12 9 212 0 541 364

Implementing fuzzers that target the above functions would improve the project's reachability to the following levels:

Functions statically reachable by fuzzers: 59.0% (2834 / 4804)
Cyclomatic complexity statically reachable by fuzzers: 70.0% (35956 / 51437)
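The projected percentages follow from the raw counts; recomputing them is a quick sanity check (the complexity ratio comes out at about 69.9%, which the summary rounds to 70.0%):

```python
# Recompute the projected static-reachability percentages from the raw counts.
functions_pct = 100 * 2834 / 4804
complexity_pct = 100 * 35956 / 51437
print(f"functions:  {functions_pct:.1f}%")   # 59.0%
print(f"complexity: {complexity_pct:.1f}%")  # ~69.9%
```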

All functions overview

If you implement fuzzers for these functions, the status of all functions in the project will be:

Func name Functions filename Args Function call depth Reached by Fuzzers Runtime reached by Fuzzers Combined reached by Fuzzers Fuzzers runtime hit Func lines hit % I Count BB Count Cyclomatic complexity Functions reached Reached by functions Accumulated cyclomatic complexity Undiscovered complexity

Fuzz engine guidance

This section provides heuristics that can be used as input to a fuzz engine when running a given fuzz target. The current focus is on providing input that is usable by libFuzzer.

/src/llama.cpp/fuzzers/fuzz_apply_template.cpp

Dictionary

Use this with the libFuzzer -dict=DICT.file flag
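The generated dictionary is not reproduced here. As a sketch, a libFuzzer dictionary is a plain-text file of name="value" entries; the tokens below are hypothetical chat-template markers, not taken from this report:

```shell
# Write a small libFuzzer dictionary (entry syntax: name="value").
cat > fuzz_apply_template.dict <<'EOF'
# Labels are arbitrary; values are byte sequences the fuzzer may splice in.
im_start="<|im_start|>"
im_end="<|im_end|>"
inst_open="[INST]"
EOF

# Then pass it to the target (illustrative invocation, not executed here):
# ./fuzz_apply_template -dict=fuzz_apply_template.dict corpus/
```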


Fuzzer function priority

Use one of these functions as input to libFuzzer via the -focus_function=NAME flag:

-focus_function=['llm_chat_detect_template(std::__1::basic_string, std::__1::allocator > const&)', 'llm_chat_apply_template(llm_chat_template, std::__1::vector > const&, std::__1::basic_string, std::__1::allocator >&, bool)']

/src/llama.cpp/fuzzers/fuzz_grammar.cpp

Dictionary

Use this with the libFuzzer -dict=DICT.file flag


Fuzzer function priority

Use one of these functions as input to libFuzzer via the -focus_function=NAME flag:

-focus_function=['parse_token(llama_vocab const*, char const*)', 'parse_char(char const*)', 'llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string, std::__1::allocator > const&, std::__1::vector >&, bool)', 'parse_char(char const*)', 'llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string, std::__1::allocator > const&, std::__1::vector >&, bool)', 'llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string, std::__1::allocator > const&, std::__1::vector >&, bool)', 'llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string, std::__1::allocator > const&, std::__1::vector >&, bool)', 'llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string, std::__1::allocator > const&, std::__1::vector >&, bool)']

/src/llama.cpp/fuzzers/fuzz_load_model.cpp

Dictionary

Use this with the libFuzzer -dict=DICT.file flag


Fuzzer function priority

Use one of these functions as input to libFuzzer via the -focus_function=NAME flag:

-focus_function=['llama_model_load_from_file_impl(std::__1::basic_string, std::__1::allocator > const&, std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > >&, llama_model_params)', 'ggml_init', 'ggml_backend_dev_type', 'LLVMFuzzerTestOneInput', 'llama_model::~llama_model()', 'ggml_aligned_malloc', 'ggml_cpu_init', 'ggml_compute_fp16_to_fp32']

/src/llama.cpp/fuzzers/fuzz_structured.cpp

Dictionary

Use this with the libFuzzer -dict=DICT.file flag


Fuzzer function priority

Use one of these functions as input to libFuzzer via the -focus_function=NAME flag:

-focus_function=['llama_model_load_from_file_impl(std::__1::basic_string, std::__1::allocator > const&, std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > >&, llama_model_params)', 'ggml_init', 'ggml_backend_dev_type', 'LLVMFuzzerTestOneInput', 'llama_model::~llama_model()', 'ggml_aligned_malloc', 'ggml_cpu_init', 'ggml_compute_fp16_to_fp32']

/src/llama.cpp/fuzzers/fuzz_inference.cpp

Dictionary

Use this with the libFuzzer -dict=DICT.file flag


Fuzzer function priority

Use one of these functions as input to libFuzzer via the -focus_function=NAME flag:

-focus_function=['llama_model_load(std::__1::basic_string, std::__1::allocator > const&, std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > >&, llama_model&, llama_model_params&)', 'llama_model::load_hparams(llama_model_loader&)', 'llama_model::~llama_model()', 'GGUFMeta::GKV::get_kv(gguf_context const*, int)', 'GGUFMeta::GKV::get_kv(gguf_context const*, int)', 'LLVMFuzzerTestOneInput', 'llama_model::load_hparams(llama_model_loader&)', '_ZN8GGUFMeta3GKVINSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEEE12try_overrideIS7_EENS1_9enable_ifIXsr3std7is_sameIT_S7_EE5valueEbE4typeERS7_PK23llama_model_kv_override', 'GGUFMeta::GKV::get_kv(gguf_context const*, int)', 'ggml_backend_dev_type']

/src/llama.cpp/fuzzers/fuzz_json_to_grammar.cpp

Dictionary

Use this with the libFuzzer -dict=DICT.file flag


Fuzzer function priority

Use one of these functions as input to libFuzzer via the -focus_function=NAME flag:

-focus_function=[
'void nlohmann::json_abi_v3_12_0::detail::external_constructor<(nlohmann::json_abi_v3_12_0::detail::value_t)6>::construct, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> >(nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>&, nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::number_unsigned_t)',
'nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::end()',
'_ZN8nlohmann16json_abi_v3_12_010basic_jsonINSt3__13mapENS2_6vectorENS2_12basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEblmdS8_NS0_14adl_serializerENS4_IhNS8_IhEEEEvEC2IRSA_SA_TnNS2_9enable_ifIXaantsr6detail13is_basic_jsonIT0_EE5valuesr6detail18is_compatible_typeISE_SI_EE5valueEiE4typeELi0EEEOT_',
'std::__1::basic_string, std::__1::allocator > nlohmann::json_abi_v3_12_0::detail::concat, std::__1::allocator >, char const (&) [23], std::__1::basic_string, std::__1::allocator > >(char const (&) [23], std::__1::basic_string, std::__1::allocator >&&)',
'nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::json_value::json_value(std::__1::basic_string, std::__1::allocator > const&)',
'nlohmann::json_abi_v3_12_0::detail::parse_error::parse_error(int, unsigned long, char const*)',
'nlohmann::json_abi_v3_12_0::byte_container_with_subtype > > const& nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::get_ref_impl > > const&, nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> const>(nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> const&)',
'nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::operator=(nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>)',
'_ZN8nlohmann16json_abi_v3_12_06detail12out_of_range6createIDnTnNSt3__19enable_ifIXsr21is_basic_json_contextIT_EE5valueEiE4typeELi0EEES2_iRKNS4_12basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEES6_',
'nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::data::~data()'
]
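The `-focus_function` flag is a standard libFuzzer option that biases input selection toward inputs whose execution reaches the named function. A hypothetical invocation (the fuzzer binary name and corpus directory are assumptions; only the flag itself is standard libFuzzer) might look like:

```shell
# Focus fuzzing on one of the reachable-but-uncovered functions listed above.
# "fuzz_apply_template" and "corpus/" are placeholder names for this project's
# fuzzer binary and its seed corpus; -focus_function is a real libFuzzer flag.
./fuzz_apply_template -focus_function='nlohmann::json_abi_v3_12_0::detail::parse_error::parse_error(int, unsigned long, char const*)' corpus/
```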

Runtime coverage analysis

This section shows analysis of runtime coverage data.

For further technical details on how this section is generated, please see the Glossary.

Complex functions with low coverage

Func name Function total lines Lines covered at runtime Percentage covered Reached by fuzzers
parse_token(llama_vocab const*, char const*) 38 19 50.0% ['fuzz_grammar']
llama_model_loader::llama_model_loader(std::__1::basic_string ,std::__1::allocator > const&,std::__1::vector ,std::__1::allocator >,std::__1::allocator ,std::__1::allocator >>>&,bool,bool,bool,llama_model_kv_override const*,llama_model_tensor_buft_override const*) 186 88 47.31% ['fuzz_structured', 'fuzz_load_model', 'fuzz_inference']
llama_model_ftype_name(llama_ftype) 41 9 21.95% ['fuzz_structured', 'fuzz_load_model', 'fuzz_inference']
llama_model::load_hparams(llama_model_loader&) 1641 72 4.387% ['fuzz_structured', 'fuzz_load_model', 'fuzz_inference']
nlohmann::json_abi_v3_12_0::basic_json ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>::json_value::json_value(nlohmann::json_abi_v3_12_0::detail::value_t) 60 17 28.33% ['fuzz_json_to_grammar']
nlohmann::json_abi_v3_12_0::detail::parser ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>,nlohmann::json_abi_v3_12_0::detail::iterator_input_adapter >::parse(bool,nlohmann::json_abi_v3_12_0::basic_json ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>&) 40 16 40.0% ['fuzz_json_to_grammar']
nlohmann::json_abi_v3_12_0::basic_json ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>::json_value::json_value(nlohmann::json_abi_v3_12_0::detail::value_t) 60 9 15.0% ['fuzz_json_to_grammar']
nlohmann::json_abi_v3_12_0::detail::iter_impl ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>>::set_begin() 33 14 42.42% ['fuzz_json_to_grammar']
nlohmann::json_abi_v3_12_0::detail::iter_impl ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>>::operator*() const 33 14 42.42% ['fuzz_json_to_grammar']
nlohmann::json_abi_v3_12_0::detail::serializer ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>>::dump(nlohmann::json_abi_v3_12_0::basic_json ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void> const&,bool,bool,unsigned int,unsigned int) 215 99 46.04% ['fuzz_json_to_grammar']
nlohmann::json_abi_v3_12_0::detail::serializer ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>>::dump_escaped(std::__1::basic_string ,std::__1::allocator > const&,bool) 188 94 50.0% ['fuzz_json_to_grammar']
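The "Percentage covered" column above is the ratio of lines covered at runtime to the function's total lines. A minimal sketch, using values taken from the table (the exact rounding style used by the report, e.g. "4.387%", is an assumption and may differ from simple two-decimal rounding):

```python
# Recompute the "Percentage covered" column: covered lines / total lines.
# Inputs are taken from the table rows above.
def percent_covered(lines_covered: int, total_lines: int) -> float:
    """Return the line-coverage percentage, rounded to two decimals."""
    return round(100.0 * lines_covered / total_lines, 2)

print(percent_covered(19, 38))    # parse_token -> 50.0
print(percent_covered(88, 186))   # llama_model_loader ctor -> 47.31
print(percent_covered(94, 188))   # serializer::dump_escaped -> 50.0
```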

Files and Directories in report

This section shows which files and directories are considered in this report. The main reason for showing this is that Fuzz Introspector may include more code in its analysis than is desired, e.g. third-party code that is irrelevant to the threat model. This section helps identify whether too many files or directories are included. If too much is included, Fuzz Introspector supports a configuration file that can exclude data from the report. See the following link for more information on how to create a config file: link

Files in report

Source file Reached by Covered by
/src/llama.cpp/ggml/src/ggml-cpu/vec.h ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/bailingmoe2.cpp [] []
/src/llama.cpp/ggml/src/ggml-backend.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/mistral3.cpp [] []
/src/llama.cpp/src/unicode-data.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/repack.cpp [] []
/src/llama.cpp/src/models/arcee.cpp [] []
/src/llama.cpp/fuzzers/fuzz_structured.cpp ['fuzz_structured'] ['fuzz_structured']
/src/llama.cpp/src/models/smollm3.cpp [] []
/src/llama.cpp/src/llama.cpp ['fuzz_apply_template', 'fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_apply_template', 'fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/ggml/src/ggml-cpu/ops.cpp [] []
/src/llama.cpp/src/llama-context.cpp ['fuzz_inference'] []
/src/llama.cpp/src/models/bert.cpp [] []
/src/llama.cpp/src/models/ernie4-5.cpp [] []
/src/llama.cpp/common/json-schema-to-grammar.h ['fuzz_json_to_grammar'] []
/src/llama.cpp/src/models/wavtokenizer-dec.cpp [] []
/src/llama.cpp/src/models/grovemoe.cpp [] []
/src/llama.cpp/src/models/stablelm.cpp [] []
/usr/local/bin/../include/c++/v1/__exception/exception_ptr.h [] []
/src/llama.cpp/src/models/grok.cpp [] []
/src/llama.cpp/src/models/plamo2.cpp [] []
/usr/local/bin/../include/c++/v1/string [] []
/src/llama.cpp/src/models/xverse.cpp [] []
/src/llama.cpp/src/models/codeshell.cpp [] []
/src/llama.cpp/src/llama-batch.h ['fuzz_inference'] []
/src/llama.cpp/src/models/mimo2-iswa.cpp [] []
/src/llama.cpp/ggml/src/ggml.c ['fuzz_grammar', 'fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/common/common.cpp ['fuzz_inference', 'fuzz_json_to_grammar'] ['fuzz_inference', 'fuzz_json_to_grammar']
/src/llama.cpp/src/models/openai-moe-iswa.cpp [] []
/src/llama.cpp/src/llama-model-loader.h ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/src/models/llama.cpp [] []
/src/llama.cpp/common/../vendor/nlohmann/json_fwd.hpp [] []
/src/llama.cpp/ggml/src/ggml-opt.cpp ['fuzz_inference'] []
/src/llama.cpp/src/models/qwen3.cpp [] []
/src/llama.cpp/src/llama-model.h [] []
/src/llama.cpp/src/models/hunyuan-moe.cpp [] []
/src/llama.cpp/src/llama-grammar.cpp ['fuzz_grammar'] ['fuzz_grammar']
/src/llama.cpp/src/models/gemma3.cpp [] []
/src/llama.cpp/src/models/olmoe.cpp [] []
/src/llama.cpp/src/models/qwen3moe.cpp [] []
/src/llama.cpp/src/models/rwkv6-base.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/common.h [] []
/src/llama.cpp/src/unicode.h ['fuzz_grammar', 'fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] []
/src/llama.cpp/src/models/rwkv6qwen2.cpp [] []
/src/llama.cpp/ggml/src/ggml-alloc.c ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] []
/src/llama.cpp/src/models/cohere2-iswa.cpp [] []
/src/llama.cpp/src/llama-io.h [] []
/src/llama.cpp/src/models/gemma-embedding.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/traits.cpp [] []
/src/llama.cpp/src/llama-memory-recurrent.h [] []
/src/llama.cpp/src/models/qwen3vl-moe.cpp [] []
/src/llama.cpp/src/models/rwkv6.cpp [] []
/src/llama.cpp/src/models/deepseek2.cpp [] []
/src/llama.cpp/src/models/falcon-h1.cpp [] []
/src/llama.cpp/src/models/jais.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/llamafile/sgemm.cpp [] []
/src/llama.cpp/src/models/t5-dec.cpp [] []
/src/llama.cpp/src/models/bloom.cpp [] []
/src/llama.cpp/src/models/qwen3vl.cpp [] []
/src/llama.cpp/src/models/jamba.cpp [] []
/src/llama.cpp/src/llama-adapter.h ['fuzz_inference'] []
/src/llama.cpp/src/models/gptneox.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/unary-ops.cpp [] []
/src/llama.cpp/src/models/internlm2.cpp [] []
/src/llama.cpp/src/models/lfm2.cpp [] []
/src/llama.cpp/src/models/cogvlm.cpp [] []
/src/llama.cpp/src/llama-memory-recurrent.cpp ['fuzz_inference'] []
/src/llama.cpp/ggml/src/ggml-impl.h ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/dbrx.cpp [] []
/src/llama.cpp/src/unicode.cpp ['fuzz_grammar', 'fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_grammar', 'fuzz_inference']
/src/llama.cpp/ggml/src/../include/ggml-cpp.h [] []
/src/llama.cpp/src/models/llada-moe.cpp [] []
/src/llama.cpp/ggml/src/gguf.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/ggml/src/ggml-cpu/traits.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/simd-mappings.h [] []
/src/llama.cpp/src/models/qwen2vl.cpp [] []
/src/llama.cpp/common/../vendor/nlohmann/json.hpp [] ['fuzz_json_to_grammar']
/src/llama.cpp/src/models/dots1.cpp [] []
/src/llama.cpp/src/models/modern-bert.cpp [] []
/src/llama.cpp/src/models/smallthinker.cpp [] []
/src/llama.cpp/src/llama-grammar.h ['fuzz_grammar'] ['fuzz_grammar']
/src/llama.cpp/src/models/olmo2.cpp [] []
/src/llama.cpp/src/models/plm.cpp [] []
/src/llama.cpp/src/llama-kv-cache.h ['fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/src/llama-graph.cpp ['fuzz_inference'] []
/src/llama.cpp/common/json-schema-to-grammar.cpp ['fuzz_json_to_grammar'] ['fuzz_json_to_grammar']
/src/llama.cpp/src/llama-mmap.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/nemotron-h.cpp [] []
/src/llama.cpp/src/models/rwkv7-base.cpp [] []
/src/llama.cpp/src/llama-adapter.cpp [] []
/src/llama.cpp/vendor/nlohmann/json_fwd.hpp [] []
/src/llama.cpp/src/models/models.h [] []
/src/llama.cpp/src/models/glm4.cpp [] []
/src/llama.cpp/src/llama-impl.cpp ['fuzz_grammar', 'fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_grammar', 'fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/phi3.cpp [] []
/src/llama.cpp/ggml/src/./ggml-impl.h ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/gemma2-iswa.cpp [] []
/src/llama.cpp/src/llama-vocab.h [] []
/src/llama.cpp/src/llama-memory.cpp [] []
/src/llama.cpp/src/models/deci.cpp [] []
/src/llama.cpp/src/models/t5-enc.cpp [] []
/src/llama.cpp/fuzzers/fuzz_load_model.cpp ['fuzz_load_model'] ['fuzz_load_model']
/src/llama.cpp/src/llama-model-loader.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/src/llama-memory.h ['fuzz_inference'] []
/src/llama.cpp/src/models/qwen2moe.cpp [] []
/src/llama.cpp/common/sampling.h [] []
/src/llama.cpp/src/models/baichuan.cpp [] []
/src/llama.cpp/src/models/minicpm3.cpp [] []
/src/llama.cpp/fuzzers/fuzz_grammar.cpp ['fuzz_grammar'] ['fuzz_grammar']
/src/llama.cpp/src/models/arctic.cpp [] []
/src/llama.cpp/common/log.cpp [] []
/src/llama.cpp/src/models/arwkv7.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/quants.c [] []
/src/llama.cpp/src/llama-sampling.cpp [] []
/src/llama.cpp/src/llama-kv-cache-iswa.cpp ['fuzz_inference'] []
/src/llama.cpp/src/../include/llama-cpp.h [] []
/src/llama.cpp/src/llama-arch.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/src/llama-model.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/phi2.cpp [] []
/usr/local/bin/../include/c++/v1/istream [] []
/src/llama.cpp/src/models/llama-iswa.cpp [] []
/src/llama.cpp/src/models/command-r.cpp [] []
/src/llama.cpp/src/models/qwen2.cpp [] []
/src/llama.cpp/src/llama-context.h [] []
/src/llama.cpp/src/llama-hparams.h ['fuzz_inference'] []
/src/llama.cpp/src/llama-graph.h ['fuzz_inference'] []
/src/llama.cpp/fuzzers/fuzz_apply_template.cpp ['fuzz_apply_template'] ['fuzz_apply_template']
/src/llama.cpp/vendor/nlohmann/json.hpp ['fuzz_json_to_grammar'] ['fuzz_json_to_grammar']
/src/llama.cpp/src/models/bailingmoe.cpp [] []
/src/llama.cpp/src/models/seed-oss.cpp [] []
/src/llama.cpp/src/llama-sampling.h [] []
/src/llama.cpp/src/llama-kv-cache.cpp ['fuzz_inference'] []
/src/llama.cpp/src/llama-memory-hybrid.h [] []
/src/llama.cpp/src/models/olmo.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/llama-vocab.cpp ['fuzz_grammar', 'fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_grammar', 'fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/ggml/src/ggml-threading.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/ggml/src/ggml-quants.c ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/common/common.h ['fuzz_inference'] []
/src/llama.cpp/src/models/starcoder.cpp [] []
/src/llama.cpp/src/models/gemma.cpp [] []
/src/llama.cpp/src/models/starcoder2.cpp [] []
/src/llama.cpp/src/models/chameleon.cpp [] []
/usr/local/bin/../include/c++/v1/stdexcept ['fuzz_grammar', 'fuzz_load_model', 'fuzz_structured', 'fuzz_inference', 'fuzz_json_to_grammar'] []
/src/llama.cpp/src/llama-chat.cpp ['fuzz_apply_template'] ['fuzz_apply_template']
/src/llama.cpp/src/llama-kv-cells.h ['fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/gpt2.cpp [] []
/src/llama.cpp/src/models/granite.cpp [] []
/src/llama.cpp/src/models/rwkv7.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/binary-ops.cpp [] []
/src/llama.cpp/src/models/dream.cpp [] []
/src/llama.cpp/fuzzers/fuzz_inference.cpp ['fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/src/models/apertus.cpp [] []
/src/llama.cpp/src/models/orion.cpp [] []
/src/llama.cpp/src/llama-kv-cache-iswa.h [] []
/src/llama.cpp/src/llama-hparams.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] []
/src/llama.cpp/src/models/glm4-moe.cpp [] []
/src/llama.cpp/src/models/neo-bert.cpp [] []
/src/llama.cpp/src/models/nemotron.cpp [] []
/src/llama.cpp/src/models/mamba.cpp [] []
/src/llama.cpp/src/models/granite-hybrid.cpp [] []
/src/llama.cpp/src/llama-memory-hybrid.cpp ['fuzz_inference'] []
/src/llama.cpp/src/models/pangu-embedded.cpp [] []
/src/llama.cpp/src/models/hunyuan-dense.cpp [] []
/src/llama.cpp/src/llama-model-saver.cpp [] []
/src/llama.cpp/src/models/exaone.cpp [] []
/src/llama.cpp/src/models/bitnet.cpp [] []
/src/llama.cpp/src/models/mpt.cpp [] []
/src/llama.cpp/src/models/../llama-graph.h [] []
/src/llama.cpp/src/llama-arch.h ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] []
/src/llama.cpp/src/models/graph-context-mamba.cpp [] []
/src/llama.cpp/src/models/qwen.cpp [] []
/src/llama.cpp/src/models/exaone4.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/vec.cpp [] []
/src/llama.cpp/src/models/llada.cpp [] []
/src/llama.cpp/src/models/minimax-m2.cpp [] []
/src/llama.cpp/src/models/chatglm.cpp [] []
/src/llama.cpp/src/models/refact.cpp [] []
/src/llama.cpp/src/models/openelm.cpp [] []
/src/llama.cpp/src/llama-io.cpp [] []
/usr/local/bin/../include/c++/v1/__exception/exception.h ['fuzz_json_to_grammar'] []
/src/llama.cpp/src/llama-batch.cpp ['fuzz_inference'] []
/src/llama.cpp/src/models/rnd1.cpp [] []
/src/llama.cpp/src/models/plamo.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/quants.c [] []
/src/llama.cpp/common/sampling.cpp [] []
/src/llama.cpp/src/models/ernie4-5-moe.cpp [] []
/src/llama.cpp/fuzzers/fuzz_json_to_grammar.cpp ['fuzz_json_to_grammar'] ['fuzz_json_to_grammar']
/src/llama.cpp/src/models/afmoe.cpp [] []
/src/llama.cpp/src/models/deepseek.cpp [] []
/src/llama.cpp/ggml/src/ggml-backend-reg.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/gemma3n-iswa.cpp [] []
/src/llama.cpp/src/models/qwen3next.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/repack.cpp [] []
/src/llama.cpp/src/models/falcon.cpp [] []

Directories in report

Directory
/src/llama.cpp/common/../vendor/nlohmann/
/src/llama.cpp/vendor/nlohmann/
/src/llama.cpp/ggml/src/
/src/llama.cpp/src/../include/
/src/llama.cpp/ggml/src/ggml-cpu/llamafile/
/src/llama.cpp/fuzzers/
/usr/local/bin/../include/c++/v1/
/src/llama.cpp/src/models/
/src/llama.cpp/ggml/src/./
/src/llama.cpp/src/models/../
/src/llama.cpp/ggml/src/../include/
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/
/usr/local/bin/../include/c++/v1/__exception/
/src/llama.cpp/ggml/src/ggml-cpu/
/src//src/
/src/llama.cpp/src/
/src//src/models/
/src/llama.cpp/common/