Fuzz introspector
For issues and ideas: https://github.com/ossf/fuzz-introspector/issues

Project functions overview

The following table shows data about each function in the project. The table covers every function present in the fuzzer executables, so it may include functions from third-party libraries.

For further technical details on the meaning of the columns in the table below, please see the Glossary.

Func name Functions filename Args Function call depth Reached by Fuzzers Runtime reached by Fuzzers Combined reached by Fuzzers Fuzzers runtime hit Func lines hit % I Count BB Count Cyclomatic complexity Functions reached Reached by functions Accumulated cyclomatic complexity Undiscovered complexity

Fuzzer details

Fuzzer: fuzz_apply_template

Call tree

The calltree shows the control flow of the fuzzer. It is overlaid with coverage information to show how much of the code the fuzzer can potentially reach is actually covered at runtime. Below there is a link to a detailed calltree visualisation as well as a bitmap giving a high-level view of the calltree. For further information about these topics, please see the glossary entries for full calltree and calltree overview.

Call tree overview bitmap:

The distribution of callsites in terms of coloring is
Color Runtime hitcount Callsite count Percentage
red 0 1 1.14%
gold [1:9] 8 9.19%
yellow [10:29] 9 10.3%
greenyellow [30:49] 8 9.19%
lawngreen 50+ 61 70.1%
All colors 87 100%
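The legend above is a simple bucketing of runtime hit counts. A sketch of that bucketing (the helper function is ours, not part of Fuzz Introspector; thresholds are copied from the legend):

```python
# Map a callsite's runtime hit count to the report's legend color.
# Thresholds mirror the legend: 0, [1:9], [10:29], [30:49], 50+.
def callsite_color(hitcount: int) -> str:
    if hitcount == 0:
        return "red"
    if hitcount <= 9:
        return "gold"
    if hitcount <= 29:
        return "yellow"
    if hitcount <= 49:
        return "greenyellow"
    return "lawngreen"

print(callsite_color(0))   # red
print(callsite_color(50))  # lawngreen
```

Red callsites are never executed at runtime and are the usual starting point when looking for fuzz blockers.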

Fuzz blockers

The following nodes represent call sites where fuzz blockers occur.

Amount of callsites blocked Calltree index Parent function Callsite Largest blocked function
1 75 llm_chat_apply_template(llm_chat_template, std::__1::vector > const&, std::__1::basic_string , std::__1::allocator >&, bool) call site: 00075

Runtime coverage analysis

Covered functions
7
Functions that are reachable but not covered
5
Reachable functions
17
Percentage of reachable functions covered
70.59%
NB: The sum of covered functions and functions that are reachable but not covered need not equal the number of reachable functions. This is because the reachability analysis is an approximation: at runtime, some functions may be covered that are not included in the static reachability analysis. This is a limitation of our static analysis capabilities.
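The discrepancy can be illustrated with a toy example (function names are made up for illustration):

```python
# Static analysis approximates the reachable set; runtime coverage is
# observed directly, so the two sets need not nest cleanly.
static_reachable = {"parse", "lex", "emit"}    # from the static call graph
runtime_covered = {"parse", "lex", "helper"}   # from the coverage profile

covered_and_reachable = static_reachable & runtime_covered  # {"parse", "lex"}
reachable_not_covered = static_reachable - runtime_covered  # {"emit"}

# "helper" was covered at runtime (e.g. via an indirect call) despite never
# appearing in the static reachable set, so raw covered counts will not
# reconcile with the static totals.
print(len(covered_and_reachable), len(reachable_not_covered))
```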

Files reached

filename functions hit
/src/llama.cpp/fuzzers/fuzz_apply_template.cpp 1
/src/llama.cpp/src/llama.cpp 1
/src/llama.cpp/src/llama-chat.cpp 5

Fuzzer: fuzz_grammar

Call tree


Call tree overview bitmap:

The distribution of callsites in terms of coloring is
Color Runtime hitcount Callsite count Percentage
red 0 26 19.4%
gold [1:9] 34 25.3%
yellow [10:29] 19 14.1%
greenyellow [30:49] 4 2.98%
lawngreen 50+ 51 38.0%
All colors 134 100%

Fuzz blockers

The following nodes represent call sites where fuzz blockers occur.

Amount of callsites blocked Calltree index Parent function Callsite Largest blocked function
11 26 llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string , std::__1::allocator > const&, std::__1::vector >&, bool) call site: 00026 __cxa_allocate_exception
5 38 parse_char(char const*) call site: 00038 __cxa_allocate_exception
5 50 llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string , std::__1::allocator > const&, std::__1::vector >&, bool) call site: 00050 __cxa_allocate_exception
4 92 parse_int(char const*) call site: 00092 __cxa_allocate_exception
1 109 llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string , std::__1::allocator > const&, std::__1::vector >&, bool) call site: 00109

Runtime coverage analysis

Covered functions
16
Functions that are reachable but not covered
17
Reachable functions
49
Percentage of reachable functions covered
65.31%

Files reached

filename functions hit
/src/llama.cpp/fuzzers/fuzz_grammar.cpp 1
/src/llama.cpp/src/llama-grammar.h 2
/src/llama.cpp/src/llama-grammar.cpp 16

Fuzzer: fuzz_load_model

Call tree


Call tree overview bitmap:

The distribution of callsites in terms of coloring is
Color Runtime hitcount Callsite count Percentage
red 0 5988 98.4%
gold [1:9] 68 1.11%
yellow [10:29] 5 0.08%
greenyellow [30:49] 6 0.09%
lawngreen 50+ 15 0.24%
All colors 6082 100%

Fuzz blockers

The following nodes represent call sites where fuzz blockers occur.

Amount of callsites blocked Calltree index Parent function Callsite Largest blocked function
5959 104 llama_model_load_from_file_impl(std::__1::basic_string , std::__1::allocator > const&, std::__1::vector , std::__1::allocator >, std::__1::allocator , std::__1::allocator > > >&, llama_model_params) call site: 00104 gguf_init_from_file
12 6 ggml_init call site: 00006 ggml_abort
10 93 ggml_backend_dev_type call site: 00093 ggml_backend_dev_backend_reg
2 6064 llama_model::~llama_model() call site: 06064 llama_free_model
1 19 ggml_init call site: 00019 ggml_log_internal
1 21 ggml_aligned_malloc call site: 00021 ggml_log_internal
1 49 ggml_cpu_init call site: 00049 clock_gettime
1 77 get_reg() call site: 00077
1 80 llama_log_internal_v(ggml_log_level, char const*, __va_list_tag*) call site: 00080 vsnprintf

Runtime coverage analysis

Covered functions
55
Functions that are reachable but not covered
889
Reachable functions
949
Percentage of reachable functions covered
6.32%

Files reached

filename functions hit
/src/llama.cpp/fuzzers/fuzz_load_model.cpp 2
/src/llama.cpp/src/llama.cpp 10
/src/llama.cpp/ggml/src/ggml.c 69
/src/llama.cpp/ggml/src/ggml-threading.cpp 2
/src/llama.cpp/ggml/src/ggml-backend-reg.cpp 13
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp 1
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c 1
/src/llama.cpp/ggml/src/./ggml-impl.h 4
/src/llama.cpp/ggml/src/ggml-cpu/vec.h 2
/src/llama.cpp/ggml/src/ggml-backend.cpp 48
/src/llama.cpp/src/llama-impl.cpp 10
/src/llama.cpp/src/llama-model.cpp 27
/src/llama.cpp/src/llama-vocab.cpp 87
/src/llama.cpp/src/llama-model-loader.cpp 91
/src/llama.cpp/src/llama-arch.cpp 7
/src/llama.cpp/ggml/src/gguf.cpp 79
/src/llama.cpp/src/llama-mmap.cpp 19
/src/llama.cpp/src/llama-model-loader.h 2
/src/llama.cpp/src/llama-hparams.cpp 10
/src/llama.cpp/src/unicode.cpp 34
/usr/local/bin/../include/c++/v1/stdexcept 1
/src/llama.cpp/src/unicode.h 3
/src/llama.cpp/src/llama-arch.h 4
/src/llama.cpp/ggml/src/ggml-impl.h 1
/src/llama.cpp/ggml/src/ggml-alloc.c 6
/src/llama.cpp/ggml/src/ggml-quants.c 10

Fuzzer: fuzz_structured

Call tree


Call tree overview bitmap:

The distribution of callsites in terms of coloring is
Color Runtime hitcount Callsite count Percentage
red 0 5988 98.3%
gold [1:9] 58 0.95%
yellow [10:29] 6 0.09%
greenyellow [30:49] 2 0.03%
lawngreen 50+ 33 0.54%
All colors 6087 100%

Fuzz blockers

The following nodes represent call sites where fuzz blockers occur.

Amount of callsites blocked Calltree index Parent function Callsite Largest blocked function
5959 108 llama_model_load_from_file_impl(std::__1::basic_string , std::__1::allocator > const&, std::__1::vector , std::__1::allocator >, std::__1::allocator , std::__1::allocator > > >&, llama_model_params) call site: 00108 gguf_init_from_file
12 6 ggml_init call site: 00006 ggml_abort
10 97 ggml_backend_dev_type call site: 00097 ggml_backend_dev_backend_reg
2 6068 llama_model::~llama_model() call site: 06068 llama_free_model
1 19 ggml_init call site: 00019 ggml_log_internal
1 21 ggml_aligned_malloc call site: 00021 ggml_log_internal
1 53 ggml_cpu_init call site: 00053 clock_gettime
1 81 get_reg() call site: 00081
1 84 llama_log_internal_v(ggml_log_level, char const*, __va_list_tag*) call site: 00084 vsnprintf

Runtime coverage analysis

Covered functions
55
Functions that are reachable but not covered
890
Reachable functions
950
Percentage of reachable functions covered
6.32%

Files reached

filename functions hit
/src/llama.cpp/fuzzers/fuzz_structured.cpp 2
/src/llama.cpp/src/llama.cpp 10
/src/llama.cpp/ggml/src/ggml.c 69
/src/llama.cpp/ggml/src/ggml-threading.cpp 2
/src/llama.cpp/ggml/src/ggml-backend-reg.cpp 13
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp 1
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c 1
/src/llama.cpp/ggml/src/./ggml-impl.h 4
/src/llama.cpp/ggml/src/ggml-cpu/vec.h 2
/src/llama.cpp/ggml/src/ggml-backend.cpp 48
/src/llama.cpp/src/llama-impl.cpp 10
/src/llama.cpp/src/llama-model.cpp 27
/src/llama.cpp/src/llama-vocab.cpp 87
/src/llama.cpp/src/llama-model-loader.cpp 91
/src/llama.cpp/src/llama-arch.cpp 7
/src/llama.cpp/ggml/src/gguf.cpp 79
/src/llama.cpp/src/llama-mmap.cpp 19
/src/llama.cpp/src/llama-model-loader.h 2
/src/llama.cpp/src/llama-hparams.cpp 10
/src/llama.cpp/src/unicode.cpp 34
/usr/local/bin/../include/c++/v1/stdexcept 1
/src/llama.cpp/src/unicode.h 3
/src/llama.cpp/src/llama-arch.h 4
/src/llama.cpp/ggml/src/ggml-impl.h 1
/src/llama.cpp/ggml/src/ggml-alloc.c 6
/src/llama.cpp/ggml/src/ggml-quants.c 10

Fuzzer: fuzz_inference

Call tree


Call tree overview bitmap:

The distribution of callsites in terms of coloring is
Color Runtime hitcount Callsite count Percentage
red 0 6334 92.9%
gold [1:9] 98 1.43%
yellow [10:29] 15 0.22%
greenyellow [30:49] 5 0.07%
lawngreen 50+ 361 5.29%
All colors 6813 100%

Fuzz blockers

The following nodes represent call sites where fuzz blockers occur.

Amount of callsites blocked Calltree index Parent function Callsite Largest blocked function
4786 1167 llama_model_load(std::__1::basic_string , std::__1::allocator > const&, std::__1::vector , std::__1::allocator >, std::__1::allocator , std::__1::allocator > > >&, llama_model&, llama_model_params&) call site: 01167 llama_supports_gpu_offload
714 6083 llama_model::~llama_model() call site: 06083 llama_new_context_with_model
390 771 llama_model::load_hparams(llama_model_loader&) call site: 00771 llama_model_rope_type
112 5954 llama_file::impl::seek(unsigned long, int) const call site: 05954 ggml_validate_row_data
79 439 GGUFMeta::GKV ::set(gguf_context const*, int, unsigned short&, llama_model_kv_override const*) call site: 00439 gguf_init_from_file
28 674 GGUFMeta::GKV ::get_kv(gguf_context const*, int) call site: 00674 gguf_find_key
16 27 LLVMFuzzerTestOneInput call site: 00027
15 658 llama_model::load_hparams(llama_model_loader&) call site: 00658 gguf_find_key
14 350 _ZN8GGUFMeta3GKVINSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEEE12try_overrideIS7_EENS1_9enable_ifIXsr3std7is_sameIT_S7_EE5valueEbE4typeERS7_PK23llama_model_kv_override call site: 00350 __cxa_allocate_exception
12 519 GGUFMeta::GKV ::get_kv(gguf_context const*, int) call site: 00519 gguf_get_val_i32
11 112 ggml_backend_dev_type call site: 00112 ggml_backend_dev_backend_reg
11 748 llama_model::load_hparams(llama_model_loader&) call site: 00748 ggml_abort

Runtime coverage analysis

Covered functions
222
Functions that are reachable but not covered
903
Reachable functions
1233
Percentage of reachable functions covered
26.76%

Files reached

filename functions hit
/src/llama.cpp/fuzzers/fuzz_inference.cpp 1
/src/llama.cpp/src/llama.cpp 11
/src/llama.cpp/ggml/src/ggml.c 98
/src/llama.cpp/ggml/src/ggml-threading.cpp 2
/src/llama.cpp/common/common.h 13
/src/llama.cpp/common/common.cpp 2
/src/llama.cpp/src/llama-model.cpp 34
/src/llama.cpp/ggml/src/ggml-backend-reg.cpp 14
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp 1
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c 1
/src/llama.cpp/ggml/src/./ggml-impl.h 4
/src/llama.cpp/ggml/src/ggml-cpu/vec.h 2
/src/llama.cpp/ggml/src/ggml-backend.cpp 75
/src/llama.cpp/src/llama-impl.cpp 10
/src/llama.cpp/src/llama-vocab.cpp 87
/src/llama.cpp/src/llama-model-loader.cpp 91
/src/llama.cpp/src/llama-arch.cpp 8
/src/llama.cpp/ggml/src/gguf.cpp 79
/src/llama.cpp/src/llama-mmap.cpp 19
/src/llama.cpp/src/llama-model-loader.h 2
/src/llama.cpp/src/llama-hparams.cpp 17
/src/llama.cpp/src/unicode.cpp 34
/usr/local/bin/../include/c++/v1/stdexcept 1
/src/llama.cpp/src/unicode.h 3
/src/llama.cpp/src/llama-arch.h 4
/src/llama.cpp/ggml/src/ggml-impl.h 14
/src/llama.cpp/ggml/src/ggml-alloc.c 31
/src/llama.cpp/ggml/src/ggml-quants.c 10
/src/llama.cpp/src/llama-context.cpp 14
/src/llama.cpp/src/llama-adapter.h 2
/src/llama.cpp/src/llama-graph.h 6
/src/llama.cpp/src/llama-memory-recurrent.cpp 4
/src/llama.cpp/src/llama-memory.h 2
/src/llama.cpp/src/llama-memory-hybrid.cpp 1
/src/llama.cpp/src/llama-kv-cache.cpp 4
/src/llama.cpp/src/llama-kv-cache.h 3
/src/llama.cpp/src/llama-kv-cells.h 3
/src/llama.cpp/src/llama-kv-cache-iswa.cpp 1
/src/llama.cpp/src/llama-graph.cpp 8
/src/llama.cpp/src/llama-hparams.h 1
/src/llama.cpp/src/llama-batch.h 5
/src/llama.cpp/src/llama-batch.cpp 5
/src/llama.cpp/ggml/src/ggml-opt.cpp 2

Fuzzer: fuzz_json_to_grammar

Call tree


Call tree overview bitmap:

The distribution of callsites in terms of coloring is
Color Runtime hitcount Callsite count Percentage
red 0 367 43.1%
gold [1:9] 3 0.35%
yellow [10:29] 8 0.94%
greenyellow [30:49] 4 0.47%
lawngreen 50+ 468 55.0%
All colors 850 100%

Fuzz blockers

The following nodes represent call sites where fuzz blockers occur.

Amount of callsites blocked Calltree index Parent function Callsite Largest blocked function
93 472 void nlohmann::json_abi_v3_12_0::detail::external_constructor<(nlohmann::json_abi_v3_12_0::detail::value_t)6>::construct , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> >(nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>&, nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::number_unsigned_t) call site: 00472 _ZN8nlohmann16json_abi_v3_12_06detail11parse_error6createIDnTnNSt3__19enable_ifIXsr21is_basic_json_contextIT_EE5valueEiE4typeELi0EEES2_iRKNS1_10position_tERKNS4_12basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEES6_
39 242 nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::end() call site: 00242 _ZN8nlohmann16json_abi_v3_12_010basic_jsonINSt3__13mapENS2_6vectorENS2_12basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEblmdS8_NS0_14adl_serializerENS4_IhNS8_IhEEEEvE5eraseINS0_6detail9iter_implISE_EETnNS2_9enable_ifIXoosr3std7is_sameIT_SI_EE5valuesr3std7is_sameISK_NSH_IKSE_EEEE5valueEiE4typeELi0EEESK_SK_
21 370 _ZN8nlohmann16json_abi_v3_12_010basic_jsonINSt3__13mapENS2_6vectorENS2_12basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEblmdS8_NS0_14adl_serializerENS4_IhNS8_IhEEEEvEC2IRSA_SA_TnNS2_9enable_ifIXaantsr6detail13is_basic_jsonIT0_EE5valuesr6detail18is_compatible_typeISE_SI_EE5valueEiE4typeELi0EEEOT_ call site: 00370 _ZN8nlohmann16json_abi_v3_12_06detail11parse_error6createIDnTnNSt3__19enable_ifIXsr21is_basic_json_contextIT_EE5valueEiE4typeELi0EEES2_iRKNS1_10position_tERKNS4_12basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEES6_
16 392 std::__1::basic_string , std::__1::allocator > nlohmann::json_abi_v3_12_0::detail::concat , std::__1::allocator >, char const (&) [23], std::__1::basic_string , std::__1::allocator > >(char const (&) [23], std::__1::basic_string , std::__1::allocator >&&) call site: 00392 _ZN8nlohmann16json_abi_v3_12_06detail12out_of_range6createIPNS0_10basic_jsonINSt3__13mapENS5_6vectorENS5_12basic_stringIcNS5_11char_traitsIcEENS5_9allocatorIcEEEEblmdSB_NS0_14adl_serializerENS7_IhNSB_IhEEEEvEETnNS5_9enable_ifIXsr21is_basic_json_contextIT_EE5valueEiE4typeELi0EEES2_iRKSD_SK_
15 218 nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::json_value::json_value(std::__1::basic_string , std::__1::allocator > const&) call site: 00218
15 345 nlohmann::json_abi_v3_12_0::detail::parse_error::parse_error(int, unsigned long, char const*) call site: 00345 _ZN8nlohmann16json_abi_v3_12_010basic_jsonINSt3__13mapENS2_6vectorENS2_12basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEblmdS8_NS0_14adl_serializerENS4_IhNS8_IhEEEEvEC2IRSA_SA_TnNS2_9enable_ifIXaantsr6detail13is_basic_jsonIT0_EE5valuesr6detail18is_compatible_typeISE_SI_EE5valueEiE4typeELi0EEEOT_
12 801 nlohmann::json_abi_v3_12_0::byte_container_with_subtype > > const& nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::get_ref_impl > > const&, nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> const>(nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> const&) call site: 00801 _ZN8nlohmann16json_abi_v3_12_014adl_serializerINS0_27byte_container_with_subtypeINSt3__16vectorIhNS3_9allocatorIhEEEEEEvE7to_jsonINS0_10basic_jsonINS0_11ordered_mapES4_NS3_12basic_stringIcNS3_11char_traitsIcEENS5_IcEEEEblmdS5_S1_S7_vEERKS8_EEDTcmclL_ZNS0_7to_jsonEEfp_clsr3stdE7forwardIT0_Efp0_EEcvv_EERT_OSL_
10 183 nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::operator=(nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>) call site: 00183 __cxa_allocate_exception
10 411 _ZN8nlohmann16json_abi_v3_12_06detail12out_of_range6createIDnTnNSt3__19enable_ifIXsr21is_basic_json_contextIT_EE5valueEiE4typeELi0EEES2_iRKNS4_12basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEES6_ call site: 00411 _ZN8nlohmann16json_abi_v3_12_010basic_jsonINSt3__13mapENS2_6vectorENS2_12basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEblmdS8_NS0_14adl_serializerENS4_IhNS8_IhEEEEvEC2IRddTnNS2_9enable_ifIXaantsr6detail13is_basic_jsonIT0_EE5valuesr6detail18is_compatible_typeISE_SI_EE5valueEiE4typeELi0EEEOT_
8 172 nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::data::~data() call site: 00172
8 204 nlohmann::json_abi_v3_12_0::detail::out_of_range::out_of_range(int, char const*) call site: 00204 __cxa_throw
8 440 void nlohmann::json_abi_v3_12_0::detail::external_constructor<(nlohmann::json_abi_v3_12_0::detail::value_t)4>::construct , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> >(nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>&, nlohmann::json_abi_v3_12_0::basic_json , std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::boolean_t) call site: 00440 _ZN8nlohmann16json_abi_v3_12_010basic_jsonINSt3__13mapENS2_6vectorENS2_12basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEblmdS8_NS0_14adl_serializerENS4_IhNS8_IhEEEEvEC2IRllTnNS2_9enable_ifIXaantsr6detail13is_basic_jsonIT0_EE5valuesr6detail18is_compatible_typeISE_SI_EE5valueEiE4typeELi0EEEOT_

Runtime coverage analysis

Covered functions
498
Functions that are reachable but not covered
159
Reachable functions
673
Percentage of reachable functions covered
76.37%

Files reached

filename functions hit
/src/llama.cpp/fuzzers/fuzz_json_to_grammar.cpp 1
/src/llama.cpp/vendor/nlohmann/json.hpp 375
/usr/local/bin/../include/c++/v1/__exception/exception.h 2
/src/llama.cpp/common/json-schema-to-grammar.cpp 6
/src/llama.cpp/common/common.cpp 1
/src/llama.cpp/common/json-schema-to-grammar.h 1

Analyses and suggestions

Optimal target analysis

Remaining optimal interesting functions

The following table shows a list of functions that are optimal targets. Optimal targets are identified by finding the functions that, in combination, yield high code coverage.

Func name Functions filename Arg count Args Function depth hitcount instr count bb count cyclomatic complexity Reachable functions Incoming references total cyclomatic complexity Unreached complexity
ggml_backend_cpu_graph_compute(ggml_backend*,ggml_cgraph*) /src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp 2 ['N/A', 'N/A'] 15 0 59 9 4 1712 0 7423 7283
SchemaConverter::visit(nlohmann::json_abi_v3_12_0::basic_json ,std::__1::allocator >,bool,long,unsignedlong,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>const&,std::__1::basic_string ,std::__1::allocator >const&) /src/llama.cpp/common/json-schema-to-grammar.cpp 4 ['N/A', 'N/A', 'N/A', 'N/A'] 18 0 2479 632 505 507 5 2821 2515
ggml_quantize_chunk /src/llama.cpp/ggml/src/ggml.c 7 ['int', 'N/A', 'N/A', 'size_t', 'size_t', 'size_t', 'N/A'] 4 0 274 41 2 99 0 2265 2220
common_init_from_params(common_params&) /src/llama.cpp/common/common.cpp 2 ['N/A', 'N/A'] 14 0 992 246 207 1402 0 13147 1627
llama_opt_epoch /src/llama.cpp/src/llama-context.cpp 7 ['N/A', 'N/A', 'N/A', 'N/A', 'size_t', 'N/A', 'N/A'] 13 0 12 3 2 390 0 2249 521
llm_build_granite_hybrid::llm_build_granite_hybrid(llama_modelconst&,llm_graph_paramsconst&) /src/llama.cpp/src/models/granite-hybrid.cpp 3 ['N/A', 'N/A', 'N/A'] 9 0 226 51 40 284 0 772 454
llama_model_save_to_file /src/llama.cpp/src/llama.cpp 2 ['N/A', 'N/A'] 8 0 53 12 9 214 0 543 366

Implementing fuzzers that target the above functions would improve reachability as follows:

Functions statically reachable by fuzzers
61.0%
2727 / 4486
Cyclomatic complexity statically reachable by fuzzers
68.0%
30079 / 44112
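A new fuzzer for one of the targets above would follow the standard libFuzzer harness shape. The sketch below is generic: the target function is a placeholder (a real harness would call one of the listed functions, e.g. llama_model_save_to_file), and splitting the raw input into several arguments is one common pattern for multi-argument targets, not a prescription from this report.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>

// Placeholder for the real target; a harness for this project would call
// one of the functions from the table above instead.
static void target_under_test(bool /*flag*/, const std::string& /*payload*/) {
    // intentionally empty in this sketch
}

// Standard libFuzzer entry point: the engine repeatedly calls this with
// mutated inputs; the harness decodes them into the target's arguments.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
    if (size < 1) return 0;  // need at least one byte for the flag
    const bool flag = data[0] & 1;
    const std::string payload(reinterpret_cast<const char*>(data + 1),
                              size - 1);
    target_under_test(flag, payload);
    return 0;  // libFuzzer expects 0 on a normal run
}
```

Compiled with `-fsanitize=fuzzer` (plus the project's libraries), such a harness would make the listed functions statically reachable and start contributing to the totals above.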

All functions overview

If you implement fuzzers for these functions, the status of all functions in the project will be:

Func name Functions filename Args Function call depth Reached by Fuzzers Runtime reached by Fuzzers Combined reached by Fuzzers Fuzzers runtime hit Func lines hit % I Count BB Count Cyclomatic complexity Functions reached Reached by functions Accumulated cyclomatic complexity Undiscovered complexity

Fuzz engine guidance

This section provides heuristics that can be used as input to a fuzz engine when running a given fuzz target. The current focus is on providing input that is usable by libFuzzer.

/src/llama.cpp/fuzzers/fuzz_apply_template.cpp

Dictionary

Use this with the libFuzzer -dict=DICT.file flag


Fuzzer function priority

Use one of these functions as input to libFuzzer with the -focus_function=NAME flag

-focus_function=['llm_chat_apply_template(llm_chat_template, std::__1::vector > const&, std::__1::basic_string, std::__1::allocator >&, bool)']
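Putting the two flags above together, a hypothetical invocation for this fuzzer (the dictionary tokens, corpus path, and binary name are illustrative; this report's dictionary section is empty, so the entries below are guesses, not tool output):

```shell
# Write a small libFuzzer dictionary: one name and quoted token per entry.
cat > chat_template.dict <<'EOF'
kw_system="<|system|>"
kw_user="<|user|>"
EOF

# Run the fuzzer with the dictionary and steer mutation toward the blocked
# function (paths and the focus-function name would come from this report):
# ./fuzz_apply_template corpus/ -dict=chat_template.dict \
#     -focus_function='llm_chat_apply_template(...)'
```

libFuzzer loads each `name="token"` entry and splices the tokens into mutated inputs, which helps when the target expects structured markers like chat-template tags.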

/src/llama.cpp/fuzzers/fuzz_grammar.cpp

Dictionary

Use this with the libFuzzer -dict=DICT.file flag


Fuzzer function priority

Use one of these functions as input to libFuzzer with the -focus_function=NAME flag

-focus_function=['llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string, std::__1::allocator > const&, std::__1::vector >&, bool)', 'parse_char(char const*)', 'llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string, std::__1::allocator > const&, std::__1::vector >&, bool)', 'parse_int(char const*)', 'llama_grammar_parser::parse_sequence(char const*, std::__1::basic_string, std::__1::allocator > const&, std::__1::vector >&, bool)']

/src/llama.cpp/fuzzers/fuzz_load_model.cpp

Dictionary

Use this with the libFuzzer -dict=DICT.file flag


Fuzzer function priority

Use one of these functions as input to libFuzzer with the -focus_function=NAME flag

-focus_function=['llama_model_load_from_file_impl(std::__1::basic_string, std::__1::allocator > const&, std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > >&, llama_model_params)', 'ggml_init', 'ggml_backend_dev_type', 'llama_model::~llama_model()', 'ggml_aligned_malloc', 'ggml_cpu_init', 'get_reg()', 'llama_log_internal_v(ggml_log_level, char const*, __va_list_tag*)']

/src/llama.cpp/fuzzers/fuzz_structured.cpp

Dictionary

Use this with the libFuzzer -dict=DICT.file flag


Fuzzer function priority

Use one of these functions as input to libFuzzer with the -focus_function=NAME flag

-focus_function=['llama_model_load_from_file_impl(std::__1::basic_string, std::__1::allocator > const&, std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > >&, llama_model_params)', 'ggml_init', 'ggml_backend_dev_type', 'llama_model::~llama_model()', 'ggml_aligned_malloc', 'ggml_cpu_init', 'get_reg()', 'llama_log_internal_v(ggml_log_level, char const*, __va_list_tag*)']

/src/llama.cpp/fuzzers/fuzz_inference.cpp

Dictionary

Use this with the libFuzzer -dict=DICT.file flag


Fuzzer function priority

Use one of these functions as input to libFuzzer with the -focus_function=NAME flag

-focus_function=['llama_model_load(std::__1::basic_string, std::__1::allocator > const&, std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > >&, llama_model&, llama_model_params&)', 'llama_model::~llama_model()', 'llama_model::load_hparams(llama_model_loader&)', 'llama_file::impl::seek(unsigned long, int) const', 'GGUFMeta::GKV::set(gguf_context const*, int, unsigned short&, llama_model_kv_override const*)', 'GGUFMeta::GKV::get_kv(gguf_context const*, int)', 'LLVMFuzzerTestOneInput', 'llama_model::load_hparams(llama_model_loader&)', '_ZN8GGUFMeta3GKVINSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEEE12try_overrideIS7_EENS1_9enable_ifIXsr3std7is_sameIT_S7_EE5valueEbE4typeERS7_PK23llama_model_kv_override', 'GGUFMeta::GKV::get_kv(gguf_context const*, int)']

/src/llama.cpp/fuzzers/fuzz_json_to_grammar.cpp

Dictionary

Use this with the libFuzzer -dict=DICT.file flag


Fuzzer function priority

Use one of these functions as input to libFuzzer with the -focus_function=NAME flag

-focus_function=['void nlohmann::json_abi_v3_12_0::detail::external_constructor<(nlohmann::json_abi_v3_12_0::detail::value_t)6>::construct, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> >(nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>&, nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::number_unsigned_t)', 'nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::end()', '_ZN8nlohmann16json_abi_v3_12_010basic_jsonINSt3__13mapENS2_6vectorENS2_12basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEEblmdS8_NS0_14adl_serializerENS4_IhNS8_IhEEEEvEC2IRSA_SA_TnNS2_9enable_ifIXaantsr6detail13is_basic_jsonIT0_EE5valuesr6detail18is_compatible_typeISE_SI_EE5valueEiE4typeELi0EEEOT_', 'std::__1::basic_string, std::__1::allocator > nlohmann::json_abi_v3_12_0::detail::concat, std::__1::allocator >, char const (&) [23], std::__1::basic_string, std::__1::allocator > >(char const (&) [23], std::__1::basic_string, std::__1::allocator >&&)', 'nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::json_value::json_value(std::__1::basic_string, std::__1::allocator > const&)', 'nlohmann::json_abi_v3_12_0::detail::parse_error::parse_error(int, unsigned long, char const*)', 'nlohmann::json_abi_v3_12_0::byte_container_with_subtype > > const& nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, 
std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::get_ref_impl > > const&, nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> const>(nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void> const&)', 'nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::operator=(nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>)', '_ZN8nlohmann16json_abi_v3_12_06detail12out_of_range6createIDnTnNSt3__19enable_ifIXsr21is_basic_json_contextIT_EE5valueEiE4typeELi0EEES2_iRKNS4_12basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEES6_', 'nlohmann::json_abi_v3_12_0::basic_json, std::__1::allocator >, bool, long, unsigned long, double, std::__1::allocator, nlohmann::json_abi_v3_12_0::adl_serializer, std::__1::vector >, void>::data::~data()']
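As an illustration, the flag is passed directly on the fuzzer's command line. A minimal sketch, assuming a libFuzzer binary and corpus directory (both placeholders here); the demangled symbol is one of the names listed above and must be quoted so the shell keeps it whole:

```shell
# Sketch: focus mutation effort on a single reachable function.
# The fuzzer binary (./fuzz_json_to_grammar) and corpus path are placeholders.
FOCUS='nlohmann::json_abi_v3_12_0::detail::parse_error::parse_error(int, unsigned long, char const*)'
# ./fuzz_json_to_grammar "-focus_function=${FOCUS}" ./corpus
echo "-focus_function=${FOCUS}"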

Runtime coverage analysis

This section shows analysis of runtime coverage data.

For further technical details on how this section is generated, please see the Glossary .

Complex functions with low coverage

Func name Function total lines Lines covered at runtime Percentage covered Reached by fuzzers
llama_model_loader::llama_model_loader(std::__1::basic_string ,std::__1::allocator > const&,std::__1::vector ,std::__1::allocator >,std::__1::allocator ,std::__1::allocator >>>&,bool,bool,llama_model_kv_override const*,llama_model_tensor_buft_override const*) 185 87 47.02% ['fuzz_structured', 'fuzz_inference', 'fuzz_load_model']
llama_model_ftype_name(llama_ftype) 41 9 21.95% ['fuzz_structured', 'fuzz_inference', 'fuzz_load_model']
llama_model::load_hparams(llama_model_loader&) 1528 72 4.712% ['fuzz_structured', 'fuzz_inference', 'fuzz_load_model']
nlohmann::json_abi_v3_12_0::basic_json ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>::json_value::json_value(nlohmann::json_abi_v3_12_0::detail::value_t) 60 17 28.33% ['fuzz_json_to_grammar']
nlohmann::json_abi_v3_12_0::detail::parser ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>,nlohmann::json_abi_v3_12_0::detail::iterator_input_adapter >::parse(bool,nlohmann::json_abi_v3_12_0::basic_json ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>&) 40 16 40.0% ['fuzz_json_to_grammar']
nlohmann::json_abi_v3_12_0::basic_json ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>::json_value::json_value(nlohmann::json_abi_v3_12_0::detail::value_t) 60 9 15.0% ['fuzz_json_to_grammar']
nlohmann::json_abi_v3_12_0::detail::iter_impl ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>>::set_begin() 33 14 42.42% ['fuzz_json_to_grammar']
nlohmann::json_abi_v3_12_0::detail::iter_impl ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>>::operator*() const 33 14 42.42% ['fuzz_json_to_grammar']
nlohmann::json_abi_v3_12_0::detail::serializer ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>>::dump(nlohmann::json_abi_v3_12_0::basic_json ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void> const&,bool,bool,unsigned int,unsigned int) 215 99 46.04% ['fuzz_json_to_grammar']
nlohmann::json_abi_v3_12_0::detail::serializer ,std::__1::allocator >,bool,long,unsigned long,double,std::__1::allocator,nlohmann::json_abi_v3_12_0::adl_serializer,std::__1::vector >,void>>::dump_escaped(std::__1::basic_string ,std::__1::allocator > const&,bool) 188 94 50.0% ['fuzz_json_to_grammar']
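The percentage column is simply the lines covered at runtime divided by the function's total lines. A quick sketch reproducing two rows from the table above (line counts are taken from the report; the three-decimal rounding is an assumption about how the report formats the value):

```python
def coverage_pct(covered: int, total: int) -> float:
    """Percentage of a function's source lines hit at runtime."""
    return round(100.0 * covered / total, 3)

# llama_model::load_hparams: 72 of 1528 lines covered
assert coverage_pct(72, 1528) == 4.712
# parser::parse: 16 of 40 lines covered
assert coverage_pct(16, 40) == 40.0
```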

Files and Directories in report

This section shows which files and directories are considered in this report. The main reason for showing this is that fuzz introspector may include more code in its reasoning than is desired, e.g. third-party code that is irrelevant for the threat model. This section helps identify whether too many files or directories are included. If too much is included, fuzz introspector supports a configuration file that can exclude data from the report. See the following link for more information on how to create a config file: link

Files in report

Source file Reached by Covered by
/src/llama.cpp/vendor/nlohmann/json.hpp ['fuzz_json_to_grammar'] ['fuzz_json_to_grammar']
/usr/local/bin/../include/c++/v1/stdexcept ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] []
/src/llama.cpp/src/models/chameleon.cpp [] []
/src/llama.cpp/src/llama-chat.cpp ['fuzz_apply_template'] ['fuzz_apply_template']
/src/llama.cpp/src/models/neo-bert.cpp [] []
/src/llama.cpp/ggml/src/ggml-backend-reg.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/quants.c [] []
/src/llama.cpp/src/models/nemotron-h.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/llama-model-loader.h ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/src/models/exaone4.cpp [] []
/src/llama.cpp/src/llama-kv-cache.cpp ['fuzz_inference'] []
/src/llama.cpp/src/models/cogvlm.cpp [] []
/src/llama.cpp/src/llama-memory-recurrent.cpp ['fuzz_inference'] []
/src/llama.cpp/src/llama-io.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/simd-mappings.h [] []
/src/llama.cpp/src/llama-model.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/llama-hparams.h ['fuzz_inference'] []
/src/llama.cpp/src/models/bert.cpp [] []
/src/llama.cpp/src/models/rnd1.cpp [] []
/src/llama.cpp/src/models/plamo2.cpp [] []
/src/llama.cpp/src/llama.cpp ['fuzz_apply_template', 'fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_apply_template', 'fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/llama-memory.cpp [] []
/src/llama.cpp/src/llama-model.h [] []
/src/llama.cpp/ggml/src/./ggml-impl.h ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/fuzzers/fuzz_structured.cpp ['fuzz_structured'] ['fuzz_structured']
/src/llama.cpp/src/llama-arch.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/fuzzers/fuzz_apply_template.cpp ['fuzz_apply_template'] ['fuzz_apply_template']
/src/llama.cpp/src/models/../llama-batch.h [] []
/src/llama.cpp/src/models/llama-iswa.cpp [] []
/src/llama.cpp/src/llama-model-saver.cpp [] []
/src/llama.cpp/src/models/deepseek.cpp [] []
/src/llama.cpp/src/models/granite.cpp [] []
/src/llama.cpp/vendor/nlohmann/json_fwd.hpp [] []
/src/llama.cpp/ggml/src/ggml-quants.c ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/unicode.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/src/models/gemma3-iswa.cpp [] []
/src/llama.cpp/ggml/src/ggml-alloc.c ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] []
/src/llama.cpp/src/llama-mmap.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/llama-memory-hybrid.cpp ['fuzz_inference'] []
/src/llama.cpp/src/models/openai-moe-iswa.cpp [] []
/src/llama.cpp/src/llama-batch.cpp ['fuzz_inference'] []
/src/llama.cpp/src/models/llada.cpp [] []
/src/llama.cpp/src/models/ernie4-5-moe.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/common.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/unary-ops.cpp [] []
/src/llama.cpp/src/models/stablelm.cpp [] []
/src/llama.cpp/common/../vendor/nlohmann/json_fwd.hpp [] []
/src/llama.cpp/src/llama-impl.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/qwen3.cpp [] []
/src/llama.cpp/fuzzers/fuzz_json_to_grammar.cpp ['fuzz_json_to_grammar'] ['fuzz_json_to_grammar']
/src/llama.cpp/fuzzers/fuzz_inference.cpp ['fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/src/llama-kv-cache.h ['fuzz_inference'] ['fuzz_inference']
/usr/local/bin/../include/c++/v1/istream [] []
/src/llama.cpp/src/models/gpt2.cpp [] []
/src/llama.cpp/src/llama-context.h [] []
/src/llama.cpp/ggml/src/ggml-backend.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/common/common.cpp ['fuzz_inference', 'fuzz_json_to_grammar'] ['fuzz_inference', 'fuzz_json_to_grammar']
/src/llama.cpp/src/llama-adapter.h ['fuzz_inference'] []
/src/llama.cpp/src/unicode-data.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/llamafile/sgemm.cpp [] []
/src/llama.cpp/src/models/qwen3moe.cpp [] []
/src/llama.cpp/src/models/command-r.cpp [] []
/src/llama.cpp/src/models/glm4.cpp [] []
/src/llama.cpp/ggml/src/ggml-threading.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/cohere2-iswa.cpp [] []
/src/llama.cpp/src/models/plamo.cpp [] []
/src/llama.cpp/common/json-schema-to-grammar.cpp ['fuzz_json_to_grammar'] ['fuzz_json_to_grammar']
/src/llama.cpp/src/models/baichuan.cpp [] []
/src/llama.cpp/src/models/wavtokenizer-dec.cpp [] []
/src/llama.cpp/src/models/deci.cpp [] []
/src/llama.cpp/src/models/smollm3.cpp [] []
/src/llama.cpp/fuzzers/fuzz_grammar.cpp ['fuzz_grammar'] ['fuzz_grammar']
/src/llama.cpp/src/models/minimax-m2.cpp [] []
/src/llama.cpp/common/log.cpp [] []
/src/llama.cpp/src/models/bloom.cpp [] []
/src/llama.cpp/ggml/src/ggml-opt.cpp ['fuzz_inference'] []
/src/llama.cpp/ggml/src/ggml-cpu/traits.h [] []
/src/llama.cpp/src/llama-adapter.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/binary-ops.cpp [] []
/src/llama.cpp/src/models/gemma-embedding.cpp [] []
/src/llama.cpp/src/models/falcon.cpp [] []
/src/llama.cpp/src/models/t5-dec.cpp [] []
/src/llama.cpp/src/models/hunyuan-dense.cpp [] []
/src/llama.cpp/src/models/falcon-h1.cpp [] []
/usr/local/bin/../include/c++/v1/__exception/exception_ptr.h [] []
/src/llama.cpp/ggml/src/../include/ggml-cpp.h [] []
/src/llama.cpp/src/models/pangu-embedded.cpp [] []
/src/llama.cpp/src/models/rwkv6qwen2.cpp [] []
/src/llama.cpp/src/models/t5-enc.cpp [] []
/src/llama.cpp/src/models/rwkv6.cpp [] []
/src/llama.cpp/src/llama-memory-hybrid.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/repack.cpp [] []
/src/llama.cpp/src/models/lfm2.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/traits.cpp [] []
/src/llama.cpp/src/models/phi2.cpp [] []
/src/llama.cpp/src/models/afmoe.cpp [] []
/src/llama.cpp/src/../include/llama-cpp.h [] []
/src/llama.cpp/ggml/src/ggml-impl.h ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/xverse.cpp [] []
/src/llama.cpp/src/models/mpt.cpp [] []
/src/llama.cpp/src/models/jamba.cpp [] []
/src/llama.cpp/src/models/olmo.cpp [] []
/src/llama.cpp/src/models/ernie4-5.cpp [] []
/src/llama.cpp/src/models/hunyuan-moe.cpp [] []
/src/llama.cpp/src/models/dbrx.cpp [] []
/src/llama.cpp/src/models/grovemoe.cpp [] []
/src/llama.cpp/src/models/rwkv7.cpp [] []
/src/llama.cpp/src/models/dream.cpp [] []
/src/llama.cpp/src/models/seed-oss.cpp [] []
/src/llama.cpp/common/common.h ['fuzz_inference'] []
/src/llama.cpp/src/models/llada-moe.cpp [] []
/src/llama.cpp/src/llama-io.cpp [] []
/src/llama.cpp/src/llama-graph.h ['fuzz_inference'] []
/src/llama.cpp/src/models/codeshell.cpp [] []
/src/llama.cpp/common/json-schema-to-grammar.h ['fuzz_json_to_grammar'] []
/src/llama.cpp/src/models/../llama-graph.h [] []
/src/llama.cpp/src/unicode.h ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] []
/src/llama.cpp/src/models/arctic.cpp [] []
/src/llama.cpp/src/models/deepseek2.cpp [] []
/src/llama.cpp/src/models/qwen3vl.cpp [] []
/src/llama.cpp/src/llama-impl.h [] []
/src/llama.cpp/src/models/grok.cpp [] []
/src/llama.cpp/src/models/arcee.cpp [] []
/src/llama.cpp/src/models/qwen2moe.cpp [] []
/src/llama.cpp/src/llama-grammar.cpp ['fuzz_grammar'] ['fuzz_grammar']
/src/llama.cpp/src/models/phi3.cpp [] []
/src/llama.cpp/src/llama-kv-cache-iswa.cpp ['fuzz_inference'] []
/src/llama.cpp/fuzzers/fuzz_load_model.cpp ['fuzz_load_model'] ['fuzz_load_model']
/src/llama.cpp/src/models/nemotron.cpp [] []
/src/llama.cpp/src/models/exaone.cpp [] []
/src/llama.cpp/src/models/olmoe.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/repack.cpp [] []
/src/llama.cpp/src/llama-model-loader.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/src/models/plm.cpp [] []
/src/llama.cpp/src/llama-memory.h ['fuzz_inference'] []
/src/llama.cpp/src/models/refact.cpp [] []
/src/llama.cpp/src/models/mamba.cpp [] []
/src/llama.cpp/src/llama-kv-cache-iswa.h [] []
/src/llama.cpp/ggml/src/ggml-cpu/vec.h ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/rwkv6-base.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/ops.cpp [] []
/src/llama.cpp/src/llama-vocab.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/models/bailingmoe.cpp [] []
/src/llama.cpp/src/models/openelm.cpp [] []
/src/llama.cpp/src/llama-graph.cpp ['fuzz_inference'] []
/src/llama.cpp/src/llama-context.cpp ['fuzz_inference'] []
/src/llama.cpp/ggml/src/ggml-cpu/vec.cpp [] []
/src/llama.cpp/src/llama-grammar.h ['fuzz_grammar'] []
/usr/local/bin/../include/c++/v1/__exception/exception.h ['fuzz_json_to_grammar'] []
/src/llama.cpp/src/models/jais.cpp [] []
/src/llama.cpp/common/../vendor/nlohmann/json.hpp [] ['fuzz_json_to_grammar']
/src/llama.cpp/src/models/apertus.cpp [] []
/src/llama.cpp/src/models/qwen3vl-moe.cpp [] []
/src/llama.cpp/src/models/gptneox.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/ggml/src/ggml.c ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference']
/src/llama.cpp/src/llama-hparams.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] []
/src/llama.cpp/src/models/granite-hybrid.cpp [] []
/src/llama.cpp/src/models/internlm2.cpp [] []
/src/llama.cpp/src/models/bitnet.cpp [] []
/src/llama.cpp/src/llama-vocab.h [] []
/src/llama.cpp/src/models/qwen2vl.cpp [] []
/src/llama.cpp/ggml/src/ggml-cpu/quants.c [] []
/src/llama.cpp/src/models/arwkv7.cpp [] []
/src/llama.cpp/src/models/glm4-moe.cpp [] []
/src/llama.cpp/src/models/llama.cpp [] []
/src/llama.cpp/src/models/orion.cpp [] []
/src/llama.cpp/src/models/qwen2.cpp [] []
/src/llama.cpp/src/models/smallthinker.cpp [] []
/src/llama.cpp/src/models/starcoder.cpp [] []
/src/llama.cpp/src/models/minicpm3.cpp [] []
/src/llama.cpp/src/llama-arch.h ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] []
/src/llama.cpp/src/models/gemma3n-iswa.cpp [] []
/src/llama.cpp/src/models/bailingmoe2.cpp [] []
/src/llama.cpp/src/models/qwen.cpp [] []
/src/llama.cpp/src/models/models.h [] []
/src/llama.cpp/src/llama-batch.h ['fuzz_inference'] []
/src/llama.cpp/src/llama-kv-cells.h ['fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/ggml/src/gguf.cpp ['fuzz_load_model', 'fuzz_structured', 'fuzz_inference'] ['fuzz_inference']
/src/llama.cpp/src/models/dots1.cpp [] []
/usr/local/bin/../include/c++/v1/string [] []
/src/llama.cpp/src/models/gemma.cpp [] []
/src/llama.cpp/src/llama-memory-recurrent.h [] []
/src/llama.cpp/src/models/olmo2.cpp [] []
/src/llama.cpp/src/models/chatglm.cpp [] []
/src/llama.cpp/src/models/gemma2-iswa.cpp [] []
/src/llama.cpp/src/models/starcoder2.cpp [] []
/src/llama.cpp/src/models/rwkv7-base.cpp [] []
/src/llama.cpp/src/models/graph-context-mamba.cpp [] []
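One way to act on the table above is to split it into first-party and vendored code, and to flag files that are statically reachable by some fuzzer but never hit at runtime. A minimal sketch over a few rows copied from the table; the prefix list is an assumption about what counts as third-party in this project:

```python
# (path, reached_by, covered_by) triples copied from the files table.
rows = [
    ("/src/llama.cpp/vendor/nlohmann/json.hpp",
     ["fuzz_json_to_grammar"], ["fuzz_json_to_grammar"]),
    ("/usr/local/bin/../include/c++/v1/stdexcept",
     ["fuzz_load_model", "fuzz_structured", "fuzz_inference"], []),
    ("/src/llama.cpp/src/llama-kv-cache.cpp",
     ["fuzz_inference"], []),
    ("/src/llama.cpp/src/llama-chat.cpp",
     ["fuzz_apply_template"], ["fuzz_apply_template"]),
]

# Assumed third-party locations: vendored headers and the toolchain's libc++.
THIRD_PARTY_PREFIXES = ("/src/llama.cpp/vendor/", "/usr/local/bin/")

third_party = [p for p, _, _ in rows if p.startswith(THIRD_PARTY_PREFIXES)]

# Reachable but never covered at runtime: candidates for better seed
# corpora, or for exclusion via a config file if they are out of scope.
reachable_uncovered = [p for p, reached, covered in rows
                       if reached and not covered]

assert "/src/llama.cpp/vendor/nlohmann/json.hpp" in third_party
assert "/src/llama.cpp/src/llama-kv-cache.cpp" in reachable_uncovered
```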

Directories in report

Directory
/src/llama.cpp/fuzzers/
/src//src/models/
/src/llama.cpp/src/../include/
/src//src/
/src/llama.cpp/vendor/nlohmann/
/src/llama.cpp/ggml/src/ggml-cpu/arch/x86/
/src/llama.cpp/common/
/usr/local/bin/../include/c++/v1/
/src/llama.cpp/ggml/src/
/src/llama.cpp/ggml/src/ggml-cpu/
/src/llama.cpp/src/
/src/llama.cpp/ggml/src/ggml-cpu/llamafile/
/usr/local/bin/../include/c++/v1/__exception/
/src/llama.cpp/ggml/src/./
/src/llama.cpp/common/../vendor/nlohmann/
/src/llama.cpp/ggml/src/../include/
/src/llama.cpp/src/models/
/src/llama.cpp/src/models/../